pg_upgrade and logical replication

Started by Julien Rouhaud · about 3 years ago · 221 messages
#1 Julien Rouhaud
rjuju123@gmail.com

Hi,

I was working on testing a major upgrade scenario using a mix of physical and
logical replication when I faced some unexpected problem leading to missing
rows. Note that my motivation is to rely on physical replication / physical
backup to avoid recreating a node from scratch using logical replication, as
the initial sync with logical replication is much more costly and disruptive
compared to pg_basebackup / restoring a physical backup, but the same problem
exists if you just pg_upgrade a node that has subscriptions.

The problem is that pg_upgrade creates the subscriptions on the newly upgraded
node using "WITH (connect = false)", which seems expected as you obviously
don't want to try to connect to the publisher at that point. But then once the
newly upgraded node is restarted and ready to replace the previous one, unless
I'm missing something there's absolutely no possibility to use the created
subscriptions without losing some data from the publisher.

The reason is that the subscription doesn't have a local list of relations to
process until you refresh the subscription, but you can't refresh the
subscription without enabling it (and you can't enable it in a transaction),
which means that you have to let the logical worker start, consume and ignore
all changes that happened on the publisher side until the refresh happens.
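To illustrate (a sketch; `mysub` is a placeholder subscription name), a refresh is rejected while the subscription is disabled, which is exactly the state pg_upgrade leaves it in:

```sql
-- a subscription restored with "WITH (connect = false)" starts out disabled,
-- and refreshing a disabled subscription fails with an error along the lines of:
ALTER SUBSCRIPTION mysub REFRESH PUBLICATION WITH (copy_data = false);
-- ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
```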

An easy workaround that I tried is to allow something like

ALTER SUBSCRIPTION ... ENABLE WITH (refresh = true, copy_data = false)

so that the refresh internally happens before the apply worker is started and
you just keep consuming the delta, which works in a naive scenario.

One concern I have with this approach is that the default values for both
"refresh" and "copy_data" for all other subcommands is "true, but we would
probably need a different default value in that exact scenario (as we know we
already have the data). I think that it would otherwise be safe in my very
specific scenario, assuming that you created the slot beforehand and moved the
slot's LSN at the promotion point, as even if you add non-empty tables to the
publication you will only need the delta whether those were initially empty or
not given your initial physical replica state. Any other scenario would make
this new option dangerous, if not entirely useless, but not more than any of
the current commands that lead to refreshing a subscription and have the same
options I guess.

All in all, currently the only way to somewhat safely resume logical
replication after a pg_upgrade is to drop all the subscriptions that were
transferred during pg_upgrade on all databases and recreate them (using the
existing slots on the publisher side obviously), allowing the initial
connection. But this approach only works in the exact scenario I mentioned
(physical to logical replication, or at least a case where *all* the tables
were logically replicated prior to the pg_upgrade); otherwise you have to
recreate the follower node from scratch using logical replication.
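For reference, that drop-and-recreate sequence looks something like this (a sketch; the subscription, slot and publication names and the connection string are placeholders):

```sql
-- on the upgraded subscriber: detach the slot first so that DROP SUBSCRIPTION
-- doesn't remove the existing slot on the publisher
ALTER SUBSCRIPTION mysub DISABLE;
ALTER SUBSCRIPTION mysub SET (slot_name = NONE);
DROP SUBSCRIPTION mysub;

-- recreate it, reusing the existing slot and skipping the initial table copy
CREATE SUBSCRIPTION mysub
    CONNECTION 'host=publisher dbname=mydb'
    PUBLICATION mypub
    WITH (create_slot = false, slot_name = 'mysub', copy_data = false);
```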

Is that indeed the current behavior, or did I miss something?

Is this "resume logical replication on pg_upgraded node" something we want to
support better? I was thinking that we could add a new pg_dump mode (maybe
only usable during pg_upgrade) that also restores the pg_subscription_rel
content in each subscription or something like that. If not, should pg_upgrade
keep preserving the subscriptions as it doesn't seem safe to use them, or at
least document the hazards (I didn't find anything about it in the
documentation)?

#2 Amit Kapila
amit.kapila16@gmail.com
In reply to: Julien Rouhaud (#1)
Re: pg_upgrade and logical replication

On Fri, Feb 17, 2023 at 1:24 PM Julien Rouhaud <rjuju123@gmail.com> wrote:

I was working on testing a major upgrade scenario using a mix of physical and
logical replication when I faced some unexpected problem leading to missing
rows. Note that my motivation is to rely on physical replication / physical
backup to avoid recreating a node from scratch using logical replication, as
the initial sync with logical replication is much more costly and disruptive
compared to pg_basebackup / restoring a physical backup, but the same problem
exists if you just pg_upgrade a node that has subscriptions.

The problem is that pg_upgrade creates the subscriptions on the newly upgraded
node using "WITH (connect = false)", which seems expected as you obviously
don't want to try to connect to the publisher at that point. But then once the
newly upgraded node is restarted and ready to replace the previous one, unless
I'm missing something there's absolutely no possibility to use the created
subscriptions without losing some data from the publisher.

The reason is that the subscription doesn't have a local list of relations to
process until you refresh the subscription, but you can't refresh the
subscription without enabling it (and you can't enable it in a transaction),
which means that you have to let the logical worker start, consume and ignore
all changes that happened on the publisher side until the refresh happens.

An easy workaround that I tried is to allow something like

ALTER SUBSCRIPTION ... ENABLE WITH (refresh = true, copy_data = false)

so that the refresh internally happens before the apply worker is started and
you just keep consuming the delta, which works in a naive scenario.

One concern I have with this approach is that the default values for both
"refresh" and "copy_data" for all other subcommands is "true, but we would
probably need a different default value in that exact scenario (as we know we
already have the data). I think that it would otherwise be safe in my very
specific scenario, assuming that you created the slot beforehand and moved the
slot's LSN at the promotion point, as even if you add non-empty tables to the
publication you will only need the delta whether those were initially empty or
not given your initial physical replica state.

This point is not very clear. Why would one just need delta even for new tables?

Any other scenario would make
this new option dangerous, if not entirely useless, but not more than any of
the current commands that lead to refreshing a subscription and have the same
options I guess.

All in all, currently the only way to somewhat safely resume logical
replication after a pg_upgrade is to drop all the subscriptions that were
transferred during pg_upgrade on all databases and recreate them (using the
existing slots on the publisher side obviously), allowing the initial
connection. But this approach only works in the exact scenario I mentioned
(physical to logical replication, or at least a case where *all* the tables
were logically replicated prior to the pg_upgrade); otherwise you have to
recreate the follower node from scratch using logical replication.

I think if you dropped and recreated the subscriptions by retaining
old slots, the replication should resume from where it left off before
the upgrade. Which scenario are you concerned about?

Is that indeed the current behavior, or did I miss something?

Is this "resume logical replication on pg_upgraded node" something we want to
support better? I was thinking that we could add a new pg_dump mode (maybe
only usable during pg_upgrade) that also restores the pg_subscription_rel
content in each subscription or something like that. If not, should pg_upgrade
keep preserving the subscriptions as it doesn't seem safe to use them, or at
least document the hazards (I didn't find anything about it in the
documentation)?

There is a mention of this in pg_dump docs. See [1] (When dumping
logical replication subscriptions ...)

[1]: https://www.postgresql.org/docs/devel/app-pgdump.html

--
With Regards,
Amit Kapila.

#3 Julien Rouhaud
rjuju123@gmail.com
In reply to: Amit Kapila (#2)
Re: pg_upgrade and logical replication

Hi,

On Fri, Feb 17, 2023 at 04:12:54PM +0530, Amit Kapila wrote:

On Fri, Feb 17, 2023 at 1:24 PM Julien Rouhaud <rjuju123@gmail.com> wrote:

An easy workaround that I tried is to allow something like

ALTER SUBSCRIPTION ... ENABLE WITH (refresh = true, copy_data = false)

so that the refresh internally happens before the apply worker is started and
you just keep consuming the delta, which works in a naive scenario.

One concern I have with this approach is that the default values for both
"refresh" and "copy_data" for all other subcommands is "true, but we would
probably need a different default value in that exact scenario (as we know we
already have the data). I think that it would otherwise be safe in my very
specific scenario, assuming that you created the slot beforehand and moved the
slot's LSN at the promotion point, as even if you add non-empty tables to the
publication you will only need the delta whether those were initially empty or
not given your initial physical replica state.

This point is not very clear. Why would one just need delta even for new tables?

Because in my scenario I'm coming from physical replication, so I know that I
did replicate everything until the promotion LSN. Any table later added in the
publication is either already fully replicated until that LSN on the upgraded
node, so only the delta is needed, or has been created after that LSN. In the
latter case, the entirety of the table will be replicated by logical
replication as a delta, right?

Any other scenario would make
this new option dangerous, if not entirely useless, but not more than any of
the current commands that lead to refreshing a subscription and have the same
options I guess.

All in all, currently the only way to somewhat safely resume logical
replication after a pg_upgrade is to drop all the subscriptions that were
transferred during pg_upgrade on all databases and recreate them (using the
existing slots on the publisher side obviously), allowing the initial
connection. But this approach only works in the exact scenario I mentioned
(physical to logical replication, or at least a case where *all* the tables
were logically replicated prior to the pg_upgrade); otherwise you have to
recreate the follower node from scratch using logical replication.

I think if you dropped and recreated the subscriptions by retaining
old slots, the replication should resume from where it left off before
the upgrade. Which scenario are you concerned about?

I'm concerned about people not coming from physical replication. If you just
had some "normal" logical replication, you can't assume that you already have
all the data from the upstream subscription. If it was modified and a
non-empty table is added, you might need to copy the data for some of the
tables and keep replicating for the rest. It's hard to be sure from a user's
point of view, and even if you knew, you'd have no way to express it.

Is that indeed the current behavior, or did I miss something?

Is this "resume logical replication on pg_upgraded node" something we want to
support better? I was thinking that we could add a new pg_dump mode (maybe
only usable during pg_upgrade) that also restores the pg_subscription_rel
content in each subscription or something like that. If not, should pg_upgrade
keep preserving the subscriptions as it doesn't seem safe to use them, or at
least document the hazards (I didn't find anything about it in the
documentation)?

There is a mention of this in pg_dump docs. See [1] (When dumping
logical replication subscriptions ...)

Indeed, but it's barely saying "It is then up to the user to reactivate the
subscriptions in a suitable way" and "It might also be appropriate to truncate
the target tables before initiating a new full table copy". As I mentioned, I
don't think there's a suitable way to reactivate the subscription, at least if
you don't want to miss some records, so truncating all target tables is the
only fully safe way to proceed. It seems quite silly to have to do so just
because pg_upgrade doesn't retain the list of relations per subscription.

#4 Amit Kapila
amit.kapila16@gmail.com
In reply to: Julien Rouhaud (#3)
Re: pg_upgrade and logical replication

On Fri, Feb 17, 2023 at 9:05 PM Julien Rouhaud <rjuju123@gmail.com> wrote:

On Fri, Feb 17, 2023 at 04:12:54PM +0530, Amit Kapila wrote:

On Fri, Feb 17, 2023 at 1:24 PM Julien Rouhaud <rjuju123@gmail.com> wrote:

An easy workaround that I tried is to allow something like

ALTER SUBSCRIPTION ... ENABLE WITH (refresh = true, copy_data = false)

so that the refresh internally happens before the apply worker is started and
you just keep consuming the delta, which works in a naive scenario.

One concern I have with this approach is that the default values for both
"refresh" and "copy_data" for all other subcommands is "true, but we would
probably need a different default value in that exact scenario (as we know we
already have the data). I think that it would otherwise be safe in my very
specific scenario, assuming that you created the slot beforehand and moved the
slot's LSN at the promotion point, as even if you add non-empty tables to the
publication you will only need the delta whether those were initially empty or
not given your initial physical replica state.

This point is not very clear. Why would one just need delta even for new tables?

Because in my scenario I'm coming from physical replication, so I know that I
did replicate everything until the promotion LSN. Any table later added in the
publication is either already fully replicated until that LSN on the upgraded
node, so only the delta is needed, or has been created after that LSN. In the
latter case, the entirety of the table will be replicated by logical
replication as a delta, right?

That makes sense to me.

Any other scenario would make
this new option dangerous, if not entirely useless, but not more than any of
the current commands that lead to refreshing a subscription and have the same
options I guess.

All in all, currently the only way to somewhat safely resume logical
replication after a pg_upgrade is to drop all the subscriptions that were
transferred during pg_upgrade on all databases and recreate them (using the
existing slots on the publisher side obviously), allowing the initial
connection. But this approach only works in the exact scenario I mentioned
(physical to logical replication, or at least a case where *all* the tables
were logically replicated prior to the pg_upgrade); otherwise you have to
recreate the follower node from scratch using logical replication.

I think if you dropped and recreated the subscriptions by retaining
old slots, the replication should resume from where it left off before
the upgrade. Which scenario are you concerned about?

I'm concerned about people not coming from physical replication. If you just
had some "normal" logical replication, you can't assume that you already have
all the data from the upstream subscription. If it was modified and a
non-empty table is added, you might need to copy the data for some of the
tables and keep replicating for the rest. It's hard to be sure from a user's
point of view, and even if you knew, you'd have no way to express it.

Can't the user create a separate publication for such newly added
tables and a corresponding new subscription on the downstream node?
Now, I think it would be a bit tricky if the user already has a
publication defined with FOR ALL TABLES. In that case, we probably
need some way to specify FOR ALL TABLES EXCEPT (list of tables) which
we currently don't have.

Is that indeed the current behavior, or did I miss something?

Is this "resume logical replication on pg_upgraded node" something we want to
support better? I was thinking that we could add a new pg_dump mode (maybe
only usable during pg_upgrade) that also restores the pg_subscription_rel
content in each subscription or something like that. If not, should pg_upgrade
keep preserving the subscriptions as it doesn't seem safe to use them, or at
least document the hazards (I didn't find anything about it in the
documentation)?

There is a mention of this in pg_dump docs. See [1] (When dumping
logical replication subscriptions ...)

Indeed, but it's barely saying "It is then up to the user to reactivate the
subscriptions in a suitable way" and "It might also be appropriate to truncate
the target tables before initiating a new full table copy". As I mentioned, I
don't think there's a suitable way to reactivate the subscription, at least if
you don't want to miss some records, so truncating all target tables is the
only fully safe way to proceed. It seems quite silly to have to do so just
because pg_upgrade doesn't retain the list of relations per subscription.

I also don't know if there is any other safe way for newly added
tables apart from the above suggestion to create separate publications
but that can work only in specific cases.

--
With Regards,
Amit Kapila.

#5 Julien Rouhaud
rjuju123@gmail.com
In reply to: Amit Kapila (#4)
Re: pg_upgrade and logical replication

On Sat, Feb 18, 2023 at 09:31:30AM +0530, Amit Kapila wrote:

On Fri, Feb 17, 2023 at 9:05 PM Julien Rouhaud <rjuju123@gmail.com> wrote:

I'm concerned about people not coming from physical replication. If you just
had some "normal" logical replication, you can't assume that you already have
all the data from the upstream subscription. If it was modified and a non
empty table is added, you might need to copy the data of part of the tables and
keep replicating for the rest. It's hard to be sure from a user point of view,
and even if you knew you have no way to express it.

Can't the user create a separate publication for such newly added
tables and a corresponding new subscription on the downstream node?

Yes, that seems like a safe way to go, but it relies on users being very careful
if they don't want to end up with a corrupted logical standby, and I think it's
impossible to run any check to make sure that the subscription is adequate?

Now, I think it would be a bit tricky if the user already has a
publication defined with FOR ALL TABLES. In that case, we probably
need some way to specify FOR ALL TABLES EXCEPT (list of tables) which
we currently don't have.

Yes, and note that I rely on FOR ALL TABLES for my original physical to logical
use case.

Indeed, but it's barely saying "It is then up to the user to reactivate the
subscriptions in a suitable way" and "It might also be appropriate to truncate
the target tables before initiating a new full table copy". As I mentioned, I
don't think there's a suitable way to reactivate the subscription, at least if
you don't want to miss some records, so truncating all target tables is the
only fully safe way to proceed. It seems quite silly to have to do so just
because pg_upgrade doesn't retain the list of relations per subscription.

I also don't know if there is any other safe way for newly added
tables apart from the above suggestion to create separate publications
but that can work only in specific cases.

I might be missing something, but what could go wrong if pg_upgrade could emit
a bunch of commands like:

ALTER SUBSCRIPTION subname ADD RELATION relid STATE 'x' LSN 'X/Y';

pg_upgrade already preserves the relation's oid, so we could restore the
exact original state and then enabling the subscription would just work?

We could restrict this form to --binary only so we don't provide a way for
users to mess up the data.

#6 Amit Kapila
amit.kapila16@gmail.com
In reply to: Julien Rouhaud (#5)
Re: pg_upgrade and logical replication

On Sat, Feb 18, 2023 at 11:21 AM Julien Rouhaud <rjuju123@gmail.com> wrote:

On Sat, Feb 18, 2023 at 09:31:30AM +0530, Amit Kapila wrote:

On Fri, Feb 17, 2023 at 9:05 PM Julien Rouhaud <rjuju123@gmail.com> wrote:

I'm concerned about people not coming from physical replication. If you just
had some "normal" logical replication, you can't assume that you already have
all the data from the upstream subscription. If it was modified and a
non-empty table is added, you might need to copy the data for some of the
tables and keep replicating for the rest. It's hard to be sure from a user's
point of view, and even if you knew, you'd have no way to express it.

Can't the user create a separate publication for such newly added
tables and a corresponding new subscription on the downstream node?

Yes, that seems like a safe way to go, but it relies on users being very careful
if they don't want to end up with a corrupted logical standby, and I think it's
impossible to run any check to make sure that the subscription is adequate?

I can't think of any straightforward way, but one can probably take a
dump of the data on both nodes using pg_dump and then compare them.

Now, I think it would be a bit tricky if the user already has a
publication defined with FOR ALL TABLES. In that case, we probably
need some way to specify FOR ALL TABLES EXCEPT (list of tables) which
we currently don't have.

Yes, and note that I rely on FOR ALL TABLES for my original physical to logical
use case.

Okay, but if we would have functionality like EXCEPT (list of tables),
one could do ALTER PUBLICATION .. before doing REFRESH on the
subscriber-side.

Indeed, but it's barely saying "It is then up to the user to reactivate the
subscriptions in a suitable way" and "It might also be appropriate to truncate
the target tables before initiating a new full table copy". As I mentioned, I
don't think there's a suitable way to reactivate the subscription, at least if
you don't want to miss some records, so truncating all target tables is the
only fully safe way to proceed. It seems quite silly to have to do so just
because pg_upgrade doesn't retain the list of relations per subscription.

I also don't know if there is any other safe way for newly added
tables apart from the above suggestion to create separate publications
but that can work only in specific cases.

I might be missing something, but what could go wrong if pg_upgrade could emit
a bunch of commands like:

ALTER SUBSCRIPTION subname ADD RELATION relid STATE 'x' LSN 'X/Y';

How will we know the STATE and LSN of each relation? But even if we know
that, what is the guarantee that the publisher side has still retained the
corresponding slots?

--
With Regards,
Amit Kapila.

#7 Julien Rouhaud
rjuju123@gmail.com
In reply to: Amit Kapila (#6)
Re: pg_upgrade and logical replication

On Sat, Feb 18, 2023 at 04:12:52PM +0530, Amit Kapila wrote:

On Sat, Feb 18, 2023 at 11:21 AM Julien Rouhaud <rjuju123@gmail.com> wrote:

Now, I think it would be a bit tricky if the user already has a
publication defined with FOR ALL TABLES. In that case, we probably
need some way to specify FOR ALL TABLES EXCEPT (list of tables) which
we currently don't have.

Yes, and note that I rely on FOR ALL TABLES for my original physical to logical
use case.

Okay, but if we would have functionality like EXCEPT (list of tables),
one could do ALTER PUBLICATION .. before doing REFRESH on the
subscriber-side.

Honestly I'm not a huge fan of this approach. It feels hacky to have such a
feature, and doesn't even solve the problem on its own as you still lose
records when reactivating the subscription unless you also provide an ALTER
SUBSCRIPTION ENABLE WITH (refresh = true, copy_data = false), which will
probably require different defaults than the rest of the ALTER SUBSCRIPTION
subcommands that handle a refresh.

Indeed, but it's barely saying "It is then up to the user to reactivate the
subscriptions in a suitable way" and "It might also be appropriate to truncate
the target tables before initiating a new full table copy". As I mentioned, I
don't think there's a suitable way to reactivate the subscription, at least if
you don't want to miss some records, so truncating all target tables is the
only fully safe way to proceed. It seems quite silly to have to do so just
because pg_upgrade doesn't retain the list of relations per subscription.

I also don't know if there is any other safe way for newly added
tables apart from the above suggestion to create separate publications
but that can work only in specific cases.

I might be missing something, but what could go wrong if pg_upgrade could emit
a bunch of commands like:

ALTER SUBSCRIPTION subname ADD RELATION relid STATE 'x' LSN 'X/Y';

How will we know the STATE and LSN of each relation?

In the pg_subscription_rel catalog of the upgraded server? I didn't look in
detail at how the information is updated, but I'm assuming that if logical
replication survives a database restart it shouldn't be a problem to also
fully dump it during pg_upgrade.
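For what it's worth, everything that would need to be preserved is visible in that catalog (a sketch query; srsubstate values include 'i' for init, 'd' for data copy, 's' for synchronized and 'r' for ready):

```sql
-- list each subscribed relation with its sync state and LSN
SELECT s.subname,
       sr.srrelid::regclass AS relation,
       sr.srsubstate        AS state,
       sr.srsublsn          AS lsn
FROM pg_subscription_rel sr
JOIN pg_subscription s ON s.oid = sr.srsubid
ORDER BY 1, 2;
```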

But even if we know that, what is the guarantee that the publisher side has
still retained the corresponding slots?

No guarantee, but if you're just doing a pg_upgrade of a logical replica why
would you drop the replication slot? In any case the warning you mentioned in
pg_dump documentation would still apply and you would have to reenable it as
needed, the only difference is that you would actually be able to keep your
logical replication after a pg_upgrade if you need. If you dropped the
replication slot on the publisher side, then simply remove the publications on
the upgraded node too, or create a new one, exactly as you would do with the
current pg_upgrade workflow.

#8 Amit Kapila
amit.kapila16@gmail.com
In reply to: Julien Rouhaud (#7)
Re: pg_upgrade and logical replication

On Sun, Feb 19, 2023 at 5:31 AM Julien Rouhaud <rjuju123@gmail.com> wrote:

On Sat, Feb 18, 2023 at 04:12:52PM +0530, Amit Kapila wrote:

I also don't know if there is any other safe way for newly added
tables apart from the above suggestion to create separate publications
but that can work only in specific cases.

I might be missing something, but what could go wrong if pg_upgrade could emit
a bunch of commands like:

ALTER SUBSCRIPTION subname ADD RELATION relid STATE 'x' LSN 'X/Y';

How will we know the STATE and LSN of each relation?

In the pg_subscription_rel catalog of the upgraded server? I didn't look in
detail at how the information is updated, but I'm assuming that if logical
replication survives a database restart it shouldn't be a problem to also
fully dump it during pg_upgrade.

But even if we know that, what is the guarantee that the publisher side has
still retained the corresponding slots?

No guarantee, but if you're just doing a pg_upgrade of a logical replica why
would you drop the replication slot? In any case the warning you mentioned in
pg_dump documentation would still apply and you would have to reenable it as
needed, the only difference is that you would actually be able to keep your
logical replication after a pg_upgrade if you need. If you dropped the
replication slot on the publisher side, then simply remove the publications on
the upgraded node too, or create a new one, exactly as you would do with the
current pg_upgrade workflow.

I think the current mechanism tries to provide more flexibility to the
users. OTOH, in some of the cases where users don't want to change
anything in the logical replication (both upstream and downstream
function as it is) after the upgrade then they need to do more work. I
think ideally there should be some option in pg_dump that allows us to
dump the contents of pg_subscription_rel as well, so that is easier
for users to continue replication after the upgrade. We can then use
it for binary-upgrade mode as well.

--
With Regards,
Amit Kapila.

#9 Julien Rouhaud
rjuju123@gmail.com
In reply to: Amit Kapila (#8)
Re: pg_upgrade and logical replication

On Mon, Feb 20, 2023 at 11:07:42AM +0530, Amit Kapila wrote:

On Sun, Feb 19, 2023 at 5:31 AM Julien Rouhaud <rjuju123@gmail.com> wrote:

I might be missing something, but what could go wrong if pg_upgrade could emit
a bunch of commands like:

ALTER SUBSCRIPTION subname ADD RELATION relid STATE 'x' LSN 'X/Y';

How will we know the STATE and LSN of each relation?

In the pg_subscription_rel catalog of the upgraded server? I didn't look in
detail at how the information is updated, but I'm assuming that if logical
replication survives a database restart it shouldn't be a problem to also
fully dump it during pg_upgrade.

But even if we know that, what is the guarantee that the publisher side has
still retained the corresponding slots?

No guarantee, but if you're just doing a pg_upgrade of a logical replica why
would you drop the replication slot? In any case the warning you mentioned in
pg_dump documentation would still apply and you would have to reenable it as
needed, the only difference is that you would actually be able to keep your
logical replication after a pg_upgrade if you need. If you dropped the
replication slot on the publisher side, then simply remove the publications on
the upgraded node too, or create a new one, exactly as you would do with the
current pg_upgrade workflow.

I think the current mechanism tries to provide more flexibility to the
users. OTOH, in some of the cases where users don't want to change
anything in the logical replication (both upstream and downstream
function as it is) after the upgrade then they need to do more work. I
think ideally there should be some option in pg_dump that allows us to
dump the contents of pg_subscription_rel as well, so that is easier
for users to continue replication after the upgrade. We can then use
it for binary-upgrade mode as well.

Is there really a use case for dumping the content of pg_subscription_rel
outside of pg_upgrade? I'm not particularly worried about the publisher going
away or changing while pg_upgrade is running, but for a normal pg_dump /
pg_restore I don't really see how anyone would actually want to resume logical
replication from a pg_dump, especially since it's almost guaranteed that the
node will already have consumed data from the publication that won't be in the
dump in the first place.

Are you ok with the suggested syntax above (probably with extra parens to avoid
adding new keywords), or do you have some better suggestion? I'm a bit worried
about adding some O(n) commands, as they can add some noticeable slow-down
when pg_upgrade-ing a logical replica, but I don't really see how to avoid that. Note
that if we make this option available to end-users, we will have to use the
relation name rather than its oid, which will make this option even more
expensive when restoring due to the extra lookups.

For the pg_upgrade use-case, do you see any reason to not restore the
pg_subscription_rel by default? Maybe having an option to not restore it would
make sense if it indeed add noticeable overhead when publications have a lot of
tables?

#10 Julien Rouhaud
rjuju123@gmail.com
In reply to: Julien Rouhaud (#9)
Re: pg_upgrade and logical replication

On Mon, Feb 20, 2023 at 03:07:37PM +0800, Julien Rouhaud wrote:

On Mon, Feb 20, 2023 at 11:07:42AM +0530, Amit Kapila wrote:

I think the current mechanism tries to provide more flexibility to the
users. OTOH, in some of the cases where users don't want to change
anything in the logical replication (both upstream and downstream
function as it is) after the upgrade then they need to do more work. I
think ideally there should be some option in pg_dump that allows us to
dump the contents of pg_subscription_rel as well, so that is easier
for users to continue replication after the upgrade. We can then use
it for binary-upgrade mode as well.

Is there really a use case for dumping the content of pg_subscription_rel
outside of pg_upgrade? I'm not particularly worried about the publisher going
away or changing while pg_upgrade is running, but for a normal pg_dump /
pg_restore I don't really see how anyone would actually want to resume logical
replication from a pg_dump, especially since it's almost guaranteed that the
node will already have consumed data from the publication that won't be in the
dump in the first place.

Are you ok with the suggested syntax above (probably with extra parens to avoid
adding new keywords), or do you have some better suggestion? I'm a bit worried
about adding some O(n) commands, as it can add some noticeable slow-down for
pg_upgrade-ing a logical replica, but I don't really see how to avoid that. Note
that if we make this option available to end-users, we will have to use the
relation name rather than its oid, which will make this option even more
expensive when restoring due to the extra lookups.

For the pg_upgrade use-case, do you see any reason to not restore the
pg_subscription_rel by default? Maybe having an option to not restore it would
make sense if it indeed adds noticeable overhead when publications have a lot of
tables?

Since I didn't hear any objection I worked on a POC patch with this approach.

For now when pg_dump is invoked with --binary-upgrade, it will always emit extra
commands to restore the relation list. This command is only allowed when the
server is started in binary upgrade mode.

The new command is of the form

ALTER SUBSCRIPTION name ADD TABLE (relid = X, state = 'Y', lsn = 'Z/Z')

with the lsn part being optional. I'm not sure if there should be some new
regression test for that, as it would be a bit costly. Note that pg_upgrade of
a logical replica isn't covered by any regression test that I could find.

I did test it manually though, and it fixes my original problem, allowing me to
safely resume logical replication by just re-enabling it. I didn't do any
benchmarking to see how much overhead it adds.
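For reference, the relation list that the patch preserves can be inspected on the subscriber with a query against the stock pg_subscription / pg_subscription_rel catalogs, e.g.:

```sql
-- List the relations known to each subscription, with their sync state
-- (srsubstate: 'i' = init, 'd' = data copy, 's' = synchronized,
-- 'r' = ready) and per-relation LSN.
SELECT s.subname,
       sr.srrelid::regclass AS relation,
       sr.srsubstate,
       sr.srsublsn
FROM pg_subscription_rel sr
JOIN pg_subscription s ON s.oid = sr.srsubid
ORDER BY s.subname, relation;
```

On a node upgraded without the patch this returns no rows, which is what makes a later refresh unsafe.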

Attachments:

v1-0001-POC-Preserve-the-subscription-relations-during-pg.patch
#11Amit Kapila
amit.kapila16@gmail.com
In reply to: Julien Rouhaud (#10)
Re: pg_upgrade and logical replication

On Wed, Feb 22, 2023 at 12:13 PM Julien Rouhaud <rjuju123@gmail.com> wrote:

On Mon, Feb 20, 2023 at 03:07:37PM +0800, Julien Rouhaud wrote:

On Mon, Feb 20, 2023 at 11:07:42AM +0530, Amit Kapila wrote:

I think the current mechanism tries to provide more flexibility to the
users. OTOH, in some of the cases where users don't want to change
anything in the logical replication (both upstream and downstream
function as it is) after the upgrade then they need to do more work. I
think ideally there should be some option in pg_dump that allows us to
dump the contents of pg_subscription_rel as well, so that is easier
for users to continue replication after the upgrade. We can then use
it for binary-upgrade mode as well.

Is there really a use case for dumping the content of pg_subscription_rel
outside of pg_upgrade?

I think the users who want to take a dump and restore the entire
cluster may need it there for the same reason as pg_upgrade needs it.
TBH, I have not seen such a request but this is what I imagine one
would expect if we provide this functionality via pg_upgrade.

I'm not particularly worried about the publisher going
away or changing while pg_upgrade is running, but for a normal pg_dump /
pg_restore I don't really see how anyone would actually want to resume logical
replication from a pg_dump, especially since it's almost guaranteed that the
node will already have consumed data from the publication that won't be in the
dump in the first place.

Are you ok with the suggested syntax above (probably with extra parens to avoid
adding new keywords), or do you have some better suggestion? I'm a bit worried
about adding some O(n) commands, as it can add some noticeable slow-down for
pg_upgrade-ing a logical replica, but I don't really see how to avoid that. Note
that if we make this option available to end-users, we will have to use the
relation name rather than its oid, which will make this option even more
expensive when restoring due to the extra lookups.

For the pg_upgrade use-case, do you see any reason to not restore the
pg_subscription_rel by default?

As I said earlier, one can very well say that giving more flexibility
(in terms of where the publications will be) after a restore is a
better idea. Also, we have been doing the same till now without any
major complaints, so it makes sense to keep the current behavior as
the default.

Maybe having an option to not restore it would
make sense if it indeed adds noticeable overhead when publications have a lot of
tables?

Yeah, that could be another reason to not do it by default.

Since I didn't hear any objection I worked on a POC patch with this approach.

For now when pg_dump is invoked with --binary-upgrade, it will always emit extra
commands to restore the relation list. This command is only allowed when the
server is started in binary upgrade mode.

The new command is of the form

ALTER SUBSCRIPTION name ADD TABLE (relid = X, state = 'Y', lsn = 'Z/Z')

with the lsn part being optional.

BTW, do we restore the origin and its LSN after the upgrade? Because
without that this won't be sufficient as that is required for apply
worker to ensure that it is in sync with table sync workers.

--
With Regards,
Amit Kapila.

#12Julien Rouhaud
rjuju123@gmail.com
In reply to: Amit Kapila (#11)
Re: pg_upgrade and logical replication

On Sat, Feb 25, 2023 at 11:24:17AM +0530, Amit Kapila wrote:

On Wed, Feb 22, 2023 at 12:13 PM Julien Rouhaud <rjuju123@gmail.com> wrote:

Is there really a use case for dumping the content of pg_subscription_rel
outside of pg_upgrade?

I think the users who want to take a dump and restore the entire
cluster may need it there for the same reason as pg_upgrade needs it.
TBH, I have not seen such a request but this is what I imagine one
would expect if we provide this functionality via pg_upgrade.

But the pg_subscription_rel data are only needed if you want to resume logical
replication from the exact previous state, otherwise you can always refresh the
subscription and it will retrieve the list of relations automatically (dealing
with initial sync and so on). It's hard to see how that could happen with
a plain pg_dump.

The only usable scenario I can see would be to disable all subscriptions on the
logical replica, maybe make sure that no one does any writes to those tables if you
want to eventually switch over on the restored node, do a pg_dump(all), restore
it and then resume the logical replication / subscription(s) on the restored
server. That's a lot of constraints for something that pg_upgrade deals with
so much more efficiently. Maybe one plausible use case would be to split a
single logical replica into N servers, one per database / publication or
something like that. In that case pg_upgrade won't be that useful and if each
target subset is small enough a pg_dump/pg_restore may be a viable option. But
if that's a viable option then surely creating the logical replica from scratch
using normal logical table sync should be an even better option.

I'm really worried that it's going to be a giant foot-gun that any user should
really avoid.

For the pg_upgrade use-case, do you see any reason to not restore the
pg_subscription_rel by default?

As I said earlier, one can very well say that giving more flexibility
(in terms of where the publications will be) after a restore is a
better idea. Also, we have been doing the same till now without any
major complaints, so it makes sense to keep the current behavior as
the default.

I'm a bit dubious that anyone actually tried to run pg_upgrade on a logical
replica and then kept using logical replication, as it's currently impossible
to safely resume replication without truncating all target relations.

As I mentioned before, if we keep the current behavior as a default there
should be an explicit warning in the documentation stating that you need to
truncate all target relations before resuming logical replication as otherwise
you are guaranteed to lose data.
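To make that warning concrete, resuming safely with the current (pre-patch) behavior would look roughly like the following on the upgraded node (subscription and table names are placeholders, and TRUNCATE must cover every replicated table so the full re-copy doesn't produce duplicates):

```sql
-- pg_subscription_rel is empty after pg_upgrade, so the refresh below
-- re-syncs every table from scratch; truncate the targets first.
TRUNCATE TABLE t1, t2;   -- every target relation of the subscription
ALTER SUBSCRIPTION mysub ENABLE;
ALTER SUBSCRIPTION mysub REFRESH PUBLICATION WITH (copy_data = true);
```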

Maybe having an option to not restore it would
make sense if it indeed add noticeable overhead when publications have a lot of
tables?

Yeah, that could be another reason to not do it by default.

I will do some benchmarks with various numbers of relations, from high to
unreasonable.

Since I didn't hear any objection I worked on a POC patch with this approach.

For now when pg_dump is invoked with --binary-upgrade, it will always emit extra
commands to restore the relation list. This command is only allowed when the
server is started in binary upgrade mode.

The new command is of the form

ALTER SUBSCRIPTION name ADD TABLE (relid = X, state = 'Y', lsn = 'Z/Z')

with the lsn part being optional.

BTW, do we restore the origin and its LSN after the upgrade? Because
without that this won't be sufficient as that is required for apply
worker to ensure that it is in sync with table sync workers.

We currently don't, which is yet another sign that no one actually tried to
resume logical replication after a pg_upgrade. That being said, trying to
pg_upgrade a node that's currently syncing relations seems like a bad idea
(I didn't even think to try), but I guess it should also be supported. I will
work on that too. Assuming we add a new option for controlling either plain
pg_dump and/or pg_upgrade behavior, should this option control both
pg_subscription_rel and replication origins and their data or do we need more
granularity?
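For context on what restoring "the origin and its LSN" means: each subscription tracks its apply progress in a replication origin (named pg_<subscription oid>), which can be inspected and, if needed, set manually with the stock origin functions. The origin name and LSN below are placeholders:

```sql
-- Show how far each origin (and thus each subscription) has applied.
SELECT * FROM pg_replication_origin_status;

-- Hypothetical manual fix-up: move the origin of the subscription with
-- oid 16403 to a known LSN.
SELECT pg_replication_origin_advance('pg_16403', '0/3000148'::pg_lsn);
```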

#13Amit Kapila
amit.kapila16@gmail.com
In reply to: Julien Rouhaud (#12)
Re: pg_upgrade and logical replication

On Sun, Feb 26, 2023 at 8:35 AM Julien Rouhaud <rjuju123@gmail.com> wrote:

On Sat, Feb 25, 2023 at 11:24:17AM +0530, Amit Kapila wrote:

The new command is of the form

ALTER SUBSCRIPTION name ADD TABLE (relid = X, state = 'Y', lsn = 'Z/Z')

with the lsn part being optional.

BTW, do we restore the origin and its LSN after the upgrade? Because
without that this won't be sufficient as that is required for apply
worker to ensure that it is in sync with table sync workers.

We currently don't, which is yet another sign that no one actually tried to
resume logical replication after a pg_upgrade. That being said, trying to
pg_upgrade a node that's currently syncing relations seems like a bad idea
(I didn't even think to try), but I guess it should also be supported. I will
work on that too. Assuming we add a new option for controlling either plain
pg_dump and/or pg_upgrade behavior, should this option control both
pg_subscription_rel and replication origins and their data or do we need more
granularity?

My vote would be to have one option for both. BTW, thinking some more
on this, how will we allow replication to continue after upgrading the
publisher? During the upgrade, we don't retain slots, so replication
won't continue. I think after upgrading the subscriber node, the user will
need to upgrade the publisher as well.

--
With Regards,
Amit Kapila.

#14Julien Rouhaud
rjuju123@gmail.com
In reply to: Amit Kapila (#13)
Re: pg_upgrade and logical replication

On Mon, Feb 27, 2023 at 03:39:18PM +0530, Amit Kapila wrote:

BTW, thinking some more
on this, how will we allow replication to continue after upgrading the
publisher? During the upgrade, we don't retain slots, so replication
won't continue. I think after upgrading the subscriber node, the user will
need to upgrade the publisher as well.

The scenario I'm interested in is to rely on logical replication only for the
upgrade, so the end state (and start state) is to go back to physical
replication. In that case, I would just create a new physical replica from the
pg_upgrade'd server and failover to that node, or rsync the previous publisher
node to make it a physical replica.

But even if you want to only rely on logical replication, I'm not sure why you
would want to keep the publisher node as a publisher node? I think that doing
it this way will lead to a longer downtime compared to doing a failover on the
pg_upgrade'd node, make it a publisher and then move the former publisher node
to a subscriber.

#15Amit Kapila
amit.kapila16@gmail.com
In reply to: Julien Rouhaud (#14)
Re: pg_upgrade and logical replication

On Tue, Feb 28, 2023 at 7:55 AM Julien Rouhaud <rjuju123@gmail.com> wrote:

On Mon, Feb 27, 2023 at 03:39:18PM +0530, Amit Kapila wrote:

BTW, thinking some more
on this, how will we allow replication to continue after upgrading the
publisher? During the upgrade, we don't retain slots, so replication
won't continue. I think after upgrading the subscriber node, the user will
need to upgrade the publisher as well.

The scenario I'm interested in is to rely on logical replication only for the
upgrade, so the end state (and start state) is to go back to physical
replication. In that case, I would just create a new physical replica from the
pg_upgrade'd server and failover to that node, or rsync the previous publisher
node to make it a physical replica.

But even if you want to only rely on logical replication, I'm not sure why you
would want to keep the publisher node as a publisher node? I think that doing
it this way will lead to a longer downtime compared to doing a failover on the
pg_upgrade'd node, make it a publisher and then move the former publisher node
to a subscriber.

I am not sure if this is what everyone usually follows because it sounds
like a lot of work to me. IIUC, to achieve this, one needs to recreate
all the publications and subscriptions after changing the roles of
publisher and subscriber. Can you please write steps to show exactly
what you have in mind to avoid any misunderstanding?

--
With Regards,
Amit Kapila.

#16Julien Rouhaud
rjuju123@gmail.com
In reply to: Amit Kapila (#15)
Re: pg_upgrade and logical replication

On Tue, Feb 28, 2023 at 08:56:37AM +0530, Amit Kapila wrote:

On Tue, Feb 28, 2023 at 7:55 AM Julien Rouhaud <rjuju123@gmail.com> wrote:

The scenario I'm interested in is to rely on logical replication only for the
upgrade, so the end state (and start state) is to go back to physical
replication. In that case, I would just create a new physical replica from the
pg_upgrade'd server and failover to that node, or rsync the previous publisher
node to make it a physical replica.

But even if you want to only rely on logical replication, I'm not sure why you
would want to keep the publisher node as a publisher node? I think that doing
it this way will lead to a longer downtime compared to doing a failover on the
pg_upgrade'd node, make it a publisher and then move the former publisher node
to a subscriber.

I am not sure if this is what everyone usually follows because it sounds
like a lot of work to me. IIUC, to achieve this, one needs to recreate
all the publications and subscriptions after changing the roles of
publisher and subscriber. Can you please write steps to show exactly
what you have in mind to avoid any misunderstanding?

Well, as I mentioned I'm *not* interested in a logical-replication-only
scenario. Logical replication is nice but it will always be less efficient
than physical replication, and some workloads also don't really play well with
it. So while it can be a huge asset in some cases I'm for now looking at
leveraging logical replication for the purpose of major upgrade only for a
physical replication cluster, so the publications and subscriptions are only
temporary and trashed after use.

That being said I was only saying that if I had to do a major upgrade of a
logical replication cluster this is probably how I would try to do it, to
minimize downtime, even if there are probably *a lot* of difficulties to
overcome.

#17Nikolay Samokhvalov
samokhvalov@gmail.com
In reply to: Julien Rouhaud (#3)
Re: pg_upgrade and logical replication

On Fri, Feb 17, 2023 at 7:35 AM Julien Rouhaud <rjuju123@gmail.com> wrote:

Any table later added in the
publication is either already fully replicated until that LSN on the upgraded
node, so only the delta is needed, or has been created after that LSN. In the
latter case, the entirety of the table will be replicated with the logical
replication as a delta right?

What if we consider a slightly adjusted procedure?

0. Temporarily, forbid running any DDL on the source cluster.
1. On the source, create publication, replication slot and remember
the LSN for it
2. Restore the target cluster to that LSN using recovery_target_lsn (PITR)
3. Run pg_upgrade on the target cluster
4. Only now, create subscription to target
5. Wait until logical replication catches up
6. Perform a switchover to the new cluster taking care of lags in sequences, etc
7. Resume DDL when needed

Do you see any data loss happening in this approach?
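Assuming the LSN remembered in step 1 was, say, 0/3000148 (a placeholder), step 2 amounts to a standard PITR configuration on the target cluster:

```
# Recovery settings on the target cluster (postgresql.conf, with a
# recovery.signal file as appropriate for the version):
recovery_target_lsn = '0/3000148'
recovery_target_action = 'promote'
```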

#18Julien Rouhaud
rjuju123@gmail.com
In reply to: Nikolay Samokhvalov (#17)
Re: pg_upgrade and logical replication

On Tue, Feb 28, 2023 at 08:02:13AM -0800, Nikolay Samokhvalov wrote:

On Fri, Feb 17, 2023 at 7:35 AM Julien Rouhaud <rjuju123@gmail.com> wrote:

Any table later added in the
publication is either already fully replicated until that LSN on the upgraded
node, so only the delta is needed, or has been created after that LSN. In the
latter case, the entirety of the table will be replicated with the logical
replication as a delta right?

What if we consider a slightly adjusted procedure?

0. Temporarily, forbid running any DDL on the source cluster.

This is (at least for me) a non-starter, as I want an approach that doesn't
impact the primary node, at least not too much.

Also, how would you do that? If you need some new infrastructure it means that
you can only upgrade nodes starting from pg16+, while my approach can upgrade
any node that supports publications as long as the target version is pg16+.

It also raises some concerns: why prevent all DDL when e.g. creating a
temporary table shouldn't be a problem, and the same goes for renaming some
underlying object, adding indexes... You would have to curate a list of what
exactly is allowed, which is never great.

Also, how exactly would you ensure that DDL has indeed been forbidden since a
long-enough point in time, rather than just "currently" forbidden at the time
you do some check?

#19Amit Kapila
amit.kapila16@gmail.com
In reply to: Julien Rouhaud (#16)
Re: pg_upgrade and logical replication

On Tue, Feb 28, 2023 at 10:18 AM Julien Rouhaud <rjuju123@gmail.com> wrote:

On Tue, Feb 28, 2023 at 08:56:37AM +0530, Amit Kapila wrote:

On Tue, Feb 28, 2023 at 7:55 AM Julien Rouhaud <rjuju123@gmail.com> wrote:

The scenario I'm interested in is to rely on logical replication only for the
upgrade, so the end state (and start state) is to go back to physical
replication. In that case, I would just create a new physical replica from the
pg_upgrade'd server and failover to that node, or rsync the previous publisher
node to make it a physical replica.

But even if you want to only rely on logical replication, I'm not sure why you
would want to keep the publisher node as a publisher node? I think that doing
it this way will lead to a longer downtime compared to doing a failover on the
pg_upgrade'd node, make it a publisher and then move the former publisher node
to a subscriber.

I am not sure if this is what everyone usually follows because it sounds
like a lot of work to me. IIUC, to achieve this, one needs to recreate
all the publications and subscriptions after changing the roles of
publisher and subscriber. Can you please write steps to show exactly
what you have in mind to avoid any misunderstanding?

Well, as I mentioned I'm *not* interested in a logical-replication-only
scenario. Logical replication is nice but it will always be less efficient
than physical replication, and some workloads also don't really play well with
it. So while it can be a huge asset in some cases I'm for now looking at
leveraging logical replication for the purpose of major upgrade only for a
physical replication cluster, so the publications and subscriptions are only
temporary and trashed after use.

That being said I was only saying that if I had to do a major upgrade of a
logical replication cluster this is probably how I would try to do it, to
minimize downtime, even if there are probably *a lot* of difficulties to
overcome.

Okay, but it would be better if you listed out your detailed steps. It
would be worth supporting a new mechanism in this area if others also
find your upgrade steps useful.

--
With Regards,
Amit Kapila.

#20Julien Rouhaud
rjuju123@gmail.com
In reply to: Amit Kapila (#19)
Re: pg_upgrade and logical replication

On Wed, Mar 01, 2023 at 11:51:49AM +0530, Amit Kapila wrote:

On Tue, Feb 28, 2023 at 10:18 AM Julien Rouhaud <rjuju123@gmail.com> wrote:

Well, as I mentioned I'm *not* interested in a logical-replication-only
scenario. Logical replication is nice but it will always be less efficient
than physical replication, and some workloads also don't really play well with
it. So while it can be a huge asset in some cases I'm for now looking at
leveraging logical replication for the purpose of major upgrade only for a
physical replication cluster, so the publications and subscriptions are only
temporary and trashed after use.

That being said I was only saying that if I had to do a major upgrade of a
logical replication cluster this is probably how I would try to do it, to
minimize downtime, even if there are probably *a lot* of difficulties to
overcome.

Okay, but it would be better if you listed out your detailed steps. It
would be worth supporting a new mechanism in this area if others also
find your upgrade steps useful.

Sure. Here are the overly detailed steps:

1) setup a normal physical replication cluster (pg_basebackup, restoring PITR,
whatever), let's call the primary node "A" and replica node "B"
2) ensure WAL level is "logical" on the primary node A
3) create a logical replication slot on every (connectable) database (or just
the one you're interested in if you don't want to preserve everything) on A
4) create a FOR ALL TABLE publication (again for every databases or just the
one you're interested in)
5) wait for replication to be reasonably, if not entirely, up to date
6) promote the standby node B
7) retrieve the promotion LSN (from the XXXXXXXX.history file,
pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn()...)
8) call pg_replication_slot_advance() with that LSN for all previously created
logical replication slots on A
9) create a normal subscription on all wanted databases on the promoted node
10) wait for it to catch up if needed on B
11) stop the node B
12) run pg_upgrade on B, creating the new node C
13) start C, run the global ANALYZE and any sanity check needed (hopefully you
would have validated that your application is compatible with that new
version before this point)
14) re-enable the subscription on C. This is currently not possible without
losing data, the patch fixes that
15) wait for it to catch up if needed
16) create any missing relation and do the ALTER SUBSCRIPTION ... REFRESH if
needed
17) trash B
18) create new nodes D, E... as physical replicas from C if needed, possibly
using a cheaper approach like pg_start_backup() / rsync / pg_stop_backup() if
needed
19) switchover to C and trash A (or convert it to another replica if you want)
20) trash the publications on C on all databases

As noted, step 14 is currently problematic, and is also problematic in any
variation of that scenario that doesn't require you to entirely recreate the
node C from scratch using logical replication, which is what I want to avoid.
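Steps 3, 4, 8 and 9 above boil down to a handful of SQL commands per database. Slot, publication, subscription and connection names below are placeholders, and the LSN stands in for the promotion LSN retrieved in step 7:

```sql
-- On A, per database (steps 3 and 4):
SELECT pg_create_logical_replication_slot('upgrade_slot_db1', 'pgoutput');
CREATE PUBLICATION upgrade_pub FOR ALL TABLES;

-- On A, after promoting B (step 8): fast-forward the slot to the
-- promotion LSN without decoding the changes B already has.
SELECT pg_replication_slot_advance('upgrade_slot_db1', '0/5000028'::pg_lsn);

-- On B, per database (step 9): reuse the pre-created slot and skip the
-- initial copy, since B already holds the data up to the promotion LSN.
CREATE SUBSCRIPTION upgrade_sub
    CONNECTION 'host=nodeA dbname=db1'
    PUBLICATION upgrade_pub
    WITH (create_slot = false, slot_name = 'upgrade_slot_db1',
          copy_data = false);
```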

This isn't terribly complicated but requires you to be really careful if you don't
want to end up with an incorrect node C. This approach is also currently not
entirely ideal, but hopefully logical replication of sequences and DDL will
remove the main sources of downtime when upgrading using logical replication.

My ultimate goal is to provide some tooling to do that in a much simpler way.
Maybe a new "promote to logical" action that would take care of steps 2 to 9.
Users would therefore only have to do this "promotion to logical", and then run
pg_upgrade and create a new physical replication cluster if they want.

In reply to: Amit Kapila (#164)
#166Amit Kapila
amit.kapila16@gmail.com
In reply to: Michael Paquier (#165)
#167Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Amit Kapila (#164)
#168Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Amit Kapila (#166)
#169Amit Kapila
amit.kapila16@gmail.com
In reply to: Masahiko Sawada (#168)
#170Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Amit Kapila (#169)
#171vignesh C
vignesh21@gmail.com
In reply to: Michael Paquier (#165)
#172vignesh C
vignesh21@gmail.com
In reply to: Masahiko Sawada (#167)
#173Masahiko Sawada
sawada.mshk@gmail.com
In reply to: vignesh C (#171)
#174vignesh C
vignesh21@gmail.com
In reply to: Masahiko Sawada (#173)
#175Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#174)
#176vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#175)
#177Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#176)
#178Michael Paquier
michael@paquier.xyz
In reply to: Amit Kapila (#177)
#179Amit Kapila
amit.kapila16@gmail.com
In reply to: Michael Paquier (#178)
#180Michael Paquier
michael@paquier.xyz
In reply to: Amit Kapila (#179)
#181Amit Kapila
amit.kapila16@gmail.com
In reply to: Michael Paquier (#180)
#182Michael Paquier
michael@paquier.xyz
In reply to: Amit Kapila (#181)
#183vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#179)
#184vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#177)
#185Michael Paquier
michael@paquier.xyz
In reply to: Amit Kapila (#181)
#186vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#161)
#187Justin Pryzby
pryzby@telsasoft.com
In reply to: Amit Kapila (#177)
#188Michael Paquier
michael@paquier.xyz
In reply to: Justin Pryzby (#187)
#189Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: Justin Pryzby (#187)
#190Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: vignesh C (#186)
#191Amit Kapila
amit.kapila16@gmail.com
In reply to: Hayato Kuroda (Fujitsu) (#189)
#192Justin Pryzby
pryzby@telsasoft.com
In reply to: Hayato Kuroda (Fujitsu) (#189)
#193Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: Amit Kapila (#191)
#194Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: Justin Pryzby (#192)
#195vignesh C
vignesh21@gmail.com
In reply to: Hayato Kuroda (Fujitsu) (#193)
#196Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: vignesh C (#195)
#197Amit Kapila
amit.kapila16@gmail.com
In reply to: Hayato Kuroda (Fujitsu) (#196)
#198vignesh C
vignesh21@gmail.com
In reply to: Hayato Kuroda (Fujitsu) (#196)
#199Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#198)
#200Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: vignesh C (#198)
#201vignesh C
vignesh21@gmail.com
In reply to: Hayato Kuroda (Fujitsu) (#200)
#202Amit Kapila
amit.kapila16@gmail.com
In reply to: Hayato Kuroda (Fujitsu) (#200)
#203vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#202)
#204Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: vignesh C (#203)
#205Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#203)
#206Nathan Bossart
nathandbossart@gmail.com
In reply to: Amit Kapila (#205)
#207Nathan Bossart
nathandbossart@gmail.com
In reply to: Nathan Bossart (#206)
#208Michael Paquier
michael@paquier.xyz
In reply to: Nathan Bossart (#207)
#209Amit Kapila
amit.kapila16@gmail.com
In reply to: Michael Paquier (#208)
#210Nathan Bossart
nathandbossart@gmail.com
In reply to: Amit Kapila (#209)
#211Michael Paquier
michael@paquier.xyz
In reply to: Nathan Bossart (#210)
#212Amit Kapila
amit.kapila16@gmail.com
In reply to: Michael Paquier (#211)
#213Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: Amit Kapila (#212)
#214Amit Kapila
amit.kapila16@gmail.com
In reply to: Nathan Bossart (#210)
#215Nathan Bossart
nathandbossart@gmail.com
In reply to: Amit Kapila (#214)
#216Amit Kapila
amit.kapila16@gmail.com
In reply to: Nathan Bossart (#215)
#217Nathan Bossart
nathandbossart@gmail.com
In reply to: Amit Kapila (#216)
#218Amit Kapila
amit.kapila16@gmail.com
In reply to: Nathan Bossart (#217)
#219Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#218)
#220Nathan Bossart
nathandbossart@gmail.com
In reply to: Amit Kapila (#219)
#221Michael Paquier
michael@paquier.xyz
In reply to: Nathan Bossart (#220)