pg_rewind race condition just after promotion

Started by Heikki Linnakangas · over 5 years ago · 9 messages · pgsql-hackers
#1 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com

There's a race condition between the checkpoint at promotion and
pg_rewind. When a server is promoted, the startup process writes an
end-of-recovery checkpoint that includes the new TLI, and the server is
immediately opened for business. The startup process requests the
checkpointer process to perform a checkpoint, but it can take a few
seconds or more to complete. If you run pg_rewind, using the just
promoted server as the source, pg_rewind will think that the server is
still on the old timeline, because it only looks at TLI in the control
file's copy of the checkpoint record. That's not updated until the
checkpoint is finished.

This isn't a new issue. Stephen Frost first reported it back in 2015 [1].
Back then, it was deemed just a small annoyance, and we just worked
around it in the tests by issuing a checkpoint command after promotion,
to wait for the checkpoint to finish. I just ran into it again today,
with the new pg_rewind test, and silenced it in a similar way.

I think we should fix this properly. I'm not sure if it can lead to a
broken cluster, but at least it can cause pg_rewind to fail
unnecessarily and in a user-unfriendly way. But this is actually pretty
simple to fix. pg_rewind looks at the control file to find out the
timeline the server is on. When promotion happens, the startup process
updates minRecoveryPoint and minRecoveryPointTLI fields in the control
file. We just need to read it from there. Patch attached.

I think we should also backpatch this. Back in 2015, we decided that we
can live with this, but it's always been a bit bogus, and seems simple
enough to fix.

Thoughts?

[1]: /messages/by-id/20150428180253.GU30322@tamriel.snowman.net

- Heikki

Attachments:

0001-pg_rewind-Fix-determining-TLI-when-server-was-just-p.patch (text/x-patch, +33/-41)
#2 Kyotaro Horiguchi
horikyota.ntt@gmail.com
In reply to: Heikki Linnakangas (#1)
Re: pg_rewind race condition just after promotion

At Mon, 7 Dec 2020 20:13:25 +0200, Heikki Linnakangas <hlinnaka@iki.fi> wrote in

There's a race condition between the checkpoint at promotion and
pg_rewind. When a server is promoted, the startup process writes an
end-of-recovery checkpoint that includes the new TLI, and the server
is immediately opened for business. The startup process requests the
checkpointer process to perform a checkpoint, but it can take a few
seconds or more to complete. If you run pg_rewind, using the just
promoted server as the source, pg_rewind will think that the server is
still on the old timeline, because it only looks at TLI in the control
file's copy of the checkpoint record. That's not updated until the
checkpoint is finished.

This isn't a new issue. Stephen Frost first reported it back in 2015
[1]. Back then, it was deemed just a small annoyance, and we just
worked around it in the tests by issuing a checkpoint command after
promotion, to wait for the checkpoint to finish. I just ran into it
again today, with the new pg_rewind test, and silenced it in a
similar way.

I (or we) have run into this too, and avoided it by checking for the
history file on the primary.
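
That workaround can be sketched roughly as follows (hypothetical Python; the path layout and polling interval are assumptions for illustration, not code from the thread):

```python
# Hypothetical sketch of the workaround described above: poll the
# promoted primary's WAL directory until the new timeline's history
# file appears, which signals that the promotion has taken effect.
import os
import time

def wait_for_history_file(waldir: str, new_tli: int, timeout: float = 30.0) -> bool:
    """Return True once <new_tli in hex>.history exists in waldir."""
    # Timeline history files are named with the TLI as 8 hex digits,
    # e.g. 00000002.history for timeline 2.
    histfile = os.path.join(waldir, "%08X.history" % new_tli)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(histfile):
            return True
        time.sleep(0.1)
    return False
```

Waiting for the history file works because it is created as part of the timeline switch, but it still requires the caller to know the expected new TLI, whereas the control-file fix makes pg_rewind self-sufficient.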

I think we should fix this properly. I'm not sure if it can lead to a
broken cluster, but at least it can cause pg_rewind to fail
unnecessarily and in a user-unfriendly way. But this is actually
pretty simple to fix. pg_rewind looks at the control file to find out
the timeline the server is on. When promotion happens, the startup
process updates minRecoveryPoint and minRecoveryPointTLI fields in the
control file. We just need to read it from there. Patch attached.

Looks fine to me. I'm a bit concerned about making sourceHistory
needlessly file-local, but on the other hand, unifying sourceHistory and
targetHistory looks better.

For the test part, that change doesn't necessarily catch the failure
of the current version, but I *believe* the previous code is the result
of an actual failure in the past, so the test probabilistically (or
depending on the platform?) hits the failure if it happened.

I think we should also backpatch this. Back in 2015, we decided that
we can live with this, but it's always been a bit bogus, and seems
simple enough to fix.

I don't think this changes any successful behavior; it only fixes the
failure case, so +1 for back-patching.

Thoughts?

[1]
/messages/by-id/20150428180253.GU30322@tamriel.snowman.net

regards.

--
Kyotaro Horiguchi
NTT Open Source Software Center

#3 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Kyotaro Horiguchi (#2)
Re: pg_rewind race condition just after promotion

On 08/12/2020 06:45, Kyotaro Horiguchi wrote:

At Mon, 7 Dec 2020 20:13:25 +0200, Heikki Linnakangas <hlinnaka@iki.fi> wrote in

I think we should fix this properly. I'm not sure if it can lead to a
broken cluster, but at least it can cause pg_rewind to fail
unnecessarily and in a user-unfriendly way. But this is actually
pretty simple to fix. pg_rewind looks at the control file to find out
the timeline the server is on. When promotion happens, the startup
process updates minRecoveryPoint and minRecoveryPointTLI fields in the
control file. We just need to read it from there. Patch attached.

Looks fine to me. A bit concerned about making sourceHistory
needlessly file-local but on the other hand unifying sourceHistory and
targetHistory looks better.

Looking closer, findCommonAncestorTimeline() was freeing sourceHistory,
which was pretty horrible when it's a file-local variable. I changed it
so that both the source and target histories are passed to
findCommonAncestorTimeline() as arguments. That seems more clear.
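
For illustration, the general shape of a common-ancestor search over two timeline histories looks like this (a simplified Python sketch, not the actual C implementation; the entry layout is an assumption):

```python
# Illustrative sketch of finding the common ancestor timeline from two
# timeline histories (not the actual pg_rewind C code).  Each history is
# a list of (tli, switchpoint) entries, oldest first, where switchpoint
# is the LSN at which that timeline ended (None for the still-open,
# current timeline).

def find_common_ancestor(source_history, target_history):
    """Return (tli, divergence_point) where the two histories diverge."""
    common = None
    for (s_tli, s_end), (t_tli, t_end) in zip(source_history, target_history):
        if s_tli != t_tli:
            break  # histories already diverged before this entry
        # Shared timeline: divergence happens no later than the earlier
        # of the two switchpoints on this timeline.
        if s_end is None:
            end = t_end
        elif t_end is None:
            end = s_end
        else:
            end = min(s_end, t_end)
        common = (s_tli, end)
    return common

# Example: both start on timeline 1; the source switched to TLI 2 at
# LSN 1000 while the target stayed on TLI 1.
source = [(1, 1000), (2, None)]
target = [(1, None)]
assert find_common_ancestor(source, target) == (1, 1000)
```

Passing both histories in explicitly, as the revised patch does, keeps the function free of hidden state and makes it clear that it never owns or frees either history.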

For the test part, that change doesn't necessarily catch the failure
of the current version, but I *believe* the previous code is the result
of an actual failure in the past, so the test probabilistically (or
depending on the platform?) hits the failure if it happened.

Right. I think the current test coverage is good enough. We've been
bitten by this a few times by now, when we've forgotten to add the
manual checkpoint commands to new tests, and the buildfarm has caught it
pretty quickly.

I think we should also backpatch this. Back in 2015, we decided that
we can live with this, but it's always been a bit bogus, and seems
simple enough to fix.

I don't think this changes any successful behavior and it just saves
the failure case so +1 for back-patching.

Thanks for the review! New patch version attached.

- Heikki

Attachments:

v2-0001-pg_rewind-Fix-determining-TLI-when-server-was-jus.patch (text/x-patch, +60/-55)
#4 Ibrar Ahmed
ibrar.ahmad@gmail.com
In reply to: Heikki Linnakangas (#3)
Re: pg_rewind race condition just after promotion

On Wed, Dec 9, 2020 at 6:35 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:

On 08/12/2020 06:45, Kyotaro Horiguchi wrote:

At Mon, 7 Dec 2020 20:13:25 +0200, Heikki Linnakangas <hlinnaka@iki.fi> wrote in

I think we should fix this properly. I'm not sure if it can lead to a
broken cluster, but at least it can cause pg_rewind to fail
unnecessarily and in a user-unfriendly way. But this is actually
pretty simple to fix. pg_rewind looks at the control file to find out
the timeline the server is on. When promotion happens, the startup
process updates minRecoveryPoint and minRecoveryPointTLI fields in the
control file. We just need to read it from there. Patch attached.

Looks fine to me. A bit concerned about making sourceHistory
needlessly file-local but on the other hand unifying sourceHistory and
targetHistory looks better.

Looking closer, findCommonAncestorTimeline() was freeing sourceHistory,
which was pretty horrible when it's a file-local variable. I changed it
so that both the source and target histories are passed to
findCommonAncestorTimeline() as arguments. That seems more clear.

For the test part, that change doesn't necessarily catch the failure
of the current version, but I *believe* the previous code is the result
of an actual failure in the past, so the test probabilistically (or
depending on the platform?) hits the failure if it happened.

Right. I think the current test coverage is good enough. We've been
bitten by this a few times by now, when we've forgotten to add the
manual checkpoint commands to new tests, and the buildfarm has caught it
pretty quickly.

I think we should also backpatch this. Back in 2015, we decided that
we can live with this, but it's always been a bit bogus, and seems
simple enough to fix.

I don't think this changes any successful behavior and it just saves
the failure case so +1 for back-patching.

Thanks for the review! New patch version attached.

- Heikki

The patch does not apply successfully:

http://cfbot.cputube.org/patch_32_2864.log
1 out of 10 hunks FAILED -- saving rejects to file
src/bin/pg_rewind/pg_rewind.c.rej

There was a minor conflict, so I have rebased the patch. Please take a
look.

--
Ibrar Ahmed

Attachments:

v3-0001-pg_rewind-Fix-determining-TLI-when-server-was-jus.patch (application/octet-stream, +61/-54)
#5 Aleksander Alekseev
aleksander@timescale.com
In reply to: Ibrar Ahmed (#4)
Re: pg_rewind race condition just after promotion

The following review has been posted through the commitfest application:
make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: tested, passed
Documentation: tested, passed

The v3 patch LGTM. I wonder if we should explicitly note in the pg_rewind
tests that they _don't_ have to call `checkpoint`; otherwise, we will lose
the test coverage for this scenario. But I don't have a strong opinion on
this one.

The new status of this patch is: Ready for Committer

#6 Daniel Gustafsson
daniel@yesql.se
In reply to: Aleksander Alekseev (#5)
Re: pg_rewind race condition just after promotion

On 14 Jul 2021, at 14:03, Aleksander Alekseev <aleksander@timescale.com> wrote:

The following review has been posted through the commitfest application:
make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: tested, passed
Documentation: tested, passed

The v3 patch LGTM. I wonder if we should explicitly note in the pg_rewind
tests that they _don't_ have to call `checkpoint`; otherwise, we will lose
the test coverage for this scenario. But I don't have a strong opinion on
this one.

The new status of this patch is: Ready for Committer

Heikki, do you have plans to address this patch during this CF?

--
Daniel Gustafsson https://vmware.com/

#7 Ian Lawrence Barwick
barwick@gmail.com
In reply to: Daniel Gustafsson (#6)
Re: pg_rewind race condition just after promotion

On Tue, 9 Nov 2021 at 20:31, Daniel Gustafsson <daniel@yesql.se> wrote:

On 14 Jul 2021, at 14:03, Aleksander Alekseev <aleksander@timescale.com> wrote:

The following review has been posted through the commitfest application:
make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: tested, passed
Documentation: tested, passed

The v3 patch LGTM. I wonder if we should explicitly note in the pg_rewind
tests that they _don't_ have to call `checkpoint`; otherwise, we will lose
the test coverage for this scenario. But I don't have a strong opinion on
this one.

The new status of this patch is: Ready for Committer

Heikki, do you have plans to address this patch during this CF?

Friendly reminder ping, one year on; I haven't looked at this patch in
detail, but going by the thread contents it seems it should be marked
"Ready for Committer"? Moved to the next CF anyway.

Regards

Ian Barwick

#8 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Ian Lawrence Barwick (#7)
Re: pg_rewind race condition just after promotion

On 11/12/2022 02:01, Ian Lawrence Barwick wrote:

On Tue, 9 Nov 2021 at 20:31, Daniel Gustafsson <daniel@yesql.se> wrote:

On 14 Jul 2021, at 14:03, Aleksander Alekseev <aleksander@timescale.com> wrote:

The following review has been posted through the commitfest application:
make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: tested, passed
Documentation: tested, passed

The v3 patch LGTM. I wonder if we should explicitly note in the pg_rewind
tests that they _don't_ have to call `checkpoint`; otherwise, we will lose
the test coverage for this scenario. But I don't have a strong opinion on
this one.

The new status of this patch is: Ready for Committer

Heikki, do you have plans to address this patch during this CF?

Friendly reminder ping one year on; I haven't looked at this patch in
detail but going by the thread contents it seems it should be marked
"Ready for Committer"? Moved to the next CF anyway.

Here's an updated version of the patch.

I renamed the arguments to findCommonAncestorTimeline() so that the
'targetHistory' argument doesn't shadow the global 'targetHistory'
variable. No other changes, and this still looks good to me, so I'll
wait for the cfbot to run on this and commit in the next few days.

- Heikki

Attachments:

v4-0001-pg_rewind-Fix-determining-TLI-when-server-was-jus.patch (text/x-patch, +64/-60)
#9 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Heikki Linnakangas (#8)
Re: pg_rewind race condition just after promotion

On 22/02/2023 16:00, Heikki Linnakangas wrote:

On 11/12/2022 02:01, Ian Lawrence Barwick wrote:

On Tue, 9 Nov 2021 at 20:31, Daniel Gustafsson <daniel@yesql.se> wrote:

On 14 Jul 2021, at 14:03, Aleksander Alekseev <aleksander@timescale.com> wrote:

The following review has been posted through the commitfest application:
make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: tested, passed
Documentation: tested, passed

The v3 patch LGTM. I wonder if we should explicitly note in the pg_rewind
tests that they _don't_ have to call `checkpoint`; otherwise, we will lose
the test coverage for this scenario. But I don't have a strong opinion on
this one.

The new status of this patch is: Ready for Committer

Heikki, do you have plans to address this patch during this CF?

Friendly reminder ping one year on; I haven't looked at this patch in
detail but going by the thread contents it seems it should be marked
"Ready for Committer"? Moved to the next CF anyway.

Here's an updated version of the patch.

I renamed the arguments to findCommonAncestorTimeline() so that the
'targetHistory' argument doesn't shadow the global 'targetHistory'
variable. No other changes, and this still looks good to me, so I'll
wait for the cfbot to run on this and commit in the next few days.

Pushed. I decided not to backpatch this, after all. We haven't really
been treating this as a bug so far, and the patch didn't apply cleanly
to v13 and before.

- Heikki