Incorrect checksum in control file with pg_rewind test
Hi, hackers!
I've got a test failure in the pg_rewind tests, and it seems we have a
read/write race on the pg_control file. The test error is "incorrect
checksum in control file". The build was compiled with the
-DEXEC_BACKEND flag.
# +++ tap check in src/bin/pg_rewind +++
Bailout called. Further testing stopped: pg_ctl start failed
t/001_basic.pl ...............
Dubious, test returned 255 (wstat 65280, 0xff00)
All 20 subtests passed
2025-05-07 15:00:39.353 MSK [2002308] LOG: starting backup recovery
with redo LSN 0/2000028, checkpoint LSN 0/2000070, on timeline ID 1
2025-05-07 15:00:39.354 MSK [2002307] FATAL: incorrect checksum in
control file
2025-05-07 15:00:39.354 MSK [2002308] LOG: redo starts at 0/2000028
2025-05-07 15:00:39.354 MSK [2002308] LOG: completed backup recovery
with redo LSN 0/2000028 and end LSN 0/2000138
2025-05-07 15:00:39.354 MSK [2002301] LOG: background writer process
(PID 2002307) exited with exit code 1
2025-05-07 15:00:39.354 MSK [2002301] LOG: terminating any other active
server processes
2025-05-07 15:00:39.355 MSK [2002301] LOG: shutting down because
restart_after_crash is off
2025-05-07 15:00:39.356 MSK [2002301] LOG: database system is shut down
# No postmaster PID for node "primary_remote"
[15:00:39.438](0.238s) Bail out! pg_ctl start failed
The failure occurred while restarting the primary node to check that the
rewind went correctly.
The error is very rare and difficult to reproduce.
It seems we have a race between the process that replays WAL on startup
and updates the control file, and the sub-processes that read the control
file and were started with exec. As a result, a sub-process can read a
partially updated file with an incorrect CRC. The reason is that
LocalProcessControlFile does not acquire ControlFileLock, and it cannot
do so.
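To make the scenario concrete, here is a minimal standalone sketch (not
PostgreSQL code; FakeControlFile, simple_checksum and the file name are
made-up stand-ins for pg_control and CRC-32C) of a writer rewriting a
checksummed file in place while another process re-reads it with no lock
shared between them. Whether the reader ever sees a mismatch depends on
the OS and filesystem, which matches how hard the real failure is to
reproduce:

#include <fcntl.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

#define PAYLOAD_SIZE 8188		/* struct is 8192 bytes, pg_control-ish */
#define FILE_NAME "fake_control"

typedef struct
{
	uint8_t		payload[PAYLOAD_SIZE];
	uint32_t	checksum;		/* covers payload only */
} FakeControlFile;

static uint32_t
simple_checksum(const uint8_t *buf, size_t len)
{
	uint32_t	sum = 0;

	for (size_t i = 0; i < len; i++)
		sum = sum * 31 + buf[i];
	return sum;
}

static void
write_file(const FakeControlFile *cf)
{
	int			fd = open(FILE_NAME, O_CREAT | O_WRONLY, 0600);

	if (fd < 0 || write(fd, cf, sizeof(*cf)) != sizeof(*cf))
	{
		perror("write_file");
		exit(1);
	}
	close(fd);
}

int
main(void)
{
	FakeControlFile cf;
	pid_t		pid;

	/* Create an initial, self-consistent file so the reader never sees ENOENT. */
	memset(&cf, 0, sizeof(cf));
	cf.checksum = simple_checksum(cf.payload, PAYLOAD_SIZE);
	write_file(&cf);

	pid = fork();
	if (pid < 0)
	{
		perror("fork");
		exit(1);
	}
	if (pid == 0)
	{
		/* Writer: keep overwriting the file with fresh, valid contents. */
		for (uint8_t v = 1;; v++)
		{
			memset(cf.payload, v, PAYLOAD_SIZE);
			cf.checksum = simple_checksum(cf.payload, PAYLOAD_SIZE);
			write_file(&cf);
		}
	}

	/* Reader: re-read and verify, like an exec'd child reading pg_control. */
	for (int i = 0; i < 200000; i++)
	{
		FakeControlFile copy;
		int			fd = open(FILE_NAME, O_RDONLY);

		if (fd >= 0)
		{
			if (read(fd, &copy, sizeof(copy)) == sizeof(copy) &&
				copy.checksum != simple_checksum(copy.payload, PAYLOAD_SIZE))
				fprintf(stderr, "torn read at iteration %d\n", i);
			close(fd);
		}
	}

	kill(pid, SIGKILL);
	waitpid(pid, NULL, 0);
	unlink(FILE_NAME);
	return 0;
}

In the server, the reader side corresponds to an exec'd child going
through LocalProcessControlFile()/ReadControlFile(), which has no way to
take ControlFileLock at that point.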
I found thread
/messages/by-id/20221123014224.xisi44byq3cf5psi@awork3.anarazel.de,
where a similar issue was discussed for frontend programs. The decision
there was to retry the control file read in case of CRC failures; details
can be found in commit 5725e4ebe7a936f724f21e7ee1e84e54a70bfd83. My
suggestion is to use the same approach here. A patch is attached.
Best regards,
Maksim Melnikov
Attachments:
v1-0001-Try-to-handle-torn-reads-of-pg_control-in-sub-pos.patch
From c7e55c28bceca7ac3a659860e1f19d5243c1499a Mon Sep 17 00:00:00 2001
From: Maksim Melnikov <m.melnikov@postgrespro.ru>
Date: Thu, 4 Sep 2025 17:37:47 +0300
Subject: [PATCH v1] Try to handle torn reads of pg_control in sub postmaster
processes.
The same problem was fixed in 63a582222c6b3db2b1103ddf67a04b31a8f8e9bb,
but only for frontends. This commit fixes the problem for cases where
the pg_control file is read by fork/exec'd processes.

There can be a race between the process that replays WAL on startup and
updates the control file, and the sub-processes that read the control
file and were started with exec. As a result, a sub-process can read a
partially updated file with an incorrect CRC. The reason is that
LocalProcessControlFile does not acquire ControlFileLock, and it cannot
do so.

This patch is essentially a copy of the changes applied for frontends,
with a little adaptation.
---
src/backend/access/transam/xlog.c | 33 ++++++++++++++++++++++++++++++-
1 file changed, 32 insertions(+), 1 deletion(-)
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 7ffb2179151..98f992aa812 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -4347,6 +4347,15 @@ ReadControlFile(void)
 	int			fd;
 	char		wal_segsz_str[20];
 	int			r;
+	bool		crc_ok;
+#ifdef EXEC_BACKEND
+	pg_crc32c	last_crc;
+	int			retries = 0;
+
+	INIT_CRC32C(last_crc);
+
+retry:
+#endif
 
 	/*
 	 * Read data...
@@ -4411,7 +4420,29 @@
 				offsetof(ControlFileData, crc));
 	FIN_CRC32C(crc);
 
-	if (!EQ_CRC32C(crc, ControlFile->crc))
+	crc_ok = EQ_CRC32C(crc, ControlFile->crc);
+
+#ifdef EXEC_BACKEND
+
+	/*
+	 * If the server was writing at the same time, it is possible that we read
+	 * partially updated contents on some systems. If the CRC doesn't match,
+	 * retry a limited number of times until we compute the same bad CRC twice
+	 * in a row with a short sleep in between. Then the failure is unlikely
+	 * to be due to a concurrent write.
+	 */
+	if (!crc_ok &&
+		(retries == 0 || !EQ_CRC32C(crc, last_crc)) &&
+		retries < 10)
+	{
+		retries++;
+		last_crc = crc;
+		pg_usleep(10000);
+		goto retry;
+	}
+#endif
+
+	if (!crc_ok)
 		ereport(FATAL,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
 				 errmsg("incorrect checksum in control file")));
--
2.43.0
Hi,
just to clarify, this isn't a pg_rewind-specific issue and it can fire
spontaneously.
I don't have a reliable scenario for reproducing it; the tests only
failed occasionally on our local CI. But as you can see in thread [1],
where the same issue was discussed for frontends, it is very hard to
reproduce and no reproduction scenario was found there either.
A dirty hack to reproduce it was described in [2], and I've tried it on
the master branch:
First of all I applied the patch
0001-XXX-Dirty-hack-to-clobber-control-file-for-testing.patch from [2],
then compiled with -DEXEC_BACKEND and ran this command in psql:
do $$ begin loop perform pg_update_control_file(); end loop; end; $$;
In parallel I ran pgbench:
for run in {1..5000}; do pgbench -c50 -t100 -j6 -S postgres; done
and eventually got the error:
2025-11-07 17:58:33.139 MSK [2472504] FATAL: incorrect checksum in
control file
2025-11-07 17:58:33.141 MSK [2472501] LOG: could not receive data from
client: Connection reset by peer
2025-11-07 17:58:33.143 MSK [2472505] LOG: could not send data to
client: Broken pipe
2025-11-07 17:58:33.143 MSK [2472505] FATAL: connection to client lost
Best regards,
Maksim Melnikov
[1]: /messages/by-id/20221123014224.xisi44byq3cf5psi@awork3.anarazel.de
[2]: /messages/by-id/CA+hUKGK-BEe38aKNqHJDQ86LUW-CMwF5F9bo1JtJVg71FoDv_w@mail.gmail.com
[3]: /messages/by-id/f59335a4-83ff-438a-a30e-7cf2200276b6@postgrespro.ru
I can reproduce the bug on the master branch with the following steps:
1. Apply 0001-XXX-Dirty-hack-to-clobber-control-file-for-testing.patch [1]
2. Compile PostgreSQL with the `-DEXEC_BACKEND` C compiler flag.
3. Run `initdb`
4. Run `postgres`
5. Run `pgbench -i`
6. Run `psql -c 'do $$ begin loop perform pg_update_control_file(); end loop; end; $$;'`
7. Run `for run in {1..5000}; do pgbench -c50 -t100 -j6 -S postgres; done`
in parallel with the previous command.
8. Wait a while.
[1]: /messages/by-id/CA+hUKGK-BEe38aKNqHJDQ86LUW-CMwF5F9bo1JtJVg71FoDv_w@mail.gmail.com