[PATCH] Verify Checksums during Basebackups
Hi,
some installations have data which is only rarely read, and if they are
so large that dumps are not routinely taken, data corruption would only
be detected with some large delay even with checksums enabled.
The attached small patch verifies checksums (in case they are enabled)
during a basebackup. The rationale is that we are reading every block in
this case anyway, so this is a good opportunity to check them as well.
Other and complementary ways of checking the checksums are possible of
course, like the offline checking tool that Magnus just submitted.
It probably makes sense to use the same approach for determining the
segment numbers as Magnus did in his patch, or refactor that out in a
utility function, but I'm sick right now so wanted to submit this for
v11 first.
I did some light benchmarking and it seems that the performance
degradation is minimal, but this could well be platform or
architecture-dependent. Right now, the checksums are always checked but
maybe this could be made optional, probably by extending the replication
protocol.
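The per-block checking described above can be reduced to a standalone sketch. This is a toy, not the patch: `toy_checksum` is a deliberately simplified stand-in for PostgreSQL's real page checksum algorithm (which does also mix in the block number), and the assumption that the stored checksum occupies the first two bytes of the page is an invention for this example.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLCKSZ 8192  /* PostgreSQL's default block size */

/* Toy checksum standing in for PostgreSQL's page checksum; like the
 * real one, it mixes in the block number and skips the stored
 * checksum's own bytes (assumed here to be bytes 0-1). */
static uint16_t toy_checksum(const uint8_t *page, uint32_t blkno)
{
    uint32_t sum = blkno;
    for (size_t i = 2; i < BLCKSZ; i++)
        sum += page[i];
    return (uint16_t) sum;
}

/* Walk a read buffer in BLCKSZ steps, comparing each block's stored
 * checksum against a freshly computed one; return the mismatch count. */
static int verify_buffer(const uint8_t *buf, size_t len, uint32_t first_blkno)
{
    int failures = 0;
    for (size_t off = 0; off + BLCKSZ <= len; off += BLCKSZ)
    {
        const uint8_t *page = buf + off;
        uint32_t blkno = first_blkno + (uint32_t) (off / BLCKSZ);
        uint16_t stored;
        memcpy(&stored, page, sizeof(stored));
        if (stored != toy_checksum(page, blkno))
        {
            fprintf(stderr, "checksum mismatch in block %u\n", blkno);
            failures++;
        }
    }
    return failures;
}
```

Since the backup already streams every block through such a buffer, the extra work per block is one checksum computation and one compare, which matches the observation that the overhead is small.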
Michael
--
Michael Banck
Projektleiter / Senior Berater
Tel.: +49 2166 9901-171
Fax: +49 2166 9901-100
Email: michael.banck@credativ.de
credativ GmbH, HRB Mönchengladbach 12080
USt-ID-Nummer: DE204566209
Trompeterallee 108, 41189 Mönchengladbach
Geschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer
Attachments:
0001-basebackup-verify-checksum.patch (text/x-diff; charset=us-ascii, +73/-2)
On 2/28/18 1:08 PM, Michael Banck wrote:
The attached small patch verifies checksums (in case they are enabled)
during a basebackup. The rationale is that we are reading every block in
this case anyway, so this is a good opportunity to check them as well.
Other and complementary ways of checking the checksums are possible of
course, like the offline checking tool that Magnus just submitted.
+1. I've done some work in this area so I have signed up to review.
--
-David
david@pgmasters.net
On Wed, Feb 28, 2018 at 7:08 PM, Michael Banck <michael.banck@credativ.de>
wrote:
Hi,
some installations have data which is only rarely read, and if they are
so large that dumps are not routinely taken, data corruption would only
be detected with some large delay even with checksums enabled.
I think this is a very common scenario. Particularly when you take into
account indexes and things like that.
The attached small patch verifies checksums (in case they are enabled)
during a basebackup. The rationale is that we are reading every block in
this case anyway, so this is a good opportunity to check them as well.
Other and complementary ways of checking the checksums are possible of
course, like the offline checking tool that Magnus just submitted.It probably makes sense to use the same approach for determining the
segment numbers as Magnus did in his patch, or refactor that out in a
utility function, but I'm sick right now so wanted to submit this for
v11 first.I did some light benchmarking and it seems that the performance
degradation is minimal, but this could well be platform or
architecture-dependent. Right now, the checksums are always checked but
maybe this could be made optional, probably by extending the replication
protocol.
I think it should be.
I think it would also be a good idea to have this a three-mode setting,
with "no check", "check and warning", "check and error". Where "check and
error" should be the default, but you could turn off that in "save whatever
is left mode". But I think it's better if pg_basebackup simply fails on a
checksum error, because that will make it glaringly obvious that there is a
problem -- which is the main point of checksums in the first place. And
then an option to turn it off completely in cases where performance is the
thing.
Another quick note -- we need to assert that the size of the buffer is
actually divisible by BLCKSZ. I don't think it's a common scenario, but it
could break badly if somebody changes BLCKSZ. Either that or perhaps just
change the TARSENDSIZE to be a multiple of BLCKSZ.
--
Magnus Hagander
Me: https://www.hagander.net/
Work: https://www.redpill-linpro.com/
On Fri, Mar 2, 2018 at 6:23 AM, Magnus Hagander <magnus@hagander.net> wrote:
Another quick note -- we need to assert that the size of the buffer is
actually divisible by BLCKSZ. I don't think it's a common scenario, but it
could break badly if somebody changes BLCKSZ. Either that or perhaps just
change the TARSENDSIZE to be a multiple of BLCKSZ.
I think that this patch needs to support all block sizes that are
otherwise supported -- failing an assertion doesn't seem like a
reasonable option, unless it only happens for block sizes we don't
support anyway.
+1 for the feature in general. I think this would help a lot of people.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Fri, Mar 2, 2018 at 7:04 PM, Robert Haas <robertmhaas@gmail.com> wrote:
On Fri, Mar 2, 2018 at 6:23 AM, Magnus Hagander <magnus@hagander.net>
wrote:

Another quick note -- we need to assert that the size of the buffer is
actually divisible by BLCKSZ. I don't think it's a common scenario, but it
could break badly if somebody changes BLCKSZ. Either that or perhaps just
change the TARSENDSIZE to be a multiple of BLCKSZ.

I think that this patch needs to support all block sizes that are
otherwise supported -- failing an assertion doesn't seem like a
reasonable option, unless it only happens for block sizes we don't
support anyway.
That's not what I meant. What I meant is to fail on an assertion if
TARSENDSIZE is not evenly divisible by BLCKSZ. (Or well, maybe not an
assertion, but an actual compile-time error). Since BLCKSZ is changed only
at compile time, we can either trap the case at compile time, or just
define it away. But we should handle it.
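The compile-time trap Magnus describes can be written with C11's `_Static_assert` (PostgreSQL has its own `StaticAssertStmt`/`StaticAssertDecl` wrappers for the same purpose). The values below are just the defaults, not taken from any particular build:

```c
#define BLCKSZ 8192          /* default PostgreSQL block size */
#define TAR_SEND_SIZE 32768  /* basebackup.c's send-buffer size */

/* Fails the build, rather than a running backup, if someone compiles
 * with a BLCKSZ that does not evenly divide the send buffer. */
_Static_assert(TAR_SEND_SIZE % BLCKSZ == 0,
               "TAR_SEND_SIZE must be a multiple of BLCKSZ");
```

All block sizes configure currently accepts (1, 2, 4, 8, 16, 32 KB) divide 32 KB evenly, so this only fires if someone edits BLCKSZ by hand.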
--
Magnus Hagander
Me: https://www.hagander.net/
Work: https://www.redpill-linpro.com/
Greetings Magnus, all,
* Magnus Hagander (magnus@hagander.net) wrote:
I think it would also be a good idea to have this a three-mode setting,
with "no check", "check and warning", "check and error". Where "check and
error" should be the default, but you could turn off that in "save whatever
is left mode". But I think it's better if pg_basebackup simply fails on a
checksum error, because that will make it glaringly obvious that there is a
problem -- which is the main point of checksums in the first place. And
then an option to turn it off completely in cases where performance is the
thing.
When we implemented page-level checksum checking in pgBackRest, David
and I had a good long discussion about exactly this question of "warn"
vs. "error" and came to a different conclusion- you want a backup to
always back up as much as it can even in the face of corruption. If the
user has set up their backups in such a way that they don't see the
warnings being thrown, it's a good bet they won't see failed backups
happening either, in which case they might go from having "mostly" good
backups to not having any. Note that I *do* think a checksum failure
should result in a non-zero exit-code result from pg_basebackup,
indicating that there was something which went wrong.
One difference is that with pgBackRest, we manage the backups and a
backup with page-level checksum failures isn't considered "valid", so we won't
expire old backups if a new backup has a checksum failure, but I'm not
sure that's really enough to change my mind on if pg_basebackup should
outright fail on a checksum error or if it should throw big warnings but
still try to perform the backup. If the admin sets things up in a way
that a warning and error-exit code from pg_basebackup is ignored and
they still expire out their old backups, then even having an actual
error result wouldn't change that.
As an admin, the first thing I would want in a checksum failure scenario
is a backup of everything, even the blocks which failed (and then a
report of which blocks failed...). I'd rather we think about that
use-case than the use-case where the admin sets up backups in such a way
that they don't see warnings being thrown from the backup.
Thanks!
Stephen
On Sun, Mar 4, 2018 at 3:49 PM, Stephen Frost <sfrost@snowman.net> wrote:
Greetings Magnus, all,
* Magnus Hagander (magnus@hagander.net) wrote:
I think it would also be a good idea to have this a three-mode setting,
with "no check", "check and warning", "check and error". Where "check and
error" should be the default, but you could turn off that in "savewhatever
is left mode". But I think it's better if pg_basebackup simply fails on a
checksum error, because that will make it glaringly obvious that thereis a
problem -- which is the main point of checksums in the first place. And
then an option to turn it off completely in cases where performance isthe
thing.
When we implemented page-level checksum checking in pgBackRest, David
and I had a good long discussion about exactly this question of "warn"
vs. "error" and came to a different conclusion- you want a backup to
always back up as much as it can even in the face of corruption. If the
user has set up their backups in such a way that they don't see the
warnings being thrown, it's a good bet they won't see failed backups
happening either, in which case they might go from having "mostly" good
backups to not having any. Note that I *do* think a checksum failure
should result in a non-zero exit-code result from pg_basebackup,
indicating that there was something which went wrong.
I would argue that the likelihood of them seeing an error vs a warning is
orders of magnitude higher.
That said, if we want to exit pg_basebackup with an exit code but still
complete the backup, that would also be a workable way I guess. But I
strongly feel that we should make pg_basebackup scream at the user and
actually exit with an error -- it's the exit with error that will cause
their cronjobs to fail, and thus somebody notice it.
One difference is that with pgBackRest, we manage the backups and a
backup with page-level checksum failures isn't considered "valid", so we won't
expire old backups if a new backup has a checksum failure, but I'm not
sure that's really enough to change my mind on if pg_basebackup should
outright fail on a checksum error or if it should throw big warnings but
still try to perform the backup. If the admin sets things up in a way
that a warning and error-exit code from pg_basebackup is ignored and
they still expire out their old backups, then even having an actual
error result wouldn't change that.
There is another important difference as well. In pgBackRest somebody will
have to explicitly enable checksum verification -- which already there
means that they are much more likely to actually check the logs from it.
As an admin, the first thing I would want in a checksum failure scenario
is a backup of everything, even the blocks which failed (and then a
report of which blocks failed...). I'd rather we think about that
use-case than the use-case where the admin sets up backups in such a way
that they don't see warnings being thrown from the backup.
I agree. But I absolutely don't want my backup to be listed as successful,
because then I might expire old ones.
So sure, if we go with WARNING + exit with an errorcode, that is perhaps
the best combination of the two.
That said, it probably still makes sense to implement all modes. Or at
least to implement a "don't bother verifying the checksums" mode. This
mainly controls what the default would be.
--
Magnus Hagander
Me: https://www.hagander.net/
Work: https://www.redpill-linpro.com/
Greetings Magnus, all,
* Magnus Hagander (magnus@hagander.net) wrote:
On Sun, Mar 4, 2018 at 3:49 PM, Stephen Frost <sfrost@snowman.net> wrote:
* Magnus Hagander (magnus@hagander.net) wrote:
I think it would also be a good idea to have this a three-mode setting,
with "no check", "check and warning", "check and error". Where "check and
error" should be the default, but you could turn off that in "savewhatever
is left mode". But I think it's better if pg_basebackup simply fails on a
checksum error, because that will make it glaringly obvious that thereis a
problem -- which is the main point of checksums in the first place. And
then an option to turn it off completely in cases where performance isthe
thing.
When we implemented page-level checksum checking in pgBackRest, David
and I had a good long discussion about exactly this question of "warn"
vs. "error" and came to a different conclusion- you want a backup to
always back up as much as it can even in the face of corruption. If the
user has set up their backups in such a way that they don't see the
warnings being thrown, it's a good bet they won't see failed backups
happening either, in which case they might go from having "mostly" good
backups to not having any. Note that I *do* think a checksum failure
should result in a non-zero exit-code result from pg_basebackup,
indicating that there was something which went wrong.

I would argue that the likelihood of them seeing an error vs a warning is
orders of magnitude higher.

That said, if we want to exit pg_basebackup with an exit code but still
complete the backup, that would also be a workable way I guess. But I
strongly feel that we should make pg_basebackup scream at the user and
actually exit with an error -- it's the exit with error that will cause
their cronjobs to fail, and thus somebody notice it.
Yes, we need to have it exit with a non-zero exit code, I definitely
agree with that. Any indication that the backup may not be valid should
do that, imv. I don't believe we should just abort the backup and throw
away whatever effort has gone into getting the data thus far and then
leave an incomplete backup in place- someone might think it's not
incomplete.. I certainly hope you weren't thinking that pg_basebackup
would then go through and remove the backup that it had been running
when it reached the checksum failure- that would be a dangerous and
rarely tested code path, after all.
One difference is that with pgBackRest, we manage the backups and a
backup with page-level checksum failures isn't considered "valid", so we won't
expire old backups if a new backup has a checksum failure, but I'm not
sure that's really enough to change my mind on if pg_basebackup should
outright fail on a checksum error or if it should throw big warnings but
still try to perform the backup. If the admin sets things up in a way
that a warning and error-exit code from pg_basebackup is ignored and
they still expire out their old backups, then even having an actual
error result wouldn't change that.

There is another important difference as well. In pgBackRest somebody will
have to explicitly enable checksum verification -- which already there
means that they are much more likely to actually check the logs from it.
That's actually not correct- we automatically check page-level checksums
when the C library is available (and it's now required as part of 2.0)
and the database has checksums enabled (that's required of both methods,
of course...), so I don't see the difference you're suggesting here.
pgBackRest does have an option to *require* checksum-checking be done,
and one to disable checksumming, but by default it's enabled. See:
https://pgbackrest.org/command.html#command-backup/category-command/option-checksum-page
As an admin, the first thing I would want in a checksum failure scenario
is a backup of everything, even the blocks which failed (and then a
report of which blocks failed...). I'd rather we think about that
use-case than the use-case where the admin sets up backups in such a way
that they don't see warnings being thrown from the backup.

I agree. But I absolutely don't want my backup to be listed as successful,
because then I might expire old ones.

So sure, if we go with WARNING + exit with an errorcode, that is perhaps
the best combination of the two.
Right, that's what we settled on for pgBackRest also and definitely
makes the most sense to me.
That said, it probably still makes sense to implement all modes. Or at
least to implement a "don't bother verifying the checksums" mode. This
mainly controls what the default would be.
Yes, I'm fine with a "don't verify checksums" option, but I believe the
default should be to verify checksums when the database is configured
with them and, on a checksum failure, throw warnings and exit with an
exit-code that's non-zero and means "page-level checksum verification
failed."
Thanks!
Stephen
Hi,
On Sun, Mar 04, 2018 at 06:19:00PM +0100, Magnus Hagander wrote:
So sure, if we go with WARNING + exit with an errorcode, that is perhaps
the best combination of the two.
I had a look at how to go about this, but it appears to be a bit
complicated; the first problem is that sendFile() and sendDir() don't
have status return codes that could be set on checksum verification
failure. So I added a global variable and threw an ereport(ERROR) at the
end of perform_base_backup(), but then I realized that `pg_basebackup'
the client program purges the datadir it created if it gets an error:
|pg_basebackup: final receive failed: ERROR: Checksum mismatch during
|basebackup
|
|pg_basebackup: removing data directory "data2"
So I guess this would have to be sent back via the replication protocol,
but I don't see an off-hand way to do this easily?
Another option would be to see whether it is possible to verify the
checksum on the client side, but then only pg_basebackup (and no other
possible external tools using BASE_BACKUP) would profit.
Michael
--
Michael Banck
Projektleiter / Senior Berater
Tel.: +49 2166 9901-171
Fax: +49 2166 9901-100
Email: michael.banck@credativ.de
credativ GmbH, HRB Mönchengladbach 12080
USt-ID-Nummer: DE204566209
Trompeterallee 108, 41189 Mönchengladbach
Geschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer
Michael,
* Michael Banck (michael.banck@credativ.de) wrote:
On Sun, Mar 04, 2018 at 06:19:00PM +0100, Magnus Hagander wrote:
So sure, if we go with WARNING + exit with an errorcode, that is perhaps
the best combination of the two.

I had a look at how to go about this, but it appears to be a bit
complicated; the first problem is that sendFile() and sendDir() don't
have status return codes that could be set on checksum verification
failure. So I added a global variable and threw an ereport(ERROR) at the
end of perform_base_backup(), but then I realized that `pg_basebackup'
the client program purges the datadir it created if it gets an error:

|pg_basebackup: final receive failed: ERROR: Checksum mismatch during
|basebackup
|
|pg_basebackup: removing data directory "data2"
Oh, ugh.
So I guess this would have to be sent back via the replication protocol,
but I don't see an off-hand way to do this easily?
The final ordinary result set could be extended to include the
information about checksum failures..? I'm a bit concerned about what
to do when there are a lot of checksum failures though.. Ideally, you'd
identify all of the pages in all of the files where a checksum failed
(just throwing an error such as the one proposed above is really rather
terrible since you have no idea what block, or even what table, failed
the checksum...).
I realize this is moving the goalposts a long way, but I had actually
always envisioned having file-by-file pg_basebackup being put in at some
point, instead of tablespace-by-tablespace, which would allow for both
an ordinary result set being returned for each file that could contain
information such as the checksum failure and what pages failed, and be a
stepping stone for parallel pg_basebackup..
Another option would be to see whether it is possible to verify the
checksum on the client side, but then only pg_basebackup (and no other
possible external tools using BASE_BACKUP) would profit.
Reviewing the original patch and considering this issue, I believe there
may be a larger problem- while very unlikely, there's been concern that
it's possible to read a half-written page (and possibly only the second
half) and end up with a checksum failure due to that. In pgBackRest, we
address that by doing another read of the page and by checking the LSN
vs. where we started the backup (if the LSN is more recent than when the
backup started then we don't have to care about the page- it'll be in
the WAL).
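The LSN comparison Stephen describes can be sketched as follows, assuming PostgreSQL's page layout, where the page LSN occupies the first eight bytes of the page header as a 32-bit xlogid followed by a 32-bit xrecoff (PageHeaderData's pd_lsn); the function names are invented for the example:

```c
#include <stdint.h>
#include <string.h>

/* Read the page LSN from the first 8 bytes of a page, stored as two
 * 32-bit words (xlogid, xrecoff) per PageHeaderData's pd_lsn. */
static uint64_t page_lsn(const uint8_t *page)
{
    uint32_t xlogid, xrecoff;
    memcpy(&xlogid, page, sizeof(xlogid));
    memcpy(&xrecoff, page + sizeof(xlogid), sizeof(xrecoff));
    return ((uint64_t) xlogid << 32) | xrecoff;
}

/* A page stamped with an LSN at or past the backup start LSN was
 * (re)written after the backup began: any torn or stale state will be
 * fixed up from WAL during recovery, so its checksum need not be
 * verified.  Only older pages are worth checking. */
static int should_verify_page(const uint8_t *page, uint64_t backup_start_lsn)
{
    return page_lsn(page) < backup_start_lsn;
}
```

Note the caveat that follows: a genuinely corrupted page can also have garbage in its LSN field, which is one reason pgBackRest additionally rereads the page rather than trusting the LSN alone.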
If we're going to solve that issue the same way pgBackRest does, then
you'd really have to do it server-side, I'm afraid.. Either that, or
add a way for the client to request individual blocks be re-sent, but
that would be awful difficult for pg_basebackup to update in the
resulting tar file if it was compressed..
Thanks!
Stephen
Hi,
Am Montag, den 05.03.2018, 06:36 -0500 schrieb Stephen Frost:
Michael,
* Michael Banck (michael.banck@credativ.de) wrote:
On Sun, Mar 04, 2018 at 06:19:00PM +0100, Magnus Hagander wrote:
So sure, if we go with WARNING + exit with an errorcode, that is perhaps
the best combination of the two.

I had a look at how to go about this, but it appears to be a bit
complicated; the first problem is that sendFile() and sendDir() don't
have status return codes that could be set on checksum verification
failure. So I added a global variable and threw an ereport(ERROR) at the
end of perform_base_backup(), but then I realized that `pg_basebackup'
the client program purges the datadir it created if it gets an error:

pg_basebackup: final receive failed: ERROR: Checksum mismatch during
basebackup

pg_basebackup: removing data directory "data2"
Oh, ugh.
I came up with the attached patch, which sets a checksum_failure
variable in both basebackup.c and pg_basebackup.c, and emits an ereport
with (for now) ERRCODE_DATA_CORRUPTED at the end of
perform_base_backup(), which gets caught in pg_basebackup and then used
to not clean up the datadir, but exit with a non-zero exit code.
Does that seem feasible?
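The warn-but-keep-going shape under discussion, reduced to a standalone sketch (the names `report_checksum_mismatch` and `backup_exit_code` are hypothetical; in the actual patch this is an ereport plus a flag checked at the end of perform_base_backup() and again in pg_basebackup):

```c
#include <stdbool.h>
#include <stdio.h>

static bool checksum_failure = false;

/* Emit a warning but keep streaming the backup; just remember that
 * something went wrong. */
static void report_checksum_mismatch(const char *file, unsigned block)
{
    fprintf(stderr, "WARNING: checksum mismatch in file \"%s\", block %u\n",
            file, block);
    checksum_failure = true;
}

/* Called once the backup has fully completed: the data directory is
 * kept either way, but the exit status lets cron jobs and scripts see
 * that the backup contents are suspect. */
static int backup_exit_code(void)
{
    return checksum_failure ? 1 : 0;
}
```

This captures the compromise from earlier in the thread: the admin still gets as complete a backup as possible, while the non-zero exit code makes the corruption hard to ignore.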
Michael
--
Michael Banck
Projektleiter / Senior Berater
Tel.: +49 2166 9901-171
Fax: +49 2166 9901-100
Email: michael.banck@credativ.de
credativ GmbH, HRB Mönchengladbach 12080
USt-ID-Nummer: DE204566209
Trompeterallee 108, 41189 Mönchengladbach
Geschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer
Attachments:
checksum_basebackup_error_checking.patch (text/x-patch; charset=UTF-8, +31/-6)
Michael,
* Michael Banck (michael.banck@credativ.de) wrote:
Am Montag, den 05.03.2018, 06:36 -0500 schrieb Stephen Frost:
* Michael Banck (michael.banck@credativ.de) wrote:
On Sun, Mar 04, 2018 at 06:19:00PM +0100, Magnus Hagander wrote:
So sure, if we go with WARNING + exit with an errorcode, that is perhaps
the best combination of the two.

I had a look at how to go about this, but it appears to be a bit
complicated; the first problem is that sendFile() and sendDir() don't
have status return codes that could be set on checksum verification
failure. So I added a global variable and threw an ereport(ERROR) at the
end of perform_base_backup(), but then I realized that `pg_basebackup'
the client program purges the datadir it created if it gets an error:

pg_basebackup: final receive failed: ERROR: Checksum mismatch during
basebackup

pg_basebackup: removing data directory "data2"
Oh, ugh.
I came up with the attached patch, which sets a checksum_failure
variable in both basebackup.c and pg_basebackup.c, and emits an ereport
with (for now) ERRCODE_DATA_CORRUPTED at the end of
perform_base_backup(), which gets caught in pg_basebackup and then used
to not clean up the datadir, but exit with a non-zero exit code.

Does that seem feasible?
Ah, yes, I had thought about using a WARNING or NOTICE or similar also
to pass back the info about the checksum failure during the backup, that
seems like it would work as long as pg_basebackup captures that
information and puts it into a log or on stdout or similar.
I'm a bit on the fence about if we shouldn't just have pg_basebackup
always return a non-zero exit code on a WARNING being seen during the
backup instead. Given that there's a pretty clear SQL code for this
case, perhaps throwing an ERROR and then checking the SQL code isn't
an issue though.
Thanks!
Stephen
Hi Michael,
On 3/5/18 6:36 AM, Stephen Frost wrote:
* Michael Banck (michael.banck@credativ.de) wrote:
So I guess this would have to be sent back via the replication protocol,
but I don't see an off-hand way to do this easily?

The final ordinary result set could be extended to include the
information about checksum failures..? I'm a bit concerned about what
to do when there are a lot of checksum failures though.. Ideally, you'd
identify all of the pages in all of the files where a checksum failed
(just throwing an error such as the one proposed above is really rather
terrible since you have no idea what block, or even what table, failed
the checksum...).
I agree that knowing the name of the file that failed validation is
really important, with a list of the pages that failed validation being
a nice thing to have as well, though I would be fine having the latter
added in a future version.
For instance, in pgBackRest we output validation failures this way:
[from a regression test]
WARN: invalid page checksums found in file
[TEST_PATH]/db-primary/db/base/base/32768/33001 at pages 0, 3-5, 7
Note that we collate ranges of errors to keep the output from being too
overwhelming.
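Collapsing a sorted list of failed block numbers into ranges like "0, 3-5, 7" is a small exercise; a sketch (not pgBackRest's actual implementation):

```c
#include <stdio.h>
#include <string.h>

/* Format a sorted, duplicate-free list of block numbers as collapsed
 * ranges, e.g. {0, 3, 4, 5, 7} -> "0, 3-5, 7". */
static void format_block_ranges(const unsigned *blocks, size_t n,
                                char *out, size_t outlen)
{
    size_t i = 0;
    out[0] = '\0';
    while (i < n)
    {
        /* extend j to the end of the current consecutive run */
        size_t j = i;
        while (j + 1 < n && blocks[j + 1] == blocks[j] + 1)
            j++;

        char piece[64];
        if (j == i)
            snprintf(piece, sizeof(piece), "%s%u",
                     i ? ", " : "", blocks[i]);
        else
            snprintf(piece, sizeof(piece), "%s%u-%u",
                     i ? ", " : "", blocks[i], blocks[j]);
        strncat(out, piece, outlen - strlen(out) - 1);
        i = j + 1;
    }
}
```

Keeping the report this compact matters in the worst case, where a large fraction of a relation's pages fail verification.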
I think the file names are very important because there's a rather large
chance that corruption may happen in an index, unlogged table, or
something else that can be rebuilt or reloaded. Knowing where the
corruption is can save a lot of headaches.
Reviewing the original patch and considering this issue, I believe there
may be a larger problem- while very unlikely, there's been concern that
it's possible to read a half-written page (and possibly only the second
half) and end up with a checksum failure due to that. In pgBackRest, we
address that by doing another read of the page and by checking the LSN
vs. where we started the backup (if the LSN is more recent than when the
backup started then we don't have to care about the page- it'll be in
the WAL).
The need to reread pages can be drastically reduced by skipping
validation of any page that has an LSN >= the backup start LSN because
they will be replayed from WAL during recovery.
The rereads are still necessary because of the possible transposition of
page read vs. page write as Stephen notes above. We have not been able
to reproduce this case but can't discount it.
Regards,
--
-David
david@pgmasters.net
Hi,
Am Mittwoch, den 28.02.2018, 19:08 +0100 schrieb Michael Banck:
some installations have data which is only rarely read, and if they are
so large that dumps are not routinely taken, data corruption would only
be detected with some large delay even with checksums enabled.

The attached small patch verifies checksums (in case they are enabled)
during a basebackup. The rationale is that we are reading every block in
this case anyway, so this is a good opportunity to check them as well.
Other and complementary ways of checking the checksums are possible of
course, like the offline checking tool that Magnus just submitted.
I've attached a second version of this patch. Changes are:
1. I've included some code from Magnus' patch, notably the way the
segment numbers are determined and the skipfile() function, along with
the array of files to skip.
2. I am now checking the LSN in the pageheader and compare it against
the LSN of the basebackup start, so that no checksums are verified for
pages changed after basebackup start. I am not sure whether this
addresses all concerns by Stephen and David, as I am not re-reading the
page on a checksum mismatch as they are doing in pgbackrest.
3. pg_basebackup now exits with 1 if a checksum mismatch occurred, but it
keeps the data around.
4. I added an Assert() that the TAR_SEND_SIZE is a multiple of BLCKSZ.
AFAICT we support block sizes of 1, 2, 4, 8, 16 and 32 KB, while
TAR_SEND_SIZE is set to 32 KB, so this should be fine unless somebody
mucks around with BLCKSZ manually, in which case the Assert should fire.
I compiled --with-blocksize=32 and checked that this still works as
intended.
5. I also check that the buffer we read is a multiple of BLCKSZ. If that
is not the case I emit a WARNING that the checksum cannot be checked and
pg_basebackup will exit with 1 as well.
This is how it looks right now from pg_basebackup's POV:
postgres@fock:~$ initdb -k --pgdata=data1 1> /dev/null
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
postgres@fock:~$ pg_ctl --pgdata=data1 --log=pg1.log start
waiting for server to start.... done
server started
postgres@fock:~$ psql -h /tmp -c "SELECT pg_relation_filepath('pg_class')"
pg_relation_filepath
----------------------
base/12367/1259
(1 row)
postgres@fock:~$ echo -n "Bang!" | dd conv=notrunc oflag=seek_bytes seek=4000 bs=9 count=1 of=data1/base/12367/1259
0+1 records in
0+1 records out
5 bytes copied, 3.7487e-05 s, 133 kB/s
postgres@fock:~$ pg_basebackup --pgdata=data2 -h /tmp
WARNING: checksum mismatch in file "./base/12367/1259", segment 0, block 0: expected CC05, found CA4D
pg_basebackup: checksum error occurred
postgres@fock:~$ echo $?
1
postgres@fock:~$ ls data2
backup_label pg_dynshmem pg_multixact pg_snapshots pg_tblspc pg_xact
base pg_hba.conf pg_notify pg_stat pg_twophase postgresql.auto.conf
global pg_ident.conf pg_replslot pg_stat_tmp PG_VERSION postgresql.conf
pg_commit_ts pg_logical pg_serial pg_subtrans pg_wal
postgres@fock:~$
Possibly open questions:
1. I have not so far changed the replication protocol to make verifying
checksums optional. I can go about that next if the consensus is that we
need such an option (and cannot just check it every time)?
2. The isolation tester test uses dd (similar to the above), is that
allowed, or do I have to come up with some internal Perl thing that also
works on Windows?
3. I am using basename() to get the filename, I haven't seen that used a
lot in the codebase (nor did I find an obvious internal implementation),
is that fine?
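On the basename() question: one portability concern is that POSIX basename(3) may modify its argument (hence the pstrdup) and behaves differently across platforms. A dependency-free alternative, shown here only as a sketch and not as what the patch does, is to take everything after the last slash:

```c
#include <string.h>

/* Return a pointer to the last path component without modifying or
 * copying the input, sidestepping basename(3)'s platform quirks. */
static const char *last_component(const char *path)
{
    const char *slash = strrchr(path, '/');
    return slash ? slash + 1 : path;
}
```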
Cheers,
Michael
--
Michael Banck
Projektleiter / Senior Berater
Tel.: +49 2166 9901-171
Fax: +49 2166 9901-100
Email: michael.banck@credativ.de
credativ GmbH, HRB Mönchengladbach 12080
USt-ID-Nummer: DE204566209
Trompeterallee 108, 41189 Mönchengladbach
Geschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer
Attachments:
basebackup-verify-checksum-V2.patch (text/x-patch; charset=UTF-8, +158/-7)
Hi,
On Fri, Mar 09, 2018 at 10:35:33PM +0100, Michael Banck wrote:
Possibly open questions:
1. I have not so far changed the replication protocol to make verifying
checksums optional. I can go about that next if the consensus is that we
need such an option (and cannot just check it every time)?
I think most people (including those I had off-list discussions about
this with) were of the opinion that such an option should be there, so I
added an additional option NOVERIFY_CHECKSUMS to the BASE_BACKUP
replication command and also an option -k / --no-verify-checksums to
pg_basebackup to trigger this.
Updated patch attached.
Michael
--
Michael Banck
Projektleiter / Senior Berater
Tel.: +49 2166 9901-171
Fax: +49 2166 9901-100
Email: michael.banck@credativ.de
credativ GmbH, HRB Mönchengladbach 12080
USt-ID-Nummer: DE204566209
Trompeterallee 108, 41189 Mönchengladbach
Geschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer
Attachments:
basebackup-verify-checksum-V3.patch (text/x-diff; charset=us-ascii, +222/-12)
On Sat, Mar 17, 2018 at 10:34 PM, Michael Banck <michael.banck@credativ.de>
wrote:
Hi,
On Fri, Mar 09, 2018 at 10:35:33PM +0100, Michael Banck wrote:
Possibly open questions:
1. I have not so far changed the replication protocol to make verifying
checksums optional. I can go about that next if the consensus is that we
need such an option (and cannot just check it every time)?
I think most people (including those I had off-list discussions about
this with) were of the opinion that such an option should be there, so I
added an additional option NOVERIFY_CHECKSUMS to the BASE_BACKUP
replication command and also an option -k / --no-verify-checksums to
pg_basebackup to trigger this.
Updated patch attached.
Notes:
+ if (checksum_failure == true)
Really, just if(checksum_failure)
+ errmsg("checksum mismatch during basebackup")));
Should be "base backup" in messages.
+static const char *skip[] = {
I think that needs a much better name than just "skip". Skip for what? In
particular since we are just skipping it for checksums, and not for the
actual basebackup, that name is actively misinforming.
+ filename = basename(pstrdup(readfilename));
+ if (!noverify_checksums && DataChecksumsEnabled() &&
+ !skipfile(filename) &&
+ (strncmp(readfilename, "./global/", 9) == 0 ||
+ strncmp(readfilename, "./base/", 7) == 0 ||
+ strncmp(readfilename, "/", 1) == 0))
+ verify_checksum = true;
I would include the checks for global, base etc into the skipfile()
function as well (also renamed).
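A predicate folded together along those lines might look like the rough C sketch below; the exclusion list and the `is_checksummed_file()` name are illustrative stand-ins, not the patch's actual code:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * Hypothetical list of files that live alongside relation files but
 * carry no data checksums; entries are illustrative only.
 */
static const char *noChecksumFiles[] = {
    "pg_control", "pg_filenode.map", "pg_internal.init", "PG_VERSION", NULL
};

/*
 * Sketch: decide whether a path names a checksummed relation file,
 * folding the global/base/tablespace directory-prefix checks into the
 * same predicate as suggested in the review.
 */
static bool
is_checksummed_file(const char *path, const char *filename)
{
    /* Only relation files under these directories carry checksums. */
    if (strncmp(path, "./global/", 9) != 0 &&
        strncmp(path, "./base/", 7) != 0 &&
        strncmp(path, "/", 1) != 0)
        return false;

    for (const char **f = noChecksumFiles; *f != NULL; f++)
        if (strcmp(filename, *f) == 0)
            return false;

    return true;
}
```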
+ * Only check pages which have not been modified since the
+ * start of the base backup.
I think this needs a description of why, as well (without having read this
thread, this is a pretty subtle case).
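For reference, the "why" is roughly this: a page whose LSN is newer than the backup's starting WAL position may be caught mid-write (torn) when read, so its checksum cannot be trusted, and WAL replay will restore it during recovery anyway. A simplified, self-contained sketch of the rule, where the types are stand-ins for PostgreSQL's XLogRecPtr and PageHeaderData:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;

/* Simplified stand-in for the pd_lsn field at the start of every page. */
typedef struct PageHeader
{
    uint32_t pd_lsn_hi;         /* high bits of page LSN */
    uint32_t pd_lsn_lo;         /* low bits of page LSN */
} PageHeader;

static XLogRecPtr
page_lsn(const PageHeader *p)
{
    return ((XLogRecPtr) p->pd_lsn_hi << 32) | p->pd_lsn_lo;
}

/* Verify only pages that were not touched after the backup started. */
static bool
should_verify(const PageHeader *p, XLogRecPtr backup_start_lsn)
{
    return page_lsn(p) < backup_start_lsn;
}
```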
+system_or_bail 'dd', 'conv=notrunc', 'oflag=seek_bytes', 'seek=4000',
'bs=9', 'count=1', 'if=/dev/zero', "of=$pgdata/$pg_class";
This part of the test will surely fail on Windows, not having a /dev/zero.
Can we easily implement this part natively in perl perhaps? Somebody who
knows more about which functionality is OK to use within this system can
perhaps comment?
Most of that stuff is trivial and can be cleaned up at commit time. Do you
want to send an updated patch with a few of those fixes, or should I clean
it?
The test thing is a stopper until we figure that one out though. And while
at it -- it seems we don't have any tests for the checksum feature in
general. It would probably make sense to consider that at the same time as
figuring out the right way to do this one.
--
Magnus Hagander
Me: https://www.hagander.net/
Work: https://www.redpill-linpro.com/
Hi Michael,
On 3/17/18 5:34 PM, Michael Banck wrote:
On Fri, Mar 09, 2018 at 10:35:33PM +0100, Michael Banck wrote:
I think most people (including those I had off-list discussions about
this with) were of the opinion that such an option should be there, so I
added an additional option NOVERIFY_CHECKSUMS to the BASE_BACKUP
replication command and also an option -k / --no-verify-checksums to
pg_basebackup to trigger this.
Updated patch attached.
+ memcpy(page, (buf + BLCKSZ * i), BLCKSZ);
Why make a copy here? How about:
char *page = buf + BLCKSZ * i
I know pg_checksum_page manipulates the checksum field but I have found
it to be safe.
+ if (phdr->pd_checksum != checksum)
I've attached a patch that adds basic retry functionality. It's not
terribly efficient since it rereads the entire buffer for any block
error. A better way is to keep a bitmap for each block in the buffer,
then on retry compare bitmaps. If the block is still bad, report it.
If the block was corrected, move on. If a block was good before but is
bad on retry, it can be ignored.
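The bitmap idea could be sketched roughly as follows; the struct, the buffer size, and the function name are hypothetical, not from the attached patch:

```c
#include <assert.h>
#include <stdbool.h>

#define BLOCKS_PER_BUF 64       /* illustrative buffer size in blocks */

/* One flag per block in the read buffer: did its checksum fail? */
typedef struct FailMap
{
    bool failed[BLOCKS_PER_BUF];
} FailMap;

/*
 * Compare the two passes. A block is a confirmed failure only if it was
 * bad both times; a block that was bad first but good on the reread was
 * corrected, and a block good first but bad on the retry was presumably
 * being written concurrently, so both are ignored.
 */
static int
confirmed_failures(const FailMap *first, const FailMap *second,
                   bool confirmed[BLOCKS_PER_BUF])
{
    int nfailed = 0;

    for (int i = 0; i < BLOCKS_PER_BUF; i++)
    {
        confirmed[i] = first->failed[i] && second->failed[i];
        if (confirmed[i])
            nfailed++;
    }
    return nfailed;
}
```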
+ ereport(WARNING,
+ (errmsg("checksum verification failed in file "
I'm worried about how verbose this warning could be since there are
131,072 blocks per segment. It's unlikely to have that many block
errors, but users do sometimes put files in PGDATA which look like they
should be validated. Since these warnings all go to the server log it
could get pretty bad.
We should either stop warning after the first failure, or aggregate the
failures for a file into a single message.
Some tests with multi-block errors should be added to test these scenarios.
Thanks,
--
-David
david@pgmasters.net
Attachments:
reread.patch (text/plain, +26 -2)
Hi Magnus,
thanks a lot for looking at my patch!
Am Donnerstag, den 22.03.2018, 15:07 +0100 schrieb Magnus Hagander:
On Sat, Mar 17, 2018 at 10:34 PM, Michael Banck <michael.banck@credativ.de> wrote:
On Fri, Mar 09, 2018 at 10:35:33PM +0100, Michael Banck wrote:
Possibly open questions:
1. I have not so far changed the replication protocol to make verifying
checksums optional. I can go about that next if the consensus is that we
need such an option (and cannot just check it every time)?
I think most people (including those I had off-list discussions about
this with) were of the opinion that such an option should be there, so I
added an additional option NOVERIFY_CHECKSUMS to the BASE_BACKUP
replication command and also an option -k / --no-verify-checksums to
pg_basebackup to trigger this.
Updated patch attached.
Notes:
+ if (checksum_failure == true)
Really, just if(checksum_failure)
+ errmsg("checksum mismatch during basebackup")));
Should be "base backup" in messages.
I've changed both.
+static const char *skip[] = {
I think that needs a much better name than just "skip". Skip for what?
In particular since we are just skipping it for checksums, and not for
the actual basebackup, that name is actively misinforming.
I have copied that verbatim from the online checksum patch, but of
course this is in src/backend/replication and not src/bin so warrants
more scrutiny. If you plan to commit both for v11, it might make sense
to have that separated out to a more central place?
But I guess what we mean is a test for "is a heap file". Do you have a
good suggestion where it should end up so that pg_verify_checksums can
use it as well?
In the meantime, I've changed the skip[] array to no_heap_files[] and
the skipfile() function to is_heap_file(), also reversing the logic. If
it helps pg_verify_checksums, we could make is_not_a_heap_file()
instead.
+ filename = basename(pstrdup(readfilename));
+ if (!noverify_checksums && DataChecksumsEnabled() &&
+ !skipfile(filename) &&
+ (strncmp(readfilename, "./global/", 9) == 0 ||
+ strncmp(readfilename, "./base/", 7) == 0 ||
+ strncmp(readfilename, "/", 1) == 0))
+ verify_checksum = true;
I would include the checks for global, base etc into the skipfile()
function as well (also renamed).
Check. I had to change the way (the previous) skipfile() works a bit,
because it was expecting a filename as argument, while we check
pathnames in the above.
+ * Only check pages which have not been modified since the
+ * start of the base backup.
I think this needs a description of why, as well (without having read
this thread, this is a pretty subtle case).
I tried to expand on this some more.
+system_or_bail 'dd', 'conv=notrunc', 'oflag=seek_bytes', 'seek=4000', 'bs=9', 'count=1', 'if=/dev/zero', "of=$pgdata/$pg_class";
This part of the test will surely fail on Windows, not having a
/dev/zero. Can we easily implement this part natively in perl perhaps?
Right, this was one of the open questions. I now came up with a perl 4-
liner that seems to do the trick, but I can't test it on Windows.
Most of that stuff is trivial and can be cleaned up at commit time. Do
you want to send an updated patch with a few of those fixes, or should
I clean it?
I've attached a new patch, but I have not addressed the question whether
skipfile()/is_heap_file() should be moved somewhere else yet.
I found one more cosmetic issue: if there is an external tablespace, and
pg_basebackup encounters corruption, you would get the message
"pg_basebackup: changes to tablespace directories will not be undone"
from cleanup_directories_atexit(), which I now also suppress in case of
checksum failures.
The test thing is a stopper until we figure that one out though. And
while at it -- it seems we don't have any tests for the checksum
feature in general. It would probably make sense to consider that at
the same time as figuring out the right way to do this one.
I don't want to deflect work, but it seems to me the online checksums
patch would be in a better position to generally test checksums while
it's at it. Or did you mean something related to pg_basebackup?
Michael
--
Michael Banck
Projektleiter / Senior Berater
Tel.: +49 2166 9901-171
Fax: +49 2166 9901-100
Email: michael.banck@credativ.de
credativ GmbH, HRB Mönchengladbach 12080
USt-ID-Nummer: DE204566209
Trompeterallee 108, 41189 Mönchengladbach
Geschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer
Attachments:
basebackup-verify-checksum-V4.patch (text/x-patch, +239 -14)
Hi David,
thanks for the review!
Am Donnerstag, den 22.03.2018, 12:22 -0400 schrieb David Steele:
On 3/17/18 5:34 PM, Michael Banck wrote:
On Fri, Mar 09, 2018 at 10:35:33PM +0100, Michael Banck wrote:
I think most people (including those I had off-list discussions about
this with) were of the opinion that such an option should be there, so I
added an additional option NOVERIFY_CHECKSUMS to the BASE_BACKUP
replication command and also an option -k / --no-verify-checksums to
pg_basebackup to trigger this.
Updated patch attached.
+ memcpy(page, (buf + BLCKSZ * i), BLCKSZ);
Why make a copy here? How about:
char *page = buf + BLCKSZ * i
Right, ok.
I know pg_checksum_page manipulates the checksum field but I have found
it to be safe.
+ if (phdr->pd_checksum != checksum)
I've attached a patch that adds basic retry functionality. It's not
terribly efficient since it rereads the entire buffer for any block
error. A better way is to keep a bitmap for each block in the buffer,
then on retry compare bitmaps. If the block is still bad, report it.
If the block was corrected, move on. If a block was good before but is
bad on retry, it can be ignored.
I have to admit I find it a bit convoluted and non-obvious on first
reading, but I'll try to check it out some more.
I think it would be very useful if we could come up with a testcase
showing this problem, but I guess this will be quite hard to hit
reproducibly, right?
+ ereport(WARNING,
+ (errmsg("checksum verification failed in file "
I'm worried about how verbose this warning could be since there are
131,072 blocks per segment. It's unlikely to have that many block
errors, but users do sometimes put files in PGDATA which look like they
should be validated. Since these warnings all go to the server log it
could get pretty bad.
We only verify checksums of files in tablespaces, and I don't think
dropping random files in those is supported in any way, as opposed to
maybe the top-level PGDATA directory. So I would say that this is not a
real concern.
We should either stop warning after the first failure, or aggregate the
failures for a file into a single message.
I agree that major corruption could make the whole output blow up, but I
would prefer to keep this feature simple for now, which implies possibly
printing out a lot of WARNINGs, or maybe just stopping after the first
one (or first few, dunno).
Michael
--
Michael Banck
Projektleiter / Senior Berater
Tel.: +49 2166 9901-171
Fax: +49 2166 9901-100
Email: michael.banck@credativ.de
credativ GmbH, HRB Mönchengladbach 12080
USt-ID-Nummer: DE204566209
Trompeterallee 108, 41189 Mönchengladbach
Geschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer
Hi Michael,
On 3/23/18 5:36 AM, Michael Banck wrote:
Am Donnerstag, den 22.03.2018, 12:22 -0400 schrieb David Steele:
+ if (phdr->pd_checksum != checksum)
I've attached a patch that adds basic retry functionality. It's not
terribly efficient since it rereads the entire buffer for any block
error. A better way is to keep a bitmap for each block in the buffer,
then on retry compare bitmaps. If the block is still bad, report it.
If the block was corrected, move on. If a block was good before but is
bad on retry, it can be ignored.
I have to admit I find it a bit convoluted and non-obvious on first
reading, but I'll try to check it out some more.
Yeah, I think I was influenced too much by how pgBackRest does things,
which doesn't work as well here. Attached is a simpler version.
I think it would be very useful if we could come up with a testcase
showing this problem, but I guess this will be quite hard to hit
reproducibly, right?
This was brought up by Robert in [1] when discussing validating
checksums during backup. I don't know of any way to reproduce this
issue but it seems perfectly possible, if highly unlikely.
+ ereport(WARNING,
+ (errmsg("checksum verification failed in file "
I'm worried about how verbose this warning could be since there are
131,072 blocks per segment. It's unlikely to have that many block
errors, but users do sometimes put files in PGDATA which look like they
should be validated. Since these warnings all go to the server log it
could get pretty bad.
We only verify checksums of files in tablespaces, and I don't think
dropping random files in those is supported in any way, as opposed to
maybe the top-level PGDATA directory. So I would say that this is not a
real concern.
Perhaps, but a very corrupt file is still going to spew lots of warnings
into the server log.
We should either stop warning after the first failure, or aggregate the
failures for a file into a single message.
I agree that major corruption could make the whole output blow up, but I
would prefer to keep this feature simple for now, which implies possibly
printing out a lot of WARNINGs, or maybe just stopping after the first
one (or first few, dunno).
In my experience actual block errors are relatively rare, so there
aren't likely to be more than a few in a file. More common are
overwritten or transposed files, rogue files, etc. These produce a lot
of output.
Maybe stop after five?
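A cap like that could be sketched as below; the function name and the use of stderr in place of ereport(WARNING, ...) are placeholders for illustration only:

```c
#include <assert.h>
#include <stdio.h>

#define MAX_WARNINGS_PER_FILE 5 /* suggested cap; value is illustrative */

/*
 * Report a block checksum failure, but emit at most five individual
 * warnings per file, followed by one notice that further failures in
 * this file are suppressed. Returns the updated per-file failure count.
 */
static int
report_block_failure(const char *file, int blkno, int failures_so_far)
{
    failures_so_far++;

    if (failures_so_far <= MAX_WARNINGS_PER_FILE)
        fprintf(stderr,
                "WARNING: checksum verification failed in file \"%s\", block %d\n",
                file, blkno);
    else if (failures_so_far == MAX_WARNINGS_PER_FILE + 1)
        fprintf(stderr,
                "WARNING: further checksum failures in file \"%s\" will not be reported\n",
                file);

    return failures_so_far;
}
```

At the end of the file a single summary line with the total failure count could then be emitted, which also covers David's aggregation suggestion.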
Regards,
--
-David
david@pgmasters.net
[1]: /messages/by-id/CA+TgmobHd+-yVJHofSWg=g+=A3EiCN2wsAiEyj7dj1hhevNq9Q@mail.gmail.com