silent data loss with ext4 / all current versions
Hi,
I did some power failure tests (i.e. unexpectedly interrupting power) a
few days ago, and I've discovered a fairly serious case of silent data
loss on ext3/ext4. Initially I thought it was a filesystem bug, but after
further investigation I'm pretty sure it's our fault.
What happens is that when we recycle WAL segments, we rename them and
then sync them using fdatasync (which is the default on Linux). However,
fdatasync does not force an fsync of the parent directory, so in case of
power failure the rename may get lost. The recovery then won't realize
those segments actually contain changes from the "future" and thus does
not replay them. Hence data loss. The recovery completes as if everything
went OK, so the data loss is entirely silent.
Reproducing this is rather trivial. I've prepared a simple C program
simulating our WAL recycling, which I originally intended to send to the
ext4 mailing list to demonstrate the supposed ext4 bug (before I realized
it's most likely our bug and not theirs).
The example program is called ext4-data-loss.c and is available here
(along with other stuff mentioned in this message):
https://github.com/2ndQuadrant/ext4-data-loss
Compile it, run it (over ssh from another host), interrupt the power, and
after restart you should see some of the segments get lost (the rename
reverted).
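The gist of what the program does is a sequence like this (a simplified
sketch of the idea, not the actual ext4-data-loss.c; the file names and
sizes here are made up):

/*
 * Simplified sketch of the unsafe recycling sequence: the segment data
 * is synced with fdatasync, but the parent directory is never fsynced,
 * so the rename() itself may not survive a power failure.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	char		buf[8192];
	int			fd;

	memset(buf, 'x', sizeof(buf));

	/* create the "old" segment and make its contents durable */
	fd = open("segment.old", O_CREAT | O_TRUNC | O_WRONLY, 0600);
	if (fd < 0 || write(fd, buf, sizeof(buf)) != (ssize_t) sizeof(buf))
	{
		perror("write");
		exit(1);
	}
	fdatasync(fd);
	close(fd);

	/* recycle it under a new name - atomic, but not durable */
	if (rename("segment.old", "segment.new") != 0)
	{
		perror("rename");
		exit(1);
	}

	/* fdatasync the renamed file, but NOT the parent directory */
	fd = open("segment.new", O_WRONLY);
	fdatasync(fd);
	close(fd);

	/* pull the power here: "segment.new" may revert to "segment.old" */
	return 0;
}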
The git repo also contains a bunch of Python scripts that I initially
used to reproduce this on PostgreSQL - insert.py, update.py and
xlog-watch.py. I'm not going to explain the details here; it's a bit more
complicated, but the cause is exactly the same as with the C program,
just demonstrated in the database. See the README for instructions.
So, what's going on? The problem is that while rename() is atomic, it's
not guaranteed to be durable without an explicit fsync on the parent
directory. And by default we only do fdatasync on the recycled segments,
which may not force an fsync of the directory (and ext4 apparently does
not do that).
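The syscall-level fix is conceptually simple: after the rename(), open
the parent directory and fsync() it too. A minimal sketch of the pattern
(the helper name is just for illustration, error handling kept brief):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* rename oldpath to newpath and make the rename itself durable */
static int
rename_durable(const char *oldpath, const char *newpath, const char *dirpath)
{
	int			dirfd;

	if (rename(oldpath, newpath) != 0)
		return -1;

	/* the missing step: fsync the directory that holds the entries */
	dirfd = open(dirpath, O_RDONLY);
	if (dirfd < 0)
		return -1;
	if (fsync(dirfd) != 0)
	{
		close(dirfd);
		return -1;
	}
	return close(dirfd);
}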
This impacts all current kernels (tested on 2.6.32.68, 4.0.5 and
4.4-rc1), and also all supported PostgreSQL versions (tested on 9.1.19,
but I believe all versions since spread checkpoints were introduced are
vulnerable).
FWIW this has nothing to do with storage reliability - you may have good
drives, a RAID controller with BBU, reliable SSDs or whatever, and you're
still not safe. This issue is at the filesystem level, not the storage level.
I've done the same tests on xfs and that seems to be safe - I've been
unable to reproduce the issue, so either the issue is not there or it's
more difficult to hit. I haven't tried other file systems, because ext4
and xfs cover the vast majority of deployments (at least on Linux), and
an issue on ext4 alone is serious enough, I believe.
It's possible to make ext3/ext4 safe with respect to this issue by using
full journaling (data=journal) instead of the default (data=ordered)
mode. However, this comes at a significant performance cost, and pretty
much no one uses it with PostgreSQL because data=ordered is believed to
be safe.
It's also possible to mitigate this by setting wal_sync_method=fsync,
but I don't think I've ever seen that changed at a customer site. This
also comes with a significant performance penalty, comparable to
data=journal. It does have the advantage that it can be applied without
restarting the database (SIGHUP is enough).
So pretty much everyone running on Linux + ext3/ext4 is vulnerable.
It's also worth mentioning that the data is not actually lost - it's
properly fsynced in the WAL segments; it's just the rename that got lost.
So it's possible to survive this without losing data by manually renaming
the segments, but that must happen before starting the cluster, because
otherwise the automatic recovery kicks in and discards the data.
I think this issue might also result in various other problems, not just
data loss. For example, I wouldn't be surprised by data corruption due to
flushing some of the changes in data files to disk (due to contention for
shared buffers and reaching vm.dirty_bytes) and then losing the matching
WAL segment. Also, while I have only seen 1 to 3 segments getting lost,
it's possible that more segments could get lost, potentially making the
recovery impossible. And of course, this might cause problems with WAL
archiving due to archiving the same segment twice (before and after the
crash).
Attached is a proposed fix for this (xlog-fsync.patch), and I'm pretty
sure this needs to be backpatched to all backbranches. I've also
attached a patch that adds pg_current_xlog_flush_location() function,
which proved to be quite useful when debugging this issue.
I'd also like to propose adding a "last segment" field to pg_controldata,
next to the last checkpoint / restartpoint. We don't need to write this on
every commit; once per segment (on the first write) is enough. This would
make investigating the issue much easier, and it'd also make it possible
to terminate the recovery with an error if the last segment found does not
match the expectation (instead of just assuming we've found all segments,
leading to data loss).
Another useful change would be to allow pg_xlogdump to print segments
even if the contents do not match the filename. Currently it's impossible
to even look at the contents in that case, so renaming the existing
segments is mostly guesswork (find segments where pg_xlogdump fails, try
renaming them to the next segments).
And finally, I've done a quick review of all places that might suffer
the same issue - some are not really interesting as the stuff is
ephemeral anyway (like pgstat for example), but there are ~15 places
that may need this fix:
* src/backend/access/transam/timeline.c (2 matches)
* src/backend/access/transam/xlog.c (9 matches)
* src/backend/access/transam/xlogarchive.c (3 matches)
* src/backend/postmaster/pgarch.c (1 match)
Some of these places might actually be safe because an fsync happens
somewhere immediately after the rename (e.g. in a caller), but I guess
better safe than sorry.
I plan to do more power failure testing soon, with more complex test
scenarios. I suspect there might be other similar issues (e.g. when we
rename a file before a checkpoint and don't fsync the directory - then
the rename won't be replayed and will be lost).
regards
--
Tomas Vondra http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachments:
xlog-fsync.patch (text/x-diff)
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index f17f834..b47c852 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -3282,6 +3282,8 @@ InstallXLogFileSegment(XLogSegNo *segno, char *tmppath,
 	}
 #endif
 
+	fsync_fname("pg_xlog", true);
+
 	if (use_lock)
 		LWLockRelease(ControlFileLock);
xlog-flush.patch (text/x-diff)
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index f17f834..dec7721 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -10626,6 +10626,19 @@ GetXLogWriteRecPtr(void)
 }
 
 /*
+ * Get latest WAL flush pointer
+ */
+XLogRecPtr
+GetXLogFlushRecPtr(void)
+{
+	SpinLockAcquire(&XLogCtl->info_lck);
+	LogwrtResult = XLogCtl->LogwrtResult;
+	SpinLockRelease(&XLogCtl->info_lck);
+
+	return LogwrtResult.Flush;
+}
+
+/*
  * Returns the redo pointer of the last checkpoint or restartpoint. This is
  * the oldest point in WAL that we still need, if we have to restart recovery.
  */
diff --git a/src/backend/access/transam/xlogfuncs.c b/src/backend/access/transam/xlogfuncs.c
index 329bb8c..35c581d 100644
--- a/src/backend/access/transam/xlogfuncs.c
+++ b/src/backend/access/transam/xlogfuncs.c
@@ -216,6 +216,27 @@ pg_current_xlog_insert_location(PG_FUNCTION_ARGS)
 }
 
 /*
+ * Report the current WAL flush location (same format as pg_start_backup etc)
+ *
+ * This function is mostly for debugging purposes.
+ */
+Datum
+pg_current_xlog_flush_location(PG_FUNCTION_ARGS)
+{
+	XLogRecPtr	current_recptr;
+
+	if (RecoveryInProgress())
+		ereport(ERROR,
+				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				 errmsg("recovery is in progress"),
+				 errhint("WAL control functions cannot be executed during recovery.")));
+
+	current_recptr = GetXLogFlushRecPtr();
+
+	PG_RETURN_LSN(current_recptr);
+}
+
+/*
  * Report the last WAL receive location (same format as pg_start_backup etc)
  *
  * This is useful for determining how much of WAL is guaranteed to be received
diff --git a/src/include/access/xlog.h b/src/include/access/xlog.h
index 790ca66..985291d 100644
--- a/src/include/access/xlog.h
+++ b/src/include/access/xlog.h
@@ -235,6 +235,7 @@ extern void GetXLogReceiptTime(TimestampTz *rtime, bool *fromStream);
extern XLogRecPtr GetXLogReplayRecPtr(TimeLineID *replayTLI);
extern XLogRecPtr GetXLogInsertRecPtr(void);
extern XLogRecPtr GetXLogWriteRecPtr(void);
+extern XLogRecPtr GetXLogFlushRecPtr(void);
extern bool RecoveryIsPaused(void);
extern void SetRecoveryPause(bool recoveryPause);
extern TimestampTz GetLatestXTime(void);
diff --git a/src/include/access/xlog_fn.h b/src/include/access/xlog_fn.h
index 3ebe966..f4575d7 100644
--- a/src/include/access/xlog_fn.h
+++ b/src/include/access/xlog_fn.h
@@ -19,6 +19,7 @@ extern Datum pg_switch_xlog(PG_FUNCTION_ARGS);
extern Datum pg_create_restore_point(PG_FUNCTION_ARGS);
extern Datum pg_current_xlog_location(PG_FUNCTION_ARGS);
extern Datum pg_current_xlog_insert_location(PG_FUNCTION_ARGS);
+extern Datum pg_current_xlog_flush_location(PG_FUNCTION_ARGS);
extern Datum pg_last_xlog_receive_location(PG_FUNCTION_ARGS);
extern Datum pg_last_xlog_replay_location(PG_FUNCTION_ARGS);
extern Datum pg_last_xact_replay_timestamp(PG_FUNCTION_ARGS);
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index d8640db..ca8fcd4 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -3154,6 +3154,8 @@ DATA(insert OID = 2849 ( pg_current_xlog_location PGNSP PGUID 12 1 0 0 0 f f f f
DESCR("current xlog write location");
DATA(insert OID = 2852 ( pg_current_xlog_insert_location PGNSP PGUID 12 1 0 0 0 f f f f t f v s 0 0 3220 "" _null_ _null_ _null_ _null_ _null_ pg_current_xlog_insert_location _null_ _null_ _null_ ));
DESCR("current xlog insert location");
+DATA(insert OID = 3330 ( pg_current_xlog_flush_location PGNSP PGUID 12 1 0 0 0 f f f f t f v s 0 0 3220 "" _null_ _null_ _null_ _null_ _null_ pg_current_xlog_flush_location _null_ _null_ _null_ ));
+DESCR("current xlog flush location");
DATA(insert OID = 2850 ( pg_xlogfile_name_offset PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 2249 "3220" "{3220,25,23}" "{i,o,o}" "{wal_location,file_name,file_offset}" _null_ _null_ pg_xlogfile_name_offset _null_ _null_ _null_ ));
DESCR("xlog filename and byte offset, given an xlog location");
DATA(insert OID = 2851 ( pg_xlogfile_name PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 25 "3220" _null_ _null_ _null_ _null_ _null_ pg_xlogfile_name _null_ _null_ _null_ ));
What happens is that when we recycle WAL segments, we rename them and then sync
them using fdatasync (which is the default on Linux). However fdatasync does not
force fsync on the parent directory, so in case of power failure the rename may
get lost. The recovery won't realize those segments actually contain changes
Agreed. Some time ago I ran into this myself, although not with Postgres.
So, what's going on? The problem is that while the rename() is atomic, it's not
guaranteed to be durable without an explicit fsync on the parent directory. And
by default we only do fdatasync on the recycled segments, which may not force
fsync on the directory (and ext4 does not do that, apparently).
This impacts all current kernels (tested on 2.6.32.68, 4.0.5 and 4.4-rc1), and
also all supported PostgreSQL versions (tested on 9.1.19, but I believe all
versions since spread checkpoints were introduced are vulnerable).
FWIW this has nothing to do with storage reliability - you may have good drives,
RAID controller with BBU, reliable SSDs or whatever, and you're still not safe.
This issue is at the filesystem level, not storage.
Agreed again.
I plan to do more power failure testing soon, with more complex test scenarios.
I suspect there might be other similar issues (e.g. when we rename a file before
a checkpoint and don't fsync the directory - then the rename won't be replayed
and will be lost).
It would be very useful, but I hope you will not find a new bug :)
--
Teodor Sigaev E-mail: teodor@sigaev.ru
WWW: http://www.sigaev.ru/
On Fri, Nov 27, 2015 at 8:17 PM, Tomas Vondra
<tomas.vondra@2ndquadrant.com> wrote:
So, what's going on? The problem is that while the rename() is atomic, it's
not guaranteed to be durable without an explicit fsync on the parent
directory. And by default we only do fdatasync on the recycled segments,
which may not force fsync on the directory (and ext4 does not do that,
apparently).
Yeah, that seems to be the way the POSIX spec settles this.
"If _POSIX_SYNCHRONIZED_IO is defined, the fsync() function shall
force all currently queued I/O operations associated with the file
indicated by file descriptor fildes to the synchronized I/O completion
state. All I/O operations shall be completed as defined for
synchronized I/O file integrity completion."
http://pubs.opengroup.org/onlinepubs/009695399/functions/fsync.html
If I understand that right, it is guaranteed that the rename() will be
atomic, meaning that there will be only one file even if there is a
crash, but that we need to fsync() the parent directory as mentioned.
FWIW this has nothing to do with storage reliability - you may have good
drives, RAID controller with BBU, reliable SSDs or whatever, and you're
still not safe. This issue is at the filesystem level, not storage.
The POSIX spec authorizes this behavior, so the FS is not to blame,
clearly. At least that's what I get from it.
I've done the same tests on xfs and that seems to be safe - I've been unable
to reproduce the issue, so either the issue is not there or it's more
difficult to hit it. I haven't tried on other file systems, because ext4 and
xfs cover vast majority of deployments (at least on Linux), and thus issue
on ext4 is serious enough I believe.
So pretty much everyone running on Linux + ext3/ext4 is vulnerable.
It's also worth mentioning that the data is not actually lost - it's
properly fsynced in the WAL segments, it's just the rename that got lost. So
it's possible to survive this without losing data by manually renaming the
segments, but this must happen before starting the cluster because the
automatic recovery comes and discards all the data etc.
Hm. Most users are not going to notice that, particularly where things
are embedded.
I think this issue might also result in various other issues, not just data
loss. For example, I wouldn't be surprised by data corruption due to
flushing some of the changes in data files to disk (due to contention for
shared buffers and reaching vm.dirty_bytes) and then losing the matching WAL
segment. Also, while I have only seen 1 to 3 segments getting lost, it might
be possible that more segments can get lost, possibly making the recovery
impossible. And of course, this might cause problems with WAL archiving due
to archiving the same
segment twice (before and after crash).
Possible; the switch to .done happens after renaming the segment in
xlogarchive.c. So this could happen in theory.
Attached is a proposed fix for this (xlog-fsync.patch), and I'm pretty sure
this needs to be backpatched to all backbranches. I've also attached a patch
that adds pg_current_xlog_flush_location() function, which proved to be
quite useful when debugging this issue.
Agreed. We should also make sure that the calls to fsync_fname get
issued inside a critical section with START/END_CRIT_SECTION(). That
does not seem to be the case with your patch.
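For illustration, the pattern being suggested would look roughly like
this at the call site in xlog.c (a sketch only, not the actual patch;
inside a critical section any error raised by the fsync escalates to a
PANIC instead of leaving the rename only half-durable):

	/* make the rename of the recycled segment durable */
	START_CRIT_SECTION();
	fsync_fname("pg_xlog", true);	/* true = this is a directory */
	END_CRIT_SECTION();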
And finally, I've done a quick review of all places that might suffer the
same issue - some are not really interesting as the stuff is ephemeral
anyway (like pgstat for example), but there are ~15 places that may need
this fix:
* src/backend/access/transam/timeline.c (2 matches)
* src/backend/access/transam/xlog.c (9 matches)
* src/backend/access/transam/xlogarchive.c (3 matches)
* src/backend/postmaster/pgarch.c (1 match)
Some of these places might be actually safe because a fsync happens
somewhere immediately after the rename (e.g. in a caller), but I guess
better safe than sorry.
I had a quick look at those code paths and indeed it would be safer to
make sure that once rename() is called we issue those fsync calls.
I plan to do more power failure testing soon, with more complex test
scenarios. I suspect there might be other similar issues (e.g. when we
rename a file before a checkpoint and don't fsync the directory - then the
rename won't be replayed and will be lost).
That would be great.
--
Michael
On Fri, Nov 27, 2015 at 11:17 AM, Tomas Vondra
<tomas.vondra@2ndquadrant.com> wrote:
I plan to do more power failure testing soon, with more complex test
scenarios. I suspect there might be other similar issues (e.g. when we
rename a file before a checkpoint and don't fsync the directory - then the
rename won't be replayed and will be lost).
I'm curious how you're doing this testing. The easiest way I can think
of would be to run a database on an LVM volume and take a large number
of LVM snapshots very rapidly and then see if the database can start
up from each snapshot. Bonus points for keeping track of the committed
transactions before each snapshot and ensuring they're still there, I
guess.
That always seemed unsatisfactory because in the past we were mainly
concerned with whether fsync was actually getting propagated to the
physical media. But for testing whether we're fsyncing enough for the
filesystem that would be good enough.
--
greg
Hi,
On 11/27/2015 02:28 PM, Greg Stark wrote:
On Fri, Nov 27, 2015 at 11:17 AM, Tomas Vondra
<tomas.vondra@2ndquadrant.com> wrote:
I plan to do more power failure testing soon, with more complex
test scenarios. I suspect there might be other similar issues (e.g.
when we rename a file before a checkpoint and don't fsync the
directory - then the rename won't be replayed and will be lost).
I'm curious how you're doing this testing. The easiest way I can
think of would be to run a database on an LVM volume and take a large
number of LVM snapshots very rapidly and then see if the database can
start up from each snapshot. Bonus points for keeping track of the
committed transactions before each snapshot and ensuring they're
still there I guess.
I do have reliable storage (Intel SSD with power-loss protection), and
I've connected the system to a sophisticated power-loss-making device
called "the power switch" (image attached).
In other words, in the last ~7 days the system got rebooted more times
than in the previous ~5 years.
That always seemed unsatisfactory because in the past we were mainly
concerned with whether fsync was actually getting propagated to the
physical media. But for testing whether we're fsyncing enough for
the filesystem that would be good enough.
Yeah. I considered some form of virtualized setup initially, but my
original intent was to verify whether disabling write barriers really is
safe (because I've heard numerous complaints that it's stupid). And as
write barriers are more tightly coupled to the hardware, I opted for the
more brutal approach.
But I agree some form of virtualized setup might be more flexible,
although I'm not sure LVM snapshots are a good approach, as snapshots may
wait for I/O requests to complete and such. I think something like qemu
might work better when combined with "kill -9", and I plan to try
reproducing the data loss issue on such a setup.
regards
--
Tomas Vondra http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachments:
power-failure-device.jpg (image/jpeg)