patch to allow disable of WAL recycling
Hello All,
Attached is a patch to provide an option to disable WAL recycling. We have
found that this can help performance by eliminating read-modify-write
behavior on old WAL files that are no longer resident in the filesystem
cache. There is a lot more detail on the background of the motivation for
this in the following thread.
/messages/by-id/CACukRjO7DJvub8e2AijOayj8BfKK3XXBTwu3KKARiTr67M3E3w@mail.gmail.com
A similar change has been tested against our 9.6 branch that we're
currently running, but the attached patch is against master.
Thanks,
Jerry
Attachments:
Attachment: 0001-option-to-disable-WAL-recycling.patch (application/octet-stream)
From f5ce48e45aa79edc4e61fb6c6128dde6cacbd0c6 Mon Sep 17 00:00:00 2001
From: Jerry Jelinek <jerry.jelinek@joyent.com>
Date: Tue, 26 Jun 2018 11:45:12 +0000
Subject: [PATCH] option to disable WAL recycling
---
doc/src/sgml/config.sgml | 22 ++++++++++++++++++++++
src/backend/access/transam/xlog.c | 3 ++-
src/backend/utils/misc/guc.c | 10 ++++++++++
src/backend/utils/misc/postgresql.conf.sample | 1 +
src/include/access/xlog.h | 1 +
5 files changed, 36 insertions(+), 1 deletion(-)
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 7bfbc87..457db77 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -3116,6 +3116,28 @@ include_dir 'conf.d'
</listitem>
</varlistentry>
+ <varlistentry id="guc-wal-recycle" xreflabel="wal_recycle">
+ <term><varname>wal_recycle</varname> (<type>boolean</type>)
+ <indexterm>
+ <primary><varname>wal_recycle</varname> configuration parameter</primary>
+ </indexterm>
+ </term>
+ <listitem>
+ <para>
+ When this parameter is <literal>on</literal>, past log file segments
+ in the <filename>pg_wal</filename> directory are recycled for future
+ use.
+ </para>
+
+ <para>
+ Turning this parameter off causes past log file segments to be deleted
+ when no longer needed. This can improve performance by eliminating
+ read-modify-write operations on old files which are no longer in the
+ filesystem cache.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="guc-wal-sender-timeout" xreflabel="wal_sender_timeout">
<term><varname>wal_sender_timeout</varname> (<type>integer</type>)
<indexterm>
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 1a419aa..74427c5 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -99,6 +99,7 @@ bool wal_log_hints = false;
bool wal_compression = false;
char *wal_consistency_checking_string = NULL;
bool *wal_consistency_checking = NULL;
+bool wal_recycle = true;
bool log_checkpoints = false;
int sync_method = DEFAULT_SYNC_METHOD;
int wal_level = WAL_LEVEL_MINIMAL;
@@ -4012,7 +4013,7 @@ RemoveXlogFile(const char *segname, XLogRecPtr PriorRedoPtr, XLogRecPtr endptr)
* segment. Only recycle normal files, pg_standby for example can create
* symbolic links pointing to a separate archive directory.
*/
- if (endlogSegNo <= recycleSegNo &&
+ if (wal_recycle && endlogSegNo <= recycleSegNo &&
lstat(path, &statbuf) == 0 && S_ISREG(statbuf.st_mode) &&
InstallXLogFileSegment(&endlogSegNo, path,
true, recycleSegNo, true))
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 859ef93..18ede0d 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -1114,6 +1114,16 @@ static struct config_bool ConfigureNamesBool[] =
},
{
+ {"wal_recycle", PGC_SUSET, WAL_SETTINGS,
+ gettext_noop("WAL recycling enabled."),
+ NULL
+ },
+ &wal_recycle,
+ true,
+ NULL, NULL, NULL
+ },
+
+ {
{"log_checkpoints", PGC_SIGHUP, LOGGING_WHAT,
gettext_noop("Logs each checkpoint."),
NULL
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index 9e39baf..474fd7b 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -199,6 +199,7 @@
#wal_compression = off # enable compression of full-page writes
#wal_log_hints = off # also do full page writes of non-critical updates
# (change requires restart)
#wal_recycle = on # recycle WAL files
#wal_buffers = -1 # min 32kB, -1 sets based on shared_buffers
# (change requires restart)
#wal_writer_delay = 200ms # 1-10000 milliseconds
diff --git a/src/include/access/xlog.h b/src/include/access/xlog.h
index 421ba6d..cf13f12 100644
--- a/src/include/access/xlog.h
+++ b/src/include/access/xlog.h
@@ -106,6 +106,7 @@ extern bool EnableHotStandby;
extern bool fullPageWrites;
extern bool wal_log_hints;
extern bool wal_compression;
+extern bool wal_recycle;
extern bool *wal_consistency_checking;
extern char *wal_consistency_checking_string;
extern bool log_checkpoints;
--
2.2.1
On 26.06.18 15:35, Jerry Jelinek wrote:
Attached is a patch to provide an option to disable WAL recycling. We
have found that this can help performance by eliminating
read-modify-write behavior on old WAL files that are no longer resident
in the filesystem cache. There is a lot more detail on the background of
the motivation for this in the following thread.
Your patch describes this feature as a performance feature. We would
need to see more measurements about what this would do on other
platforms and file systems than your particular one. Also, we need to
be careful with user options that trade off reliability for performance
and describe them in much more detail.
If the problem is specifically the file system caching behavior, then we
could also consider using the dreaded posix_fadvise().
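For example, a minimal sketch of that idea (prefetch_segment is an invented
name, and whether POSIX_FADV_WILLNEED actually helps here is
filesystem-dependent and untested):

#include <sys/types.h>
#include <fcntl.h>

/* Hint the kernel to pull an old WAL segment back into cache before it
 * is recycled, so the overwrite does not stall on cold reads. */
static void
prefetch_segment(int fd, off_t segsize)
{
    (void) posix_fadvise(fd, 0, segsize, POSIX_FADV_WILLNEED);
}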
Then again, I can understand that turning off WAL recycling is sensible
on ZFS, since there is no point in preallocating space that will never
be used. But then we should also turn off all other preallocation of
WAL files, including the creation of new (non-recycled) ones.
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Peter,
Thanks for taking a look at this. I have a few responses in line. I am not a
PG expert, so if there is something here that I've misunderstood, please
let me know.
On Sun, Jul 1, 2018 at 6:54 AM, Peter Eisentraut <
peter.eisentraut@2ndquadrant.com> wrote:
On 26.06.18 15:35, Jerry Jelinek wrote:
Attached is a patch to provide an option to disable WAL recycling. We
have found that this can help performance by eliminating
read-modify-write behavior on old WAL files that are no longer resident
in the filesystem cache. There is a lot more detail on the background of
the motivation for this in the following thread.

Your patch describes this feature as a performance feature. We would
need to see more measurements about what this would do on other
platforms and file systems than your particular one. Also, we need to
be careful with user options that trade off reliability for performance
and describe them in much more detail.
I don't think this change really impacts the reliability of PG, since PG
doesn't actually preallocate all of the WAL files. I think PG will allocate
WAL files as it runs, up to the wal_keep_segments limit, at which point it
would start recycling. If the filesystem fills up before that limit is
reached, PG would have to handle the filesystem being full when attempting
to allocate a new WAL file (as it would with my change if WAL recycling is
disabled). Of course once all of the WAL files have finally been allocated,
then PG won't need additional space on a non-COW filesystem. I'd be happy
to add more details to the man page change describing this new option and
the implications if the underlying filesystem fills up.
If the problem is specifically the file system caching behavior, then we
could also consider using the dreaded posix_fadvise().
I'm not sure that solves the problem for non-cached files, which is where
we've observed the performance impact of recycling: what should be a
write-intensive workload turns into a read-modify-write workload, because
we're now reading an old WAL file that is many hours, or even days, old and
has thus fallen out of the filesystem's memory cache. The disk
reads still have to happen.
Then again, I can understand that turning off WAL recycling is sensible
on ZFS, since there is no point in preallocating space that will never
be used. But then we should also turn off all other preallocation of
WAL files, including the creation of new (non-recycled) ones.
I don't think we'd see any benefit from that (since the newly allocated
file is certainly cached), and the change would be much more intrusive, so
I opted for the trivial change in the patch I proposed.
Thanks again,
Jerry
On 05.07.18 17:37, Jerry Jelinek wrote:
Your patch describes this feature as a performance feature. We would
need to see more measurements about what this would do on other
platforms and file systems than your particular one. Also, we need to
be careful with user options that trade off reliability for performance
and describe them in much more detail.

I don't think this change really impacts the reliability of PG, since PG
doesn't actually preallocate all of the WAL files. I think PG will
allocate WAL files as it runs, up to the wal_keep_segments limit, at
which point it would start recycling. If the filesystem fills up before
that limit is reached, PG would have to handle the filesystem being full
when attempting to allocate a new WAL file (as it would with my change
if WAL recycling is disabled). Of course once all of the WAL files have
finally been allocated, then PG won't need additional space on a non-COW
filesystem. I'd be happy to add more details to the man page change
describing this new option and the implications if the underlying
filesystem fills up.
The point is, the WAL recycling has a purpose, perhaps several. If it
didn't have one, we wouldn't do it. So if we add an option to turn it
off to get performance gains, we have to do some homework.
If the problem is specifically the file system caching behavior, then we
could also consider using the dreaded posix_fadvise().

I'm not sure that solves the problem for non-cached files, which is
where we've observed the performance impact of recycling: what
should be a write-intensive workload turns into a read-modify-write
workload, because we're now reading an old WAL file that is many hours,
or even days, old and has thus fallen out of the filesystem's memory
cache. The disk reads still have to happen.
But they could happen ahead of time.
Then again, I can understand that turning off WAL recycling is sensible
on ZFS, since there is no point in preallocating space that will never
be used. But then we should also turn off all other preallocation of
WAL files, including the creation of new (non-recycled) ones.

I don't think we'd see any benefit from that (since the newly allocated
file is certainly cached), and the change would be much more intrusive,
so I opted for the trivial change in the patch I proposed.
The change would be more invasive, but I think it would ultimately make
the code more clear and maintainable and the user interfaces more
understandable in the long run. I think that would be better than a
slightly ad hoc knob that fixed one particular workload once upon a time.
But we're probably not there yet. We should start with a more detailed
performance analysis of the originally proposed patch.
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Hi,
On 2018-06-26 07:35:57 -0600, Jerry Jelinek wrote:
+ <varlistentry id="guc-wal-recycle" xreflabel="wal_recycle"> + <term><varname>wal_recycle</varname> (<type>boolean</type>) + <indexterm> + <primary><varname>wal_recycle</varname> configuration parameter</primary> + </indexterm> + </term> + <listitem> + <para> + When this parameter is <literal>on</literal>, past log file segments + in the <filename>pg_wal</filename> directory are recycled for future + use. + </para> + + <para> + Turning this parameter off causes past log files segments to be deleted + when no longer needed. This can improve performance by eliminating + read-modify-write operations on old files which are no longer in the + filesystem cache. + </para> + </listitem> + </varlistentry>
This is formulated *WAY* too positive. It'll have a dramatic *NEGATIVE*
performance impact on non-COW filesystems, and very likely even negative
impacts in a number of COWed scenarios (when there's enough memory to
keep all WAL files in memory).
I still think that fixing this another way would be preferable. This'll
be too much of a magic knob that depends on the fs, hardware and
workload.
Greetings,
Andres Freund
On Fri, Jul 6, 2018 at 3:37 AM, Jerry Jelinek <jerry.jelinek@joyent.com>
wrote:
If the problem is specifically the file system caching behavior, then we
could also consider using the dreaded posix_fadvise().

I'm not sure that solves the problem for non-cached files, which is where
we've observed the performance impact of recycling: what should be a
write-intensive workload turns into a read-modify-write workload, because
we're now reading an old WAL file that is many hours, or even days, old
and has thus fallen out of the filesystem's memory cache. The disk
reads still have to happen.
What ZFS record size are you using? PostgreSQL's XLOG_BLCKSZ is usually
8192 bytes. When XLogWrite() calls write(some multiple of XLOG_BLCKSZ), on
a traditional filesystem the kernel will say 'oh, that's overwriting whole
pages exactly, so I have no need to read it from disk' (for example in
FreeBSD ffs_vnops.c ffs_write() see the comment "We must peform a
read-before-write if the transfer size does not cover the entire buffer").
I assume ZFS has a similar optimisation, but it uses much larger records
than the traditional 4096 byte pages, defaulting to 128KB. Is that the
reason for this?
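To illustrate the condition I have in mind, a sketch (not ZFS source;
FS_RECORD_SIZE stands in for the dataset's recordsize):

#include <stdbool.h>
#include <sys/types.h>

#define FS_RECORD_SIZE 131072   /* ZFS default recordsize, 128KB */

/* A write that starts and ends on record boundaries can replace whole
 * records without reading them first; an 8KB WAL write into a cold
 * 128KB record cannot, so the filesystem must read-modify-write. */
static bool
write_covers_whole_records(off_t offset, size_t len)
{
    return (offset % FS_RECORD_SIZE) == 0 && (len % FS_RECORD_SIZE) == 0;
}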
--
Thomas Munro
http://www.enterprisedb.com
Thomas,
We're using a zfs recordsize of 8k to match the PG blocksize of 8k, so what
you're describing is not the issue here.
Thanks,
Jerry
Thanks to everyone who took the time to look at the patch and send me
feedback. I'm happy to work on improving the documentation of this new
tunable to clarify when it should be used and the implications. I'm trying
to understand more specifically what else needs to be done next. To
summarize, I think the following general concerns were brought up.
1) Disabling WAL recycling could have a negative performance impact on a
COW filesystem if all WAL files could be kept in the filesystem cache.
2) Disabling WAL recycling reduces reliability, even on COW filesystems.
3) Using something like posix_fadvise to reload recycled WAL files into the
filesystem cache is better even for a COW filesystem.
4) There are "several" other purposes for WAL recycling which this tunable
would impact.
5) A WAL recycling tunable is too specific and a more general solution is
needed.
6) Need more performance data.
For #1, #2 and #3, I don't understand these concerns. It would be helpful
if these could be more specific.
For #4, can anybody enumerate these other purposes for WAL recycling?
For #5, perhaps I am making an incorrect assumption about what the original
response was requesting, but I understand that WAL recycling is just one
aspect of WAL file creation/allocation. However, the creation of a new WAL
file is not a problem we've ever observed. In general, any modern
filesystem should do a good job of caching recently accessed files. We've
never observed a problem with the allocation of a new WAL file slightly
before it is needed. The problem we have observed is specifically around
WAL file recycling when we have to access old files that are long gone from
the filesystem cache. The semantics around recycling seem pretty crisp as
compared to some other tunable which would completely change how WAL files
are created. Given that a change like that is also much more intrusive, it
seems better to provide a tunable to disable WAL recycling vs. some other
kind of tunable for which we can't articulate any improvement except in the
recycling scenario.
For #6, there is no feasible way for us to recreate our workload on other
operating systems or filesystems. Can anyone expand on what performance
data is needed?
I'd like to restate the original problem we observed.
When PostgreSQL decides to reuse an old WAL file whose contents have been
evicted from the cache (because they haven't been used in hours), this
turns what should be a workload bottlenecked by synchronous write
performance (that can be well-optimized with an SSD log device) into a
random read workload (that's much more expensive for any system). What's
significantly worse is that we saw this on synchronous standbys. When that
happened, the WAL receiver was blocked on a random read from disk, and
since it's single-threaded, all write queries on the primary stop until the
random read finishes. This is particularly bad for us when the synchronous
standby is doing other I/O (e.g., for an autovacuum or a database backup)
that causes disk reads to take hundreds of milliseconds.
To summarize, recycling old WAL files seems like an optimization designed
for certain filesystems that allocate disk blocks up front. Given that the
existing behavior is already filesystem-specific, are there specific reasons
why we can't provide a tunable to disable this behavior for filesystems
which don't behave that way?
Thanks again,
Jerry
On 07/10/2018 01:15 PM, Jerry Jelinek wrote:
Thanks to everyone who took the time to look at the patch and send me
feedback. I'm happy to work on improving the documentation of this
new tunable to clarify when it should be used and the implications.
I'm trying to understand more specifically what else needs to be done
next. To summarize, I think the following general concerns were
brought up.

For #6, there is no feasible way for us to recreate our workload on
other operating systems or filesystems. Can anyone expand on what
performance data is needed?
I think a simple way to prove this would be to run BenchmarkSQL against
PostgreSQL in a default configuration with pg_xlog/pg_wal on a
filesystem that is COW (zfs) and then run another test where
pg_xlog/pg_wal is patched with your patch and new behavior and then run
the test again. BenchmarkSQL is a more thorough benchmarking tool than
something like pgbench and is very easy to set up.
The reason you would use a default configuration is because it will
cause a huge amount of wal churn, although a test with a proper wal
configuration would also be good.
Thanks,
JD
--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc
*** A fault and talent of mine is to tell it exactly how it is. ***
PostgreSQL centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://postgresconf.org
***** Unless otherwise stated, opinions are my own. *****
On 2018-Jul-10, Jerry Jelinek wrote:
2) Disabling WAL recycling reduces reliability, even on COW filesystems.
I think the problem here is that WAL recycling in normal filesystems
helps protect the case where the filesystem gets full. If you remove it,
that protection goes out the window. You can claim that people need to
make sure to have available disk space, but this does become a problem
in practice. I think the thing to do is verify what happens with
recycling off when the disk gets full; is it possible to recover
afterwards? Is there any corrupt data? What happens if the disk gets
full just as the new WAL file is being created -- is there a Postgres
PANIC or something? As I understand, with recycling on it is easy (?)
to recover, there is no PANIC crash, and no data corruption results.
--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Wed, Jul 11, 2018 at 8:25 AM, Joshua D. Drake <jd@commandprompt.com> wrote:
On 07/10/2018 01:15 PM, Jerry Jelinek wrote:
Thanks to everyone who took the time to look at the patch and send me
feedback. I'm happy to work on improving the documentation of this new
tunable to clarify when it should be used and the implications. I'm trying
to understand more specifically what else needs to be done next. To
summarize, I think the following general concerns were brought up.

For #6, there is no feasible way for us to recreate our workload on other
operating systems or filesystems. Can anyone expand on what performance data
is needed?

I think a simple way to prove this would be to run BenchmarkSQL against
PostgreSQL in a default configuration with pg_xlog/pg_wal on a filesystem
that is COW (zfs) and then run another test where pg_xlog/pg_wal is patched
with your patch and new behavior and then run the test again. BenchmarkSQL
is a more thorough benchmarking tool than something like pgbench and is
very easy to set up.
I have a lowly but trusty HP Microserver running FreeBSD 11.2 with ZFS
on spinning rust. It occurred to me that such an anaemic machine
might show this effect easily because its cold reads are as slow as a
Lada full of elephants going uphill. Let's see...
# os setup
sysctl vfs.zfs.arc_min=134217728
sysctl vfs.zfs.arc_max=134217728
zfs create zroot/data/test
zfs set mountpoint=/data/test zroot/data/test
zfs set compression=off zroot/data/test
zfs set recordsize=8192 zroot/data/test
# initdb into /data/test/pgdata, then set postgresql.conf up like this:
fsync=off
max_wal_size = 600MB
min_wal_size = 600MB
# small scale test, we're only interested in producing WAL, not db size
pgbench -i -s 100 postgres
# do this a few times first, to make sure we have lots of WAL segments
pgbench -M prepared -c 4 -j 4 -T 60 postgres
# now test...
With wal_recycle=on I reliably get around 1100TPS and vmstat -w 10
shows numbers like this:
procs memory page disks faults cpu
r b w avm fre flt re pi po fr sr ad0 ad1 in sy cs us sy id
3 0 3 1.2G 3.1G 4496 0 0 0 52 76 144 138 607 84107 29713 55 17 28
4 0 3 1.2G 3.1G 2955 0 0 0 84 77 134 130 609 82942 34324 61 17 22
4 0 3 1.2G 3.1G 2327 0 0 0 0 77 114 125 454 83157 29638 68 15 18
5 0 3 1.2G 3.1G 1966 0 0 0 82 77 86 81 335 84480 25077 74 13 12
3 0 3 1.2G 3.1G 1793 0 0 0 533 74 72 68 310 127890 31370 77 16 7
4 0 3 1.2G 3.1G 1113 0 0 0 151 73 95 94 363 128302 29827 74 18 8
With wal_recycle=off I reliably get around 1600TPS and vmstat -w 10
shows numbers like this:
procs memory page disks faults cpu
r b w avm fre flt re pi po fr sr ad0 ad1 in sy cs us sy id
0 0 3 1.2G 3.1G 148 0 0 0 402 71 38 38 153 16668 5656 10 3 87
5 0 3 1.2G 3.1G 4527 0 0 0 50 73 28 27 123 123986 23373 68 15 17
5 0 3 1.2G 3.1G 3036 0 0 0 151 73 47 49 181 148014 29412 83 16 0
4 0 3 1.2G 3.1G 2063 0 0 0 233 73 56 54 200 143018 28699 81 17 2
4 0 3 1.2G 3.1G 1202 0 0 0 95 73 48 49 189 147276 29196 81 18 1
4 0 3 1.2G 3.1G 732 0 0 0 0 73 56 55 207 146805 29265 82 17 1
I don't have time to investigate further for now and my knowledge of
ZFS is superficial, but the patch seems to have a clear beneficial
effect, reducing disk IOs and page faults on my little storage box.
Obviously this isn't representative of a proper server environment, or
some other OS, but it's a clue. That surprised me... I was quietly
hoping it was going to be 'oh, if you turn off
compression and use 8kb it doesn't happen because the pages line up'.
But nope.
--
Thomas Munro
http://www.enterprisedb.com
Alvaro,
I'll perform several test runs with various combinations and post the
results.
Thanks,
Jerry
On Tue, Jul 10, 2018 at 1:34 PM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:
On 2018-Jul-10, Jerry Jelinek wrote:
2) Disabling WAL recycling reduces reliability, even on COW filesystems.
I think the problem here is that WAL recycling in normal filesystems
helps protect the case where the filesystem gets full. If you remove it,
that protection goes out the window. You can claim that people need to
make sure to have available disk space, but this does become a problem
in practice. I think the thing to do is verify what happens with
recycling off when the disk gets full; is it possible to recover
afterwards? Is there any corrupt data? What happens if the disk gets
full just as the new WAL file is being created -- is there a Postgres
PANIC or something? As I understand, with recycling on it is easy (?)
to recover, there is no PANIC crash, and no data corruption results.
If the result of hitting ENOSPC when creating or writing to a WAL file was
that the database could become corrupted, then wouldn't that risk already
be present (a) on any system, for the whole period from database init until
the maximum number of WAL files was created, and (b) all the time on any
copy-on-write filesystem?
Thanks,
Dave
Hi,
On 2018-07-10 14:15:30 -0600, Jerry Jelinek wrote:
Thanks to everyone who took the time to look at the patch and send me
feedback. I'm happy to work on improving the documentation of this new
tunable to clarify when it should be used and the implications. I'm trying
to understand more specifically what else needs to be done next. To
summarize, I think the following general concerns were brought up.

1) Disabling WAL recycling could have a negative performance impact on a
COW filesystem if all WAL files could be kept in the filesystem cache.
For #1, #2 and #3, I don't understand these concerns. It would be helpful
if these could be more specific.
We perform more writes (new files are zeroed, which needs to be
fsynced), and increase metadata traffic (creation of files), when not
recycling.
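Roughly, the non-recycling path has to do something like the following for
every new segment (a simplified sketch along the lines of XLogFileInit; the
constants and names here are illustrative):

#include <string.h>
#include <unistd.h>

#define WAL_SEG_SIZE (16 * 1024 * 1024)

/* Zero-fill a freshly created segment and force it to disk. Recycling
 * skips all of this, including the file-creation metadata traffic. */
static int
zero_fill_new_segment(int fd)
{
    char zbuf[8192];

    memset(zbuf, 0, sizeof(zbuf));
    for (off_t off = 0; off < WAL_SEG_SIZE; off += (off_t) sizeof(zbuf))
    {
        if (write(fd, zbuf, sizeof(zbuf)) != (ssize_t) sizeof(zbuf))
            return -1;          /* e.g. ENOSPC; caller must handle it */
    }
    return fsync(fd);
}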
Regards,
Andres
Hi Thomas,
Thanks for testing! It's validating that you saw the same results.
-- Dave
On 07/12/2018 02:25 AM, David Pacheco wrote:
If the result of hitting ENOSPC when creating or writing to a WAL file
was that the database could become corrupted, then wouldn't that risk
already be present (a) on any system, for the whole period from database
init until the maximum number of WAL files was created, and (b) all the
time on any copy-on-write filesystem?
I don't follow Alvaro's reasoning, TBH. There's a couple of things that
confuse me ...
I don't quite see how reusing WAL segments actually protects against
full filesystem? On "traditional" filesystems I would not expect any
difference between "unlink+create" and reusing an existing file. On CoW
filesystems (like ZFS or btrfs) the space management works very
differently and reusing an existing file is unlikely to save anything.
But even if it reduces the likelihood of ENOSPC, it does not eliminate
it entirely. max_wal_size is not a hard limit, and the disk may be
filled by something else (when WAL is not on a separate device, when
there is thin provisioning, etc.). So it's not a protection against
data corruption we could rely on. (And as was discussed in the recent
fsync thread, ENOSPC is a likely source of past data corruption issues
on NFS and possibly other filesystems.)
I might be missing something, of course.
AFAICS the original reason for reusing WAL segments was the belief that
overwriting an existing file is faster than writing a new file. That
might have been true in the past, but the question is if it's still true
on current filesystems. The results posted here suggest it's not true on
ZFS, at least.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
I was asked to perform two different tests:
1) A benchmarksql run with WAL recycling on and then off, for comparison
2) A test when the filesystem fills up
For #1, I did two 15 minute benchmarksql runs and here are the results.
wal_recycle=on
--------------
Term-00, Running Average tpmTOTAL: 299.84 Current tpmTOTAL: 29412
14:49:02,470 [Thread-1] INFO jTPCC : Term-00,
14:49:02,470 [Thread-1] INFO jTPCC : Term-00,
14:49:02,471 [Thread-1] INFO jTPCC : Term-00, Measured tpmC (NewOrders) =
136.49
14:49:02,471 [Thread-1] INFO jTPCC : Term-00, Measured tpmTOTAL = 299.78
14:49:02,471 [Thread-1] INFO jTPCC : Term-00, Session Start =
2018-07-12 14:34:02
14:49:02,471 [Thread-1] INFO jTPCC : Term-00, Session End =
2018-07-12 14:49:02
14:49:02,471 [Thread-1] INFO jTPCC : Term-00, Transaction Count = 4497
wal_recycle=off
---------------
Term-00, Running Average tpmTOTAL: 299.85 Current tpmTOTAL: 29520
15:10:15,712 [Thread-1] INFO jTPCC : Term-00,
15:10:15,712 [Thread-1] INFO jTPCC : Term-00,
15:10:15,713 [Thread-1] INFO jTPCC : Term-00, Measured tpmC (NewOrders) =
135.89
15:10:15,713 [Thread-1] INFO jTPCC : Term-00, Measured tpmTOTAL = 299.79
15:10:15,713 [Thread-1] INFO jTPCC : Term-00, Session Start =
2018-07-12 14:55:15
15:10:15,713 [Thread-1] INFO jTPCC : Term-00, Session End =
2018-07-12 15:10:15
15:10:15,713 [Thread-1] INFO jTPCC : Term-00, Transaction Count = 4497
As can be seen, disabling WAL recycling does not cause any performance
regression.
For #2, I ran the test with WAL recycling on (the current behavior as well
as the default with this patch) since the behavior of postgres is
orthogonal to WAL recycling when the filesystem fills up.
I capped the filesystem with 32MB of free space. I setup a configuration
with wal_keep_segments=50 and started a long benchmarksql run. I had 4 WAL
files already in existence when the run started.
As the filesystem fills up, the performance of postgres gets slower and
slower, as would be expected. This is due to the COW nature of the
filesystem and the fact that all writes need to find space.
When a new WAL file is created, this essentially consumes no space since it
is a zero-filled file, so no filesystem space is consumed, except for a
little metadata for the file. However, as writes occur to the WAL
file, space is being consumed. Eventually all space in the filesystem is
consumed. I could not tell if this occurred during a write to an existing
WAL file or a write to the database itself. As other people have observed,
WAL file creation in a COW filesystem is not the problematic operation when
the filesystem fills up. It is the writes to existing files that will fail.
When the filesystem filled up, postgres core dumped and benchmarksql
emitted a bunch of Java debug information, which I could provide if anyone
is interested. At the time of the core dump there were 6 WAL files in the
pg_wal directory (well short of the 50 configured).
Here is some information for the postgres core dump. It looks like postgres
aborted itself, but since the filesystem is full, there is nothing in the
log file.
::status
debugging core file of postgres (64-bit) from
f6c22f98-38aa-eb51-80d2-811ed25bed6b
file: /zones/f6c22f98-38aa-eb51-80d2-811ed25bed6b/local/pgsql/bin/postgres
initial argv: /usr/local/pgsql/bin/postgres -D /home/postgres/data
threading model: native threads
status: process terminated by SIGABRT (Abort), pid=76019 uid=1001 code=-1
$C
fffff9ffffdfa4b0 libc.so.1`_lwp_kill+0xa()
fffff9ffffdfa4e0 libc.so.1`raise+0x20(6)
fffff9ffffdfa530 libc.so.1`abort+0x98()
fffff9ffffdfa560 errfinish+0x230()
fffff9ffffdfa5e0 XLogWrite+0x294()
fffff9ffffdfa610 XLogBackgroundFlush+0x18d()
fffff9ffffdfaa50 WalWriterMain+0x1a8()
fffff9ffffdfaab0 AuxiliaryProcessMain+0x3ff()
fffff9ffffdfab40 0x7b5566()
fffff9ffffdfab90 reaper+0x60a()
fffff9ffffdfaba0 libc.so.1`__sighndlr+6()
fffff9ffffdfac30 libc.so.1`call_user_handler+0x1db(12, 0, fffff9ffffdfaca0)
fffff9ffffdfac80 libc.so.1`sigacthandler+0x116(12, 0, fffff9ffffdfaca0)
fffff9ffffdfb0f0 libc.so.1`__pollsys+0xa()
fffff9ffffdfb220 libc.so.1`pselect+0x26b(7, fffff9ffffdfdad0, 0, 0,
fffff9ffffdfb230, 0)
fffff9ffffdfb270 libc.so.1`select+0x5a(7, fffff9ffffdfdad0, 0, 0,
fffff9ffffdfb6c0)
fffff9ffffdffb00 ServerLoop+0x289()
fffff9ffffdffb70 PostmasterMain+0xcfa()
fffff9ffffdffba0 main+0x3cd()
fffff9ffffdffbd0 _start_crt+0x83()
fffff9ffffdffbe0 _start+0x18()
Let me know if there is any other information I could provide.
Thanks,
Jerry
On Thu, Jul 12, 2018 at 10:52 PM, Tomas Vondra
<tomas.vondra@2ndquadrant.com> wrote:
I don't follow Alvaro's reasoning, TBH. There's a couple of things that
confuse me ...

I don't quite see how reusing WAL segments actually protects against full
filesystem? On "traditional" filesystems I would not expect any difference
between "unlink+create" and reusing an existing file. On CoW filesystems
(like ZFS or btrfs) the space management works very differently and reusing
an existing file is unlikely to save anything.
Yeah, I had the same thoughts.
But even if it reduces the likelihood of ENOSPC, it does not eliminate it
entirely. max_wal_size is not a hard limit, and the disk may be filled by
something else (when WAL is not on a separate device, when there is thin
provisioning, etc.). So it's not a protection against data corruption we
could rely on. (And as was discussed in the recent fsync thread, ENOSPC is a
likely source of past data corruption issues on NFS and possibly other
filesystems.)
Right. That ENOSPC discussion was about checkpointing though, not
WAL. IIUC the hypothesis was that there may be stacks (possibly
involving NFS or thin provisioning, or perhaps historical versions of
certain local filesystems that had reservation accounting bugs, on a
certain kernel) that could let you write() a buffer, and then later
when the checkpointer calls fsync() the filesystem says ENOSPC, the
kernel reports that and throws away the dirty page, and then at next
checkpoint fsync() succeeds but the checkpoint is a lie and the data
is smoke.
We already PANIC on any errno except EINTR in XLogWrite(), as seen
in Jerry's nearby stack trace, so that failure mode seems to be
covered already for WAL, no?
AFAICS the original reason for reusing WAL segments was the belief that
overwriting an existing file is faster than writing a new file. That might
have been true in the past, but the question is if it's still true on
current filesystems. The results posted here suggest it's not true on ZFS,
at least.
Yeah.
The wal_recycle=on|off patch seems reasonable to me (modulo Andres's
comments about the documentation; we should make sure that the 'off'
setting isn't accidentally recommended to the wrong audience) and I
vote we take it.
Just by the way, if I'm not mistaken ZFS does avoid faulting when
overwriting whole blocks, just like other filesystems:
https://github.com/freebsd/freebsd/blob/master/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c#L1034
So then where are those faults coming from? Perhaps the tree page
that holds the block pointer, of which there must be many when the
recordsize is small?
--
Thomas Munro
http://www.enterprisedb.com
Thanks to everyone who has taken the time to look at this patch and provide
all of the feedback.
I'm going to wait another day to see if there are any more comments. If
not, then first thing next week, I will send out a revised patch with
improvements to the man page change as requested. If anyone has specific
things they want to be sure are covered, please just let me know.
Thanks again,
Jerry
On Thu, Jul 5, 2018 at 4:39 PM, Andres Freund <andres@anarazel.de> wrote:
This is formulated *WAY* too positive. It'll have a dramatic *NEGATIVE*
performance impact on non-COW filesystems, and very likely even negative
impacts in a number of COWed scenarios (when there's enough memory to
keep all WAL files in memory).

I still think that fixing this another way would be preferable. This'll
be too much of a magic knob that depends on the fs, hardware and
workload.
I tend to agree with you, but unless we have a pretty good idea what
that other way would be, I think we should probably accept the patch.
Could we somehow make this self-tuning? On any given
filesystem/hardware/workload, either creating a new 16MB file is
faster, or recycling an old file is faster. If the old file is still
cached, recycling it figures to win on almost any hardware. If not,
it seems like something of a toss-up. I suppose we could try to keep
a running average of how long it is taking us to recycle WAL files and
how long it is taking us to create new ones; if we do each one of
those things at least sometimes, then we'll eventually get an idea of
which one is quicker. But it's not clear to me that such data would
be very reliable unless we tried to make sure that we tried both
things fairly regularly under circumstances where we could have chosen
to do the other one.
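Something like this hypothetical bookkeeping, say (nothing like it exists in
the backend today, and all names here are invented):

#include <stdbool.h>

#define EWMA_ALPHA 0.1

typedef struct WalAllocStats
{
    double recycle_avg_us;      /* running average of recycle cost */
    double create_avg_us;       /* running average of create cost */
} WalAllocStats;

/* Fold a new timing sample into an exponentially weighted running average. */
static void
record_sample(double *avg, double sample_us)
{
    *avg = (*avg == 0.0) ? sample_us
        : (1.0 - EWMA_ALPHA) * (*avg) + EWMA_ALPHA * sample_us;
}

/* Prefer recycling until both paths have been measured, mirroring the
 * current default. */
static bool
prefer_recycle(const WalAllocStats *s)
{
    if (s->recycle_avg_us == 0.0 || s->create_avg_us == 0.0)
        return true;
    return s->recycle_avg_us <= s->create_avg_us;
}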
I think part of the problem here is that whether a WAL segment is
likely to be cached depends on a host of factors which we don't track
very carefully, like whether it's been streamed or decoded recently.
If we knew that a particular WAL segment hadn't been accessed for
any purpose in 10+ minutes, it would probably be fairly safe to guess
that it's no longer in cache; if we knew that it had been accessed <15
seconds ago, that it is probably still in cache. But we have no idea.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes:
On Thu, Jul 5, 2018 at 4:39 PM, Andres Freund <andres@anarazel.de> wrote:
This is formulated *WAY* too positive. It'll have a dramatic *NEGATIVE*
performance impact on non-COW filesystems, and very likely even negative
impacts in a number of COWed scenarios (when there's enough memory to
keep all WAL files in memory).

I still think that fixing this another way would be preferable. This'll
be too much of a magic knob that depends on the fs, hardware and
workload.
I tend to agree with you, but unless we have a pretty good idea what
that other way would be, I think we should probably accept the patch.
Could we somehow make this self-tuning? On any given
filesystem/hardware/workload, either creating a new 16MB file is
faster, or recycling an old file is faster.
That's not the way to think about it. On a COW file system, we don't
want to "create 16MB files" at all --- we should just fill WAL files
on-the-fly, because the pre-fill activity isn't actually serving the
intended purpose of reserving disk space. It's just completely useless
overhead :-(. So we can't really make a direct comparison between the
two approaches; there's no good way to net out the cost of constructing
the WAL data we need to write.
Moreover, a raw speed comparison isn't the whole story; a DBA might
choose write-without-prefill because it's faster for him, even though
he's taking a bigger chance of trouble on out-of-disk-space.
I think that the right basic idea is to have a GUC that chooses between
the two implementations, but whether it can be set automatically is not
clear to me. Can initdb perhaps investigate what kind of filesystem the
WAL directory is sitting on, and set the default value from hard-wired
knowledge about that?
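A Linux-only sketch of such a probe (the magic numbers are the values used
by ZFS-on-Linux and btrfs; a real probe would need per-platform knowledge):

#include <stdbool.h>
#include <sys/vfs.h>

#define ZFS_SUPER_MAGIC   0x2fc12fc1
#define BTRFS_SUPER_MAGIC 0x9123683E

/* Guess whether the WAL directory sits on a COW filesystem; on any
 * failure, fall back to the conservative default of recycling. */
static bool
wal_dir_is_cow(const char *waldir)
{
    struct statfs fs;

    if (statfs(waldir, &fs) != 0)
        return false;
    return fs.f_type == ZFS_SUPER_MAGIC || fs.f_type == BTRFS_SUPER_MAGIC;
}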
regards, tom lane
Greetings,
* Tom Lane (tgl@sss.pgh.pa.us) wrote:
I think that the right basic idea is to have a GUC that chooses between
the two implementations, but whether it can be set automatically is not
clear to me. Can initdb perhaps investigate what kind of filesystem the
WAL directory is sitting on, and set the default value from hard-wired
knowledge about that?
Maybe.. but I think we'd still need a way to change it because people
often start with their database system minimally configured (including
having WAL in the default location of the data directory) and only later
realize that was a bad idea and change it later. I wouldn't be at all
surprised if that "change it later" meant moving it to a different
filesystem, and having to re-initdb to take advantage of that would be
particularly unfriendly.
Thanks!
Stephen
On 07/16/2018 04:54 AM, Stephen Frost wrote:
Greetings,
* Tom Lane (tgl@sss.pgh.pa.us) wrote:
I think that the right basic idea is to have a GUC that chooses between
the two implementations, but whether it can be set automatically is not
clear to me. Can initdb perhaps investigate what kind of filesystem the
WAL directory is sitting on, and set the default value from hard-wired
knowledge about that?

Maybe.. but I think we'd still need a way to change it because people
often start with their database system minimally configured (including
having WAL in the default location of the data directory) and only later
realize that was a bad idea and change it later. I wouldn't be at all
surprised if that "change it later" meant moving it to a different
filesystem, and having to re-initdb to take advantage of that would be
particularly unfriendly.
I'm not sure the detection can be made sufficiently reliable for initdb.
For example, it's not that uncommon to do initdb and then move the WAL
to a different filesystem using symlink. Also, I wonder how placing the
filesystem on LVM with snapshotting (which kinda makes it CoW) affects
the system behavior.
But maybe those are not issues, as long as the result is predictable.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 2018-07-15 20:32:39 -0400, Robert Haas wrote:
On Thu, Jul 5, 2018 at 4:39 PM, Andres Freund <andres@anarazel.de> wrote:
This is formulated *WAY* too positive. It'll have a dramatic *NEGATIVE*
performance impact on non-COW filesystems, and very likely even negative
impacts in a number of COWed scenarios (when there's enough memory to
keep all WAL files in memory).

I still think that fixing this another way would be preferable. This'll
be too much of a magic knob that depends on the fs, hardware and
workload.

I tend to agree with you, but unless we have a pretty good idea what
that other way would be, I think we should probably accept the patch.
I don't think I've argued against that - I just want there to be
sufficient caveats to make clear it's going to hurt on very common OS &
FS combinations.
I think part of the problem here is that whether a WAL segment is
likely to be cached depends on a host of factors which we don't track
very carefully, like whether it's been streamed or decoded recently.
If we knew that a particular WAL segment hadn't been accessed for
any purpose in 10+ minutes, it would probably be fairly safe to guess
that it's no longer in cache; if we knew that it had been accessed <15
seconds ago, that it is probably still in cache. But we have no idea.
True. Additionally, we don't know whether, even with a cold cache,
re-initializing files isn't worse performance-wise than recycling them.
Greetings,
Andres Freund
Hi,
On 2018-07-15 20:55:38 -0400, Tom Lane wrote:
That's not the way to think about it. On a COW file system, we don't
want to "create 16MB files" at all --- we should just fill WAL files
on-the-fly, because the pre-fill activity isn't actually serving the
intended purpose of reserving disk space. It's just completely useless
overhead :-(. So we can't really make a direct comparison between the
two approaches; there's no good way to net out the cost of constructing
the WAL data we need to write.
We probably should still allocate them in 16MB segments. We rely on the
size being fixed in a number of places. But it's probably worthwhile to
just do a posix_fadvise or such. Also, if we continually increase the
size with each write we end up doing a lot more metadata transactions,
which'll essentially serve to increase journalling overhead further.
Greetings,
Andres Freund
Andres Freund <andres@anarazel.de> writes:
On 2018-07-15 20:55:38 -0400, Tom Lane wrote:
That's not the way to think about it. On a COW file system, we don't
want to "create 16MB files" at all --- we should just fill WAL files
on-the-fly, because the pre-fill activity isn't actually serving the
intended purpose of reserving disk space. It's just completely useless
overhead :-(. So we can't really make a direct comparison between the
two approaches; there's no good way to net out the cost of constructing
the WAL data we need to write.
We probably should still allocate them in 16MB segments. We rely on the
size being fixed in a number of places.
Reasonable point. I was supposing that it'd be okay if a partially
written segment were shorter than 16MB, but you're right that that
would require vetting a lot of code to be sure about it.
But it's probably worthwhile to
just do a posix_fadvise or such. Also, if we continually increase the
size with each write we end up doing a lot more metadata transactions,
which'll essentially serve to increase journalling overhead further.
Hm. What you're claiming is that on these FSen, extending a file involves
more/different metadata activity than allocating new space for a COW
overwrite of an existing area within the file. Is that really true?
The former case would be far more common in typical usage, and somehow
I doubt the FS authors would have been too stupid to optimize things so
that the same journal entry can record both the space allocation and the
logical-EOF change.
But anyway, this means we have two nearly independent issues to
investigate: whether recycling/renaming old files is cheaper than
constantly creating and deleting them, and whether to use physical
file zeroing versus some "just set the EOF please" filesystem call
when first creating a file. The former does seem like it's purely
a performance question, but the latter involves a tradeoff of
performance against an ENOSPC-panic protection feature that in
reality only works on some filesystems.
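To make that latter tradeoff concrete, the two creation strategies might
look like this (a sketch only; note that on a COW filesystem even the
"reservation" is illusory, since overwrites allocate new blocks anyway):

#include <fcntl.h>
#include <unistd.h>

/* Reserve real blocks up front: ENOSPC surfaces here, at creation time.
 * posix_fallocate returns an error number directly, 0 on success. */
static int
create_with_reservation(int fd, off_t segsize)
{
    return posix_fallocate(fd, 0, segsize);
}

/* Just set the EOF: no blocks reserved, so a write into the hole can
 * still fail with ENOSPC much later. */
static int
create_eof_only(int fd, off_t segsize)
{
    return ftruncate(fd, segsize);
}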
regards, tom lane
On Mon, Jul 16, 2018 at 10:12 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
But anyway, this means we have two nearly independent issues to
investigate: whether recycling/renaming old files is cheaper than
constantly creating and deleting them, and whether to use physical
file zeroing versus some "just set the EOF please" filesystem call
when first creating a file. The former does seem like it's purely
a performance question, but the latter involves a tradeoff of
performance against an ENOSPC-panic protection feature that in
reality only works on some filesystems.
It's been a few years since I tested this, but my recollection is that
if you fill up pg_xlog, the system will PANIC and die on a vanilla
Linux install. Sure, you can set max_wal_size, but that's a soft
limit, not a hard limit, and if you generate WAL faster than the
system can checkpoint, you can overrun that value and force allocation
of additional WAL files. So I'm not sure we have any working
ENOSPC-panic protection today. Given that, I'm doubtful that we
should prioritize maintaining whatever partially-working protection we
may have today over raw performance. If we want to fix ENOSPC on
pg_wal = PANIC, and I think that would be a good thing to fix, then we
should do it either by finding a way to make the WAL insertion ERROR
out instead of panicking, or throttle WAL generation as we get close
to disk space exhaustion so that the checkpoint has time to complete,
as previously proposed by Heroku.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
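The throttling idea could look roughly like the following (a
hypothetical sketch, not an existing PostgreSQL mechanism; the function
name and reserve threshold are made up for illustration): check the
remaining space on the pg_wal filesystem before generating more WAL,
and back off while the checkpoint catches up.

#include <stdbool.h>
#include <sys/statvfs.h>

static bool
wal_space_low(const char *wal_dir, unsigned long long reserve_bytes)
{
    struct statvfs vfs;

    if (statvfs(wal_dir, &vfs) != 0)
        return false;   /* unknown; don't stall WAL generation on that */
    return (unsigned long long) vfs.f_bavail * vfs.f_frsize < reserve_bytes;
}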
There have been quite a few comments since last week, so at this point I am
uncertain how to proceed with this change. I don't think I saw anything
concrete in the recent emails that I can act upon.
I would like to respond to the comment about trying to "self-tune" the
behavior based on inferences made about caching during setup. I can't speak
for many other filesystems, but for ZFS, the ARC size is not fixed and will
vary based on the memory demands against the machine. Also, what files are
cached will vary based upon the workloads running on the machine. Thus, I
do not think there is a valid way to make inferences about future caching
behavior based upon a point-in-time observation.
I am still happy to update the man pages to explain the new tunable better
if that is acceptable.
Thanks,
Jerry
On Sun, Jul 15, 2018 at 6:32 PM, Robert Haas <robertmhaas@gmail.com> wrote:
On Thu, Jul 5, 2018 at 4:39 PM, Andres Freund <andres@anarazel.de> wrote:
This is formulated *WAY* too positive. It'll have dramatic *NEGATIVE*
performance impact of non COW filesystems, and very likely even negative
impacts in a number of COWed scenarios (when there's enough memory to
keep all WAL files in memory).
I still think that fixing this another way would be preferable. This'll
be too much of a magic knob that depends on the fs, hardware and
workload.
I tend to agree with you, but unless we have a pretty good idea what
that other way would be, I think we should probably accept the patch.
Could we somehow make this self-tuning? On any given
filesystem/hardware/workload, either creating a new 16MB file is
faster, or recycling an old file is faster. If the old file is still
cached, recycling it figures to win on almost any hardware. If not,
it seems like something of a toss-up. I suppose we could try to keep
a running average of how long it is taking us to recycle WAL files and
how long it is taking us to create new ones; if we do each one of
those things at least sometimes, then we'll eventually get an idea of
which one is quicker. But it's not clear to me that such data would
be very reliable unless we tried to make sure that we tried both
things fairly regularly under circumstances where we could have chosen
to do the other one.
I think part of the problem here is that whether a WAL segment is
likely to be cached depends on a host of factors which we don't track
very carefully, like whether it's been streamed or decoded recently.
If we knew that a particular WAL segment hadn't been accessed for
any purpose in 10+ minutes, it would probably be fairly safe to guess
that it's no longer in cache; if we knew that it had been accessed <15
seconds ago, it is probably still in cache. But we have no idea.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
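The self-tuning idea sketched above might look like this (purely
hypothetical; nothing like it exists in the patch or in PostgreSQL):
keep moving averages of recycle and create timings and prefer whichever
has been cheaper lately. As Robert notes, the averages are only
trustworthy if both paths keep getting exercised.

static double recycle_avg_us;   /* EWMA of recent recycle timings */
static double create_avg_us;    /* EWMA of recent create timings */

static void
note_timing(double *avg, double elapsed_us)
{
    /* alpha = 0.2: recent samples dominate, stale cache behavior fades */
    *avg = (*avg == 0.0) ? elapsed_us : 0.8 * *avg + 0.2 * elapsed_us;
}

static int
prefer_recycling(void)
{
    /* until both paths have been sampled, keep the historical default */
    if (recycle_avg_us == 0.0 || create_avg_us == 0.0)
        return 1;
    return recycle_avg_us <= create_avg_us;
}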
On Mon, Jul 16, 2018 at 10:38:14AM -0400, Robert Haas wrote:
It's been a few years since I tested this, but my recollection is that
if you fill up pg_xlog, the system will PANIC and die on a vanilla
Linux install. Sure, you can set max_wal_size, but that's a soft
limit, not a hard limit, and if you generate WAL faster than the
system can checkpoint, you can overrun that value and force allocation
of additional WAL files. So I'm not sure we have any working
ENOSPC-panic protection today. Given that, I'm doubtful that we
should prioritize maintaining whatever partially-working protection we
may have today over raw performance. If we want to fix ENOSPC on
pg_wal = PANIC, and I think that would be a good thing to fix, then we
should do it either by finding a way to make the WAL insertion ERROR
out instead of panicking, or throttle WAL generation as we get close
to disk space exhaustion so that the checkpoint has time to complete,
as previously proposed by Heroku.
I would personally prefer seeing max_wal_size being switched to a hard
limit, and make that tunable. I am wondering if that's the case for
other people on this list, but I see from time to time, every couple of
weeks, people complaining that Postgres is not able to maintain a hard
guarantee behind the value of max_wal_size. In some upgrade scenarios,
I had to tell such folks to throttle their insert load and also manually
issue checkpoints to allow the system to stay up and continue with the
upgrade process. So there are definitely cases where throttling is
useful, and if the hard limit is reached I would rather see WAL
generation from other backends simply stopped, instead of risking the
system going down, so that the checkpoint can finish. And sometimes this
happens with a SQL dump as well, where throttling the load at the
application level means a more complex dump strategy, for example
splitting things across multiple files.
--
Michael
On 17.07.18 00:04, Jerry Jelinek wrote:
There have been quite a few comments since last week, so at this point I
am uncertain how to proceed with this change. I don't think I saw
anything concrete in the recent emails that I can act upon.
The outcome of this could be multiple orthogonal patches that affect the
WAL file allocation behavior somehow. I think your original idea of
skipping recycling on a COW file system is sound. But I would rather
frame the option as "preallocating files is obviously useless on a COW
file system" rather than "this will make things mysteriously faster with
uncertain trade-offs".
The actual implementation could use another round of consideration. I
wonder how this should interact with min_wal_size. Wouldn't
min_wal_size = 0 already do what we need (if you could set it to 0,
which is currently not possible)? Should the new setting be something
like min_wal_size = -1? Or even if it's a new setting, it might be
better to act on it in XLOGfileslop(), so these things are kept closer
together.
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 07/17/2018 09:12 PM, Peter Eisentraut wrote:
On 17.07.18 00:04, Jerry Jelinek wrote:
There have been quite a few comments since last week, so at this point I
am uncertain how to proceed with this change. I don't think I saw
anything concrete in the recent emails that I can act upon.
The outcome of this could be multiple orthogonal patches that affect the
WAL file allocation behavior somehow. I think your original idea of
skipping recycling on a COW file system is sound. But I would rather
frame the option as "preallocating files is obviously useless on a COW
file system" rather than "this will make things mysteriously faster with
uncertain trade-offs".
Makes sense, I guess. But I think many claims made in this thread are
mostly just assumptions at this point, based on our beliefs how CoW or
non-CoW filesystems work. The results from ZFS (showing positive impact)
are an exception, but that's about it. I'm sure those claims are based
on real-world experience and are likely true, but it'd be good to have
data from a wider range of filesystems / configurations etc. so that we
can give better recommendations to users, for example.
That's something I can help with, assuming we agree on what tests we
want to do. I'd say the usual battery of write-only pgbench tests with
different scales (fits into s_b, fits into RAM, larger than RAM) on
common Linux filesystems (ext4, xfs, btrfs) and zfsonlinux, and
different types of storage would be enough. I don't have any FreeBSD box
available, unfortunately.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Tue, Jul 17, 2018 at 3:12 PM, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:
The actual implementation could use another round of consideration. I
wonder how this should interact with min_wal_size. Wouldn't
min_wal_size = 0 already do what we need (if you could set it to 0,
which is currently not possible)?
Hmm, would that actually disable recycling, or just make it happen only rarely?
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
I've gotten a wide variety of feedback on the proposed patch. The comments
range from rough approval through various discussion about alternative
solutions. At this point I am unsure if this patch is rejected or if it
would be accepted once I had the updated man page changes that were
discussed last week.
I have attached an updated patch which does incorporate man page changes,
in case that is the blocker. However, if this patch is simply rejected, I'd
appreciate it if I could get a definitive statement to that effect.
Thanks,
Jerry
On Tue, Jun 26, 2018 at 7:35 AM, Jerry Jelinek <jerry.jelinek@joyent.com>
wrote:
Attachments:
0001-option-to-disable-WAL-recycling.patchapplication/octet-stream; name=0001-option-to-disable-WAL-recycling.patchDownload
From 296526331641721f16246d4bfa6b2c3818a5a235 Mon Sep 17 00:00:00 2001
From: Jerry Jelinek <jerry.jelinek@joyent.com>
Date: Wed, 18 Jul 2018 18:57:14 +0000
Subject: [PATCH] option to disable WAL recycling
---
doc/src/sgml/config.sgml | 23 +++++++++++++++++++++++
src/backend/access/transam/xlog.c | 3 ++-
src/backend/utils/misc/guc.c | 10 ++++++++++
src/backend/utils/misc/postgresql.conf.sample | 1 +
src/include/access/xlog.h | 1 +
5 files changed, 37 insertions(+), 1 deletion(-)
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 4d48d93..4e8c8eb 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -3116,6 +3116,29 @@ include_dir 'conf.d'
</listitem>
</varlistentry>
+ <varlistentry id="guc-wal-recycle" xreflabel="wal_recycle">
+ <term><varname>wal_recycle</varname> (<type>boolean</type>)
+ <indexterm>
+ <primary><varname>wal_recycle</varname> configuration parameter</primary>
+ </indexterm>
+ </term>
+ <listitem>
+ <para>
+ When this parameter is <literal>on</literal>, past log file segments
+ in the <filename>pg_wal</filename> directory are recycled for future
+ use. This is the default, and it is appropriate for filesystems which
+ reuse the same disk blocks on write. On these filesystems, this setting
+ helps to ensure reliable operation if the filesystem fills up.
+ </para>
+
+ <para>
+ Turning this parameter off causes past log file segments to be deleted
+ when no longer needed. This setting is only appropriate for
+ copy-on-write filesystems which allocate new disk blocks on every write.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="guc-wal-sender-timeout" xreflabel="wal_sender_timeout">
<term><varname>wal_sender_timeout</varname> (<type>integer</type>)
<indexterm>
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 3ee6d5c..8abe5f9 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -99,6 +99,7 @@ bool wal_log_hints = false;
bool wal_compression = false;
char *wal_consistency_checking_string = NULL;
bool *wal_consistency_checking = NULL;
+bool wal_recycle = true;
bool log_checkpoints = false;
int sync_method = DEFAULT_SYNC_METHOD;
int wal_level = WAL_LEVEL_MINIMAL;
@@ -4054,7 +4055,7 @@ RemoveXlogFile(const char *segname, XLogRecPtr PriorRedoPtr, XLogRecPtr endptr)
* segment. Only recycle normal files, pg_standby for example can create
* symbolic links pointing to a separate archive directory.
*/
- if (endlogSegNo <= recycleSegNo &&
+ if (wal_recycle && endlogSegNo <= recycleSegNo &&
lstat(path, &statbuf) == 0 && S_ISREG(statbuf.st_mode) &&
InstallXLogFileSegment(&endlogSegNo, path,
true, recycleSegNo, true))
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index a88ea6c..af9c6f5 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -1121,6 +1121,16 @@ static struct config_bool ConfigureNamesBool[] =
},
{
+ {"wal_recycle", PGC_SUSET, WAL_SETTINGS,
+ gettext_noop("WAL recycling enabled."),
+ NULL
+ },
+ &wal_recycle,
+ true,
+ NULL, NULL, NULL
+ },
+
+ {
{"log_checkpoints", PGC_SIGHUP, LOGGING_WHAT,
gettext_noop("Logs each checkpoint."),
NULL
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index c0d3fb8..1629b19 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -198,6 +198,7 @@
#wal_compression = off # enable compression of full-page writes
#wal_log_hints = off # also do full page writes of non-critical updates
# (change requires restart)
+#wal_recycle = off # do not recycle WAL files
#wal_buffers = -1 # min 32kB, -1 sets based on shared_buffers
# (change requires restart)
#wal_writer_delay = 200ms # 1-10000 milliseconds
diff --git a/src/include/access/xlog.h b/src/include/access/xlog.h
index 421ba6d..cf13f12 100644
--- a/src/include/access/xlog.h
+++ b/src/include/access/xlog.h
@@ -106,6 +106,7 @@ extern bool EnableHotStandby;
extern bool fullPageWrites;
extern bool wal_log_hints;
extern bool wal_compression;
+extern bool wal_recycle;
extern bool *wal_consistency_checking;
extern char *wal_consistency_checking_string;
extern bool log_checkpoints;
--
2.2.1
On Wed, Jul 18, 2018 at 3:22 PM, Jerry Jelinek <jerry.jelinek@joyent.com> wrote:
I've gotten a wide variety of feedback on the proposed patch. The comments
range from rough approval through various discussion about alternative
solutions. At this point I am unsure if this patch is rejected or if it
would be accepted once I had the updated man page changes that were
discussed last week.
I have attached an updated patch which does incorporate man page changes, in
case that is the blocker. However, if this patch is simply rejected, I'd
appreciate it if I could get a definitive statement to that effect.
1. There's no such thing as a definitive statement of the community's
opinion, generally speaking, because as a rule the community consists
of many different people who rarely all agree on anything but the most
uncontroversial of topics. We could probably all agree that the sun
rises in the East, or at least has historically done so, and that,
say, typos are bad.
2. You can't really expect somebody else to do the work of forging
consensus on your behalf. Sure, that may happen, if somebody else
takes an interest in the problem. But, really, since you started the
thread, most likely you're the one most interested. If you're not
willing to take the time to discuss the issues with the individual
people who have responded, promote your own views, investigate
proposed alternatives, etc., it's unlikely anybody else is going to do
it.
3. It's not unusual for a patch of this complexity to take months to
get committed; it's only been a few weeks. If it's important to you,
don't give up now.
It seems to me that there are several people in favor of this patch,
some others with questions and concerns, and pretty much nobody
adamantly opposed. So I would guess that this has pretty good odds in
the long run. But you're not going to get anywhere by pushing for a
commit-or-reject-right-now. It's been less than 24 hours since Tomas
proposed to do further benchmarking if we could agree on what to test
(you haven't made any suggestions in response) and it's also been less
than 24 hours since Peter and I both sent emails about whether it
should be controlled by its own GUC or in some other way. The
discussion is very much actively continuing. It's too soon to decide
on the conclusion, but it would be a good idea for you to keep
participating.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Tue, Jul 17, 2018 at 4:47 PM, Tomas Vondra
<tomas.vondra@2ndquadrant.com> wrote:
Makes sense, I guess. But I think many claims made in this thread are
mostly just assumptions at this point, based on our beliefs how CoW or
non-CoW filesystems work. The results from ZFS (showing positive impact)
are an exception, but that's about it. I'm sure those claims are based
on real-world experience and are likely true, but it'd be good to have
data from a wider range of filesystems / configurations etc. so that we
can give better recommendations to users, for example.
I agree that there's a lot of assuming going on.
That's something I can help with, assuming we agree on what tests we
want to do. I'd say the usual battery of write-only pgbench tests with
different scales (fits into s_b, fits into RAM, larger than RAM) on
common Linux filesystems (ext4, xfs, btrfs) and zfsonlinux, and
different types of storage would be enough. I don't have any FreeBSD box
available, unfortunately.
Those sound like reasonable tests. I also don't think we need to have
perfect recommendations. Some general guidance is good enough for a
start and we can refine it as we know more. IMHO, anyway.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
At Tue, 17 Jul 2018 21:01:03 -0400, Robert Haas <robertmhaas@gmail.com> wrote in <CA+Tgmob0hs=eZ7RquTLzYUwAuHtgORvPxjNXgifZ04he-JK7Rw@mail.gmail.com>
On Tue, Jul 17, 2018 at 3:12 PM, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:
The actual implementation could use another round of consideration. I
wonder how this should interact with min_wal_size. Wouldn't
min_wal_size = 0 already do what we need (if you could set it to 0,
which is currently not possible)?
Hmm, would that actually disable recycling, or just make it happen only rarely?
It doesn't. Instead, setting max_wal_size smaller than the checkpoint
interval should do that.
While considering this, I found a bug in 4b0d28de06, which
removed the prior checkpoint from the control file. It actually trims the
segments before the last checkpoint's redo segment, but recycling
is still considered based on the *previous* checkpoint. As a
result, min_wal_size doesn't work as advertised. Specifically, setting
min/max_wal_size to 48MB, advancing four or more segments, and then
running two checkpoints leaves just one segment, which is less than
min_wal_size.
The attached patch fixes that. One arguable point on this would
be the removal of the behavior when RemoveXLogFile(name,
InvalidXLogRecPtr, ..).
The only place calling the function with that parameter is
timeline switching. Previously, 10 segments were unconditionally
recycled after the switchpoint, but the reason for that behavior was that
we didn't have the information on the previous checkpoint at hand at the
time. Now we can use the timeline switch point as an approximation
of the last checkpoint's redo point, which allows us to apply
min/max_wal_size properly there.
regards.
--
Kyotaro Horiguchi
NTT Open Source Software Center
Attachments:
0001-Fix-calculation-base-of-WAL-recycling.patchtext/x-patch; charset=us-asciiDownload
From 2a59a0fb21c0272a445fe7f05fb68ea1aafb3e21 Mon Sep 17 00:00:00 2001
From: Kyotaro Horiguchi <horiguchi.kyotaro@lab.ntt.co.jp>
Date: Thu, 19 Jul 2018 12:13:56 +0900
Subject: [PATCH] Fix calculation base of WAL recycling
The commit 4b0d28de06 removed the prior checkpoint and related things
but that leaves WAL recycling based on the prior checkpoint. This
makes max_wal_size and min_wal_size work incorrectly. This patch makes
WAL recycling be based on the last checkpoint.
---
src/backend/access/transam/xlog.c | 37 +++++++++++++++++--------------------
1 file changed, 17 insertions(+), 20 deletions(-)
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 4049deb968..fdc21df122 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -2287,7 +2287,7 @@ assign_checkpoint_completion_target(double newval, void *extra)
* XLOG segments? Returns the highest segment that should be preallocated.
*/
static XLogSegNo
-XLOGfileslop(XLogRecPtr PriorRedoPtr)
+XLOGfileslop(XLogRecPtr RedoRecPtr)
{
XLogSegNo minSegNo;
XLogSegNo maxSegNo;
@@ -2299,9 +2299,9 @@ XLOGfileslop(XLogRecPtr PriorRedoPtr)
* correspond to. Always recycle enough segments to meet the minimum, and
* remove enough segments to stay below the maximum.
*/
- minSegNo = PriorRedoPtr / wal_segment_size +
+ minSegNo = RedoRecPtr / wal_segment_size +
ConvertToXSegs(min_wal_size_mb, wal_segment_size) - 1;
- maxSegNo = PriorRedoPtr / wal_segment_size +
+ maxSegNo = RedoRecPtr / wal_segment_size +
ConvertToXSegs(max_wal_size_mb, wal_segment_size) - 1;
/*
@@ -2316,7 +2316,7 @@ XLOGfileslop(XLogRecPtr PriorRedoPtr)
/* add 10% for good measure. */
distance *= 1.10;
- recycleSegNo = (XLogSegNo) ceil(((double) PriorRedoPtr + distance) /
+ recycleSegNo = (XLogSegNo) ceil(((double) RedoRecPtr + distance) /
wal_segment_size);
if (recycleSegNo < minSegNo)
@@ -3896,12 +3896,12 @@ RemoveTempXlogFiles(void)
/*
* Recycle or remove all log files older or equal to passed segno.
*
- * endptr is current (or recent) end of xlog, and PriorRedoRecPtr is the
- * redo pointer of the previous checkpoint. These are used to determine
+ * endptr is current (or recent) end of xlog, and RedoRecPtr is the
+ * redo pointer of the last checkpoint. These are used to determine
* whether we want to recycle rather than delete no-longer-wanted log files.
*/
static void
-RemoveOldXlogFiles(XLogSegNo segno, XLogRecPtr PriorRedoPtr, XLogRecPtr endptr)
+RemoveOldXlogFiles(XLogSegNo segno, XLogRecPtr RedoRecPtr, XLogRecPtr endptr)
{
DIR *xldir;
struct dirent *xlde;
@@ -3944,7 +3944,7 @@ RemoveOldXlogFiles(XLogSegNo segno, XLogRecPtr PriorRedoPtr, XLogRecPtr endptr)
/* Update the last removed location in shared memory first */
UpdateLastRemovedPtr(xlde->d_name);
- RemoveXlogFile(xlde->d_name, PriorRedoPtr, endptr);
+ RemoveXlogFile(xlde->d_name, RedoRecPtr, endptr);
}
}
}
@@ -4006,9 +4006,11 @@ RemoveNonParentXlogFiles(XLogRecPtr switchpoint, TimeLineID newTLI)
* remove it yet. It should be OK to remove it - files that are
* not part of our timeline history are not required for recovery
* - but seems safer to let them be archived and removed later.
+ * Recycling based on the point gives good approximate since we
+ * have just done timeline switching.
*/
if (!XLogArchiveIsReady(xlde->d_name))
- RemoveXlogFile(xlde->d_name, InvalidXLogRecPtr, switchpoint);
+ RemoveXlogFile(xlde->d_name, switchpoint, switchpoint);
}
}
@@ -4018,14 +4020,12 @@ RemoveNonParentXlogFiles(XLogRecPtr switchpoint, TimeLineID newTLI)
/*
* Recycle or remove a log file that's no longer needed.
*
- * endptr is current (or recent) end of xlog, and PriorRedoRecPtr is the
- * redo pointer of the previous checkpoint. These are used to determine
+ * endptr is current (or recent) end of xlog, and RedoRecPtr is the
+ * redo pointer of the last checkpoint. These are used to determine
* whether we want to recycle rather than delete no-longer-wanted log files.
- * If PriorRedoRecPtr is not known, pass invalid, and the function will
- * recycle, somewhat arbitrarily, 10 future segments.
*/
static void
-RemoveXlogFile(const char *segname, XLogRecPtr PriorRedoPtr, XLogRecPtr endptr)
+RemoveXlogFile(const char *segname, XLogRecPtr RedoRecPtr, XLogRecPtr endptr)
{
char path[MAXPGPATH];
#ifdef WIN32
@@ -4039,10 +4039,7 @@ RemoveXlogFile(const char *segname, XLogRecPtr PriorRedoPtr, XLogRecPtr endptr)
* Initialize info about where to try to recycle to.
*/
XLByteToSeg(endptr, endlogSegNo, wal_segment_size);
- if (PriorRedoPtr == InvalidXLogRecPtr)
- recycleSegNo = endlogSegNo + 10;
- else
- recycleSegNo = XLOGfileslop(PriorRedoPtr);
+ recycleSegNo = XLOGfileslop(RedoRecPtr);
snprintf(path, MAXPGPATH, XLOGDIR "/%s", segname);
@@ -9057,7 +9054,7 @@ CreateCheckPoint(int flags)
XLByteToSeg(RedoRecPtr, _logSegNo, wal_segment_size);
KeepLogSeg(recptr, &_logSegNo);
_logSegNo--;
- RemoveOldXlogFiles(_logSegNo, PriorRedoPtr, recptr);
+ RemoveOldXlogFiles(_logSegNo, RedoRecPtr, recptr);
}
/*
@@ -9410,7 +9407,7 @@ CreateRestartPoint(int flags)
if (RecoveryInProgress())
ThisTimeLineID = replayTLI;
- RemoveOldXlogFiles(_logSegNo, PriorRedoPtr, endptr);
+ RemoveOldXlogFiles(_logSegNo, RedoRecPtr, endptr);
/*
* Make more log segments if needed. (Do this after recycling old log
--
2.16.3
At Thu, 19 Jul 2018 12:37:26 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20180719.123726.00899102.horiguchi.kyotaro@lab.ntt.co.jp>
While considering this, I found a bug in 4b0d28de06, which
removed the prior checkpoint from the control file. It actually trims the
segments before the last checkpoint's redo segment, but recycling
is still considered based on the *previous* checkpoint. As a
result, min_wal_size doesn't work as advertised. Specifically, setting
min/max_wal_size to 48MB, advancing four or more segments, and then
running two checkpoints leaves just one segment, which is less than
min_wal_size.
The attached patch fixes that. One arguable point on this would
be the removal of the behavior when RemoveXLogFile(name,
InvalidXLogRecPtr, ..).
The only place calling the function with that parameter is
timeline switching. Previously, 10 segments were unconditionally
recycled after the switchpoint, but the reason for that behavior was that
we didn't have the information on the previous checkpoint at hand at the
time. Now we can use the timeline switch point as an approximation
of the last checkpoint's redo point, which allows us to apply
min/max_wal_size properly there.
Fixed a comment in the patch, which was unreadable.
regards.
--
Kyotaro Horiguchi
NTT Open Source Software Center
Attachments:
v2-0001-Fix-calculation-base-of-WAL-recycling.patchtext/x-patch; charset=us-asciiDownload
From f2b1a0b6360263d4ddf725075daf4b56800e3e18 Mon Sep 17 00:00:00 2001
From: Kyotaro Horiguchi <horiguchi.kyotaro@lab.ntt.co.jp>
Date: Thu, 19 Jul 2018 12:13:56 +0900
Subject: [PATCH] Fix calculation base of WAL recycling
The commit 4b0d28de06 removed the prior checkpoint and related things
but that leaves WAL recycling based on the prior checkpoint. This
makes max_wal_size and min_wal_size work incorrectly. This patch makes
WAL recycling be based on the last checkpoint.
---
src/backend/access/transam/xlog.c | 37 +++++++++++++++++--------------------
1 file changed, 17 insertions(+), 20 deletions(-)
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 4049deb968..d7a61af8f1 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -2287,7 +2287,7 @@ assign_checkpoint_completion_target(double newval, void *extra)
* XLOG segments? Returns the highest segment that should be preallocated.
*/
static XLogSegNo
-XLOGfileslop(XLogRecPtr PriorRedoPtr)
+XLOGfileslop(XLogRecPtr RedoRecPtr)
{
XLogSegNo minSegNo;
XLogSegNo maxSegNo;
@@ -2299,9 +2299,9 @@ XLOGfileslop(XLogRecPtr PriorRedoPtr)
* correspond to. Always recycle enough segments to meet the minimum, and
* remove enough segments to stay below the maximum.
*/
- minSegNo = PriorRedoPtr / wal_segment_size +
+ minSegNo = RedoRecPtr / wal_segment_size +
ConvertToXSegs(min_wal_size_mb, wal_segment_size) - 1;
- maxSegNo = PriorRedoPtr / wal_segment_size +
+ maxSegNo = RedoRecPtr / wal_segment_size +
ConvertToXSegs(max_wal_size_mb, wal_segment_size) - 1;
/*
@@ -2316,7 +2316,7 @@ XLOGfileslop(XLogRecPtr PriorRedoPtr)
/* add 10% for good measure. */
distance *= 1.10;
- recycleSegNo = (XLogSegNo) ceil(((double) PriorRedoPtr + distance) /
+ recycleSegNo = (XLogSegNo) ceil(((double) RedoRecPtr + distance) /
wal_segment_size);
if (recycleSegNo < minSegNo)
@@ -3896,12 +3896,12 @@ RemoveTempXlogFiles(void)
/*
* Recycle or remove all log files older or equal to passed segno.
*
- * endptr is current (or recent) end of xlog, and PriorRedoRecPtr is the
- * redo pointer of the previous checkpoint. These are used to determine
+ * endptr is current (or recent) end of xlog, and RedoRecPtr is the
+ * redo pointer of the last checkpoint. These are used to determine
* whether we want to recycle rather than delete no-longer-wanted log files.
*/
static void
-RemoveOldXlogFiles(XLogSegNo segno, XLogRecPtr PriorRedoPtr, XLogRecPtr endptr)
+RemoveOldXlogFiles(XLogSegNo segno, XLogRecPtr RedoRecPtr, XLogRecPtr endptr)
{
DIR *xldir;
struct dirent *xlde;
@@ -3944,7 +3944,7 @@ RemoveOldXlogFiles(XLogSegNo segno, XLogRecPtr PriorRedoPtr, XLogRecPtr endptr)
/* Update the last removed location in shared memory first */
UpdateLastRemovedPtr(xlde->d_name);
- RemoveXlogFile(xlde->d_name, PriorRedoPtr, endptr);
+ RemoveXlogFile(xlde->d_name, RedoRecPtr, endptr);
}
}
}
@@ -4006,9 +4006,11 @@ RemoveNonParentXlogFiles(XLogRecPtr switchpoint, TimeLineID newTLI)
* remove it yet. It should be OK to remove it - files that are
* not part of our timeline history are not required for recovery
* - but seems safer to let them be archived and removed later.
+ * Here, switchpoint is a good approximate of RedoRecPtr for
+ * RemoveXlogFile since we have just done timeline switching.
*/
if (!XLogArchiveIsReady(xlde->d_name))
- RemoveXlogFile(xlde->d_name, InvalidXLogRecPtr, switchpoint);
+ RemoveXlogFile(xlde->d_name, switchpoint, switchpoint);
}
}
@@ -4018,14 +4020,12 @@ RemoveNonParentXlogFiles(XLogRecPtr switchpoint, TimeLineID newTLI)
/*
* Recycle or remove a log file that's no longer needed.
*
- * endptr is current (or recent) end of xlog, and PriorRedoRecPtr is the
- * redo pointer of the previous checkpoint. These are used to determine
+ * endptr is current (or recent) end of xlog, and RedoRecPtr is the
+ * redo pointer of the last checkpoint. These are used to determine
* whether we want to recycle rather than delete no-longer-wanted log files.
- * If PriorRedoRecPtr is not known, pass invalid, and the function will
- * recycle, somewhat arbitrarily, 10 future segments.
*/
static void
-RemoveXlogFile(const char *segname, XLogRecPtr PriorRedoPtr, XLogRecPtr endptr)
+RemoveXlogFile(const char *segname, XLogRecPtr RedoRecPtr, XLogRecPtr endptr)
{
char path[MAXPGPATH];
#ifdef WIN32
@@ -4039,10 +4039,7 @@ RemoveXlogFile(const char *segname, XLogRecPtr PriorRedoPtr, XLogRecPtr endptr)
* Initialize info about where to try to recycle to.
*/
XLByteToSeg(endptr, endlogSegNo, wal_segment_size);
- if (PriorRedoPtr == InvalidXLogRecPtr)
- recycleSegNo = endlogSegNo + 10;
- else
- recycleSegNo = XLOGfileslop(PriorRedoPtr);
+ recycleSegNo = XLOGfileslop(RedoRecPtr);
snprintf(path, MAXPGPATH, XLOGDIR "/%s", segname);
@@ -9057,7 +9054,7 @@ CreateCheckPoint(int flags)
XLByteToSeg(RedoRecPtr, _logSegNo, wal_segment_size);
KeepLogSeg(recptr, &_logSegNo);
_logSegNo--;
- RemoveOldXlogFiles(_logSegNo, PriorRedoPtr, recptr);
+ RemoveOldXlogFiles(_logSegNo, RedoRecPtr, recptr);
}
/*
@@ -9410,7 +9407,7 @@ CreateRestartPoint(int flags)
if (RecoveryInProgress())
ThisTimeLineID = replayTLI;
- RemoveOldXlogFiles(_logSegNo, PriorRedoPtr, endptr);
+ RemoveOldXlogFiles(_logSegNo, RedoRecPtr, endptr);
/*
* Make more log segments if needed. (Do this after recycling old log
--
2.16.3
At Thu, 19 Jul 2018 12:37:26 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20180719.123726.00899102.horiguchi.kyotaro@lab.ntt.co.jp>
At Tue, 17 Jul 2018 21:01:03 -0400, Robert Haas <robertmhaas@gmail.com> wrote in <CA+Tgmob0hs=eZ7RquTLzYUwAuHtgORvPxjNXgifZ04he-JK7Rw@mail.gmail.com>
On Tue, Jul 17, 2018 at 3:12 PM, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:
The actual implementation could use another round of consideration. I
wonder how this should interact with min_wal_size. Wouldn't
min_wal_size = 0 already do what we need (if you could set it to 0,
which is currently not possible)?
Hmm, would that actually disable recycling, or just make it happen only rarely?
It doesn't. Instead, setting max_wal_size smaller than the checkpoint
interval should do that.
And that's wrong. It makes checkpoints unreasonably frequent.
My result is that we cannot disable recycling perfectly just by
setting min/max_wal_size.
regards.
--
Kyotaro Horiguchi
NTT Open Source Software Center
At Thu, 19 Jul 2018 12:59:26 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20180719.125926.257896670.horiguchi.kyotaro@lab.ntt.co.jp>
At Thu, 19 Jul 2018 12:37:26 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20180719.123726.00899102.horiguchi.kyotaro@lab.ntt.co.jp>
At Tue, 17 Jul 2018 21:01:03 -0400, Robert Haas <robertmhaas@gmail.com> wrote in <CA+Tgmob0hs=eZ7RquTLzYUwAuHtgORvPxjNXgifZ04he-JK7Rw@mail.gmail.com>
On Tue, Jul 17, 2018 at 3:12 PM, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:
The actual implementation could use another round of consideration. I
wonder how this should interact with min_wal_size. Wouldn't
min_wal_size = 0 already do what we need (if you could set it to 0,
which is currently not possible)?
Hmm, would that actually disable recycling, or just make it happen only rarely?
It doesn't. Instead, setting max_wal_size smaller than the checkpoint
interval should do that.
And that's wrong. It makes checkpoints unreasonably frequent.
My result is that we cannot disable recycling perfectly just by
setting min/max_wal_size.
s/result/conclusion/;
regards.
--
Kyotaro Horiguchi
NTT Open Source Software Center
Hi Robert,
I'm new to the PostgreSQL community, so I'm not familiar with how patches
are accepted here. Thanks for your detailed explanation. I do want to keep
pushing on this. I'll respond separately to Peter and to Tomas regarding
their emails.
Thanks again,
Jerry
On Wed, Jul 18, 2018 at 1:43 PM, Robert Haas <robertmhaas@gmail.com> wrote:
Peter,
Thanks for your feedback. I'm happy to change the name of the tunable or to
update the man page in any way. I have already posted an updated patch
with changes to the man page which I think may address your concerns there,
but please let me know if that still needs more work. It looks like Kyotaro
already did some exploration, and tuning the min/max for the WAL size won't
solve this problem. Just let me know if there is anything else here which
you think I should look into.
Thanks again,
Jerry
On Tue, Jul 17, 2018 at 1:12 PM, Peter Eisentraut <
peter.eisentraut@2ndquadrant.com> wrote:
Tomas,
Thanks for your offer to run some tests on different OSes and filesystems
that you have. Anything you can provide here would be much appreciated. I
don't have anything other than our native SmartOS/ZFS based configurations,
but I might be able to setup some VMs and get results that way. I should be
able to setup a VM running FreeBSD. If you have a chance to collect some
data, just let me know the exact benchmarks you ran and I'll run the same
things on the FreeBSD VM. Obviously you're under no obligation to do any of
this, so if you don't have time, just let me know and I'll see what I can
do on my own.
Thanks again,
Jerry
On Tue, Jul 17, 2018 at 2:47 PM, Tomas Vondra <tomas.vondra@2ndquadrant.com>
wrote:
On 07/21/2018 12:04 AM, Jerry Jelinek wrote:
Tomas,
Thanks for your offer to run some tests on different OSes and
filesystems that you have. Anything you can provide here would be much
appreciated. I don't have anything other than our native SmartOS/ZFS
based configurations, but I might be able to setup some VMs and get
results that way. I should be able to setup a VM running FreeBSD. If you
have a chance to collect some data, just let me know the exact
benchmarks you ran and I'll run the same things on the FreeBSD VM.
Obviously you're under no obligation to do any of this, so if you don't
have time, just let me know and I'll see what I can do on my own.
Sounds good. I plan to start with the testing in a couple of days - the
boxes are currently running some other tests at the moment. Once I have
some numbers I'll share them here, along with the test scripts etc.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
I've set up FreeBSD 11.1 in a VM and set up a ZFS filesystem to use for the
Postgres DB. I ran the following simple benchmark.
pgbench -M prepared -c 4 -j 4 -T 60 postgres
Since it is in a VM and I can't control what else might be happening on the
box, I ran this several times at different times of the day and averaged
the results. Here is the average TPS and latency with WAL recycling on (the
default) and off.
recycling on
avg tps: 407.4
avg lat: 9.8 ms
recycling off
avg tps: 425.7
avg lat: 9.4 ms
Given my uncertainty about what else is running on the box, I think it is
reasonable to say these are essentially equal, but I can collect more data
at more times of day if necessary. I'm also happy to collect more
data if people have suggestions for different parameters on the pgbench run.
Thanks,
Jerry
On Fri, Jul 20, 2018 at 4:04 PM, Jerry Jelinek <jerry.jelinek@joyent.com>
wrote:
On 19/07/2018 05:59, Kyotaro HORIGUCHI wrote:
My result is that we cannot disable recycling perfectly just by
setting min/max_wal_size.
Maybe the behavior of min_wal_size should be rethought? Elsewhere in
this thread, there was also a complaint that max_wal_size isn't actually
a max. It seems like there might be some interest in making these
settings more accurate.
I mean, what is the point of the min_wal_size setting if not controlling
this very thing?
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
At Mon, 30 Jul 2018 10:43:20 +0200, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote in <d802e799-c699-01f7-906b-921f3b183be6@2ndquadrant.com>
On 19/07/2018 05:59, Kyotaro HORIGUCHI wrote:
My result is that we cannot disable recycling perfectly just by
setting min/max_wal_size.
Maybe the behavior of min_wal_size should be rethought? Elsewhere in
this thread, there was also a complaint that max_wal_size isn't actually
a max. It seems like there might be some interest in making these
settings more accurate.
I mean, what is the point of the min_wal_size setting if not controlling
this very thing?
Sorry, I forgot to mention it.
The definition of the variable is "We won't reduce the number of
segments below this value (specified in MB) even if we don't need that
many segments until the next checkpoint". I couldn't come up with a
proper value for it that would express the behavior "I don't want to
keep (recycle) preallocated segments even for the expected checkpoint
interval". In short, I thought at the time that it was not intuitive.
Reconsidering the candidate values:
0 seems to keep segments for the next checkpoint interval.
-1 seems to just disable segment reduction (which is the same as
setting it to the same value as max_wal_size?)
Maybe we could use -1 for this purpose.
guc.c
| {"min_wal_size", PGC_SIGHUP, WAL_CHECKPOINTS,
| gettext_noop("Sets the minimum size to shrink the WAL to."),
+ gettext_noop("-1 turns off WAL recycling."),
# This seems somewhat... out of the blue?
wal-configuraiton.html
| The number of WAL segment files in pg_wal directory depends on
| min_wal_size, max_wal_size and the amount of WAL generated in
| previous checkpoint cycles. When old log segment files are no
| longer needed, they are removed or recycled (that is, renamed
| to become future segments in the numbered sequence). If, due to
...
| extent. min_wal_size puts a minimum on the amount of WAL files
| recycled for future usage; that much WAL is always recycled for
| future use, even if the system is idle and the WAL usage
| estimate suggests that little WAL is needed.
+ If you don't need the recycling feature, setting min_wal_size
+ to -1 disables it, and WAL files are created on
+ demand.
# I'm not sure this makes sense for readers.
Besides the above, I suppose that this also turns off
preallocation of a whole segment at first use, which could
cause problems here and there...
If we allowed a string value like 'no-prealloc' for min_wal_size,
it might be more comprehensible?
# Sorry for the scattered thoughts
regards.
--
Kyotaro Horiguchi
NTT Open Source Software Center
On Mon, Jul 30, 2018 at 4:43 AM, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:
On 19/07/2018 05:59, Kyotaro HORIGUCHI wrote:
My result is that we cannot disable recycling perfectly just by
setting min/max_wal_size.
Maybe the behavior of min_wal_size should be rethought? Elsewhere in
this thread, there was also a complaint that max_wal_size isn't actually
a max. It seems like there might be some interest in making these
settings more accurate.
I mean, what is the point of the min_wal_size setting if not controlling
this very thing?
See the logic in XLOGfileslop(). The number of segments that the
server recycles (by renaming) after a checkpoint is bounded to not
less than min_wal_size and not more than max_wal_size, but the actual
value fluctuates between those two extremes based on the number of
segments the server believes will be required before the next
checkpoint completes. Logically, min_wal_size = 0 would mean that the
number of recycled segments could be as small as zero. However, what
is being requested here is to force the number of recycled segments to
never be larger than zero, which is different.
As far as the logic in XLOGfileslop() is concerned, that would
correspond to max_wal_size = 0, not min_wal_size = 0. However, that's
an impractical setting because max_wal_size is also used in other
places, like CalculateCheckpointSegments().
In other words, min_wal_size = 0 logically means that we MIGHT NOT
recycle any WAL segments, but the desired behavior here is that we DO
NOT recycle any WAL segments.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
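A toy model of that clamping (simplified and mine, for illustration;
the real XLOGfileslop() also estimates WAL consumption per checkpoint
cycle, as shown in Kyotaro's patch earlier in the thread) makes the
distinction concrete: min_wal_size = 0 would only lower the floor,
while recycling up to the ceiling remains possible.

#include <stdio.h>

static unsigned long
recycle_ceiling(unsigned long redo_seg,      /* last checkpoint's redo segment */
                unsigned long est_need_segs, /* estimated segments to next ckpt */
                int min_wal_size_mb, int max_wal_size_mb, int seg_mb)
{
    unsigned long min_seg = redo_seg + min_wal_size_mb / seg_mb - 1;
    unsigned long max_seg = redo_seg + max_wal_size_mb / seg_mb - 1;
    unsigned long target  = redo_seg + est_need_segs;

    if (target < min_seg)
        target = min_seg;
    if (target > max_seg)
        target = max_seg;
    return target;              /* recycle segments up to this number */
}

int
main(void)
{
    /* min/max_wal_size = 48MB, 16MB segments: at most 2 segments past redo */
    printf("%lu\n", recycle_ceiling(100, 5, 48, 48, 16));   /* prints 102 */
    return 0;
}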
After I posted my previous FreeBSD results, I had a private request to run
the test for a longer period and on a larger VM.
I set up a new 8 CPU, 16 GB VM. This is the largest I can create and is on a
different machine from the previous VM, so the results cannot be directly
compared. I reran the same pgbench run but for an hour. Here are the
aggregated results
recycling on
avg tps: 470.3
avg lat: 8.5 ms
recycling off
avg tps: 472.4
avg lat: 8.5 ms
I think this still shows that there is no regression on FreeBSD/ZFS with
WAL recycling off.
Thanks,
Jerry
On Fri, Jul 27, 2018 at 1:32 PM, Jerry Jelinek <jerry.jelinek@joyent.com>
wrote:
On 07/22/2018 10:50 PM, Tomas Vondra wrote:
On 07/21/2018 12:04 AM, Jerry Jelinek wrote:
Tomas,
Thanks for your offer to run some tests on different OSes and
filesystems that you have. Anything you can provide here would be much
appreciated. I don't have anything other than our native SmartOS/ZFS
based configurations, but I might be able to setup some VMs and get
results that way. I should be able to setup a VM running FreeBSD. If you
have a chance to collect some data, just let me know the exact
benchmarks you ran and I'll run the same things on the FreeBSD VM.
Obviously you're under no obligation to do any of this, so if you don't
have time, just let me know and I'll see what I can do on my own.
Sounds good. I plan to start with the testing in a couple of days - the
boxes are currently running some other tests at the moment. Once I have
some numbers I'll share them here, along with the test scripts etc.
I do have initial results from one of the boxes. It's not complete, and
further tests are still running, but I suppose it's worth sharing what I
have at this point.
As usual, the full data and ugly scripts are available in a git repo:
https://bitbucket.org/tvondra/wal-recycle-test-xeon/src/master/
Considering that WAL recycling only kicks in after a while, I've decided
to do a single long (6-hour) pgbench run for each scale, instead of the
usual "multiple short runs" approach.
So far I've tried on these filesystems:
* btrfs
* ext4 / delayed allocation enabled (default)
* ext4 / delayed allocation disabled
* xfs
The machine has 64GB of RAM, so I've chosen scales 200 (fits into
shared_buffers), 2000 (in RAM) and 8000 (exceeds RAM), to trigger
different I/O patterns. I've used the per-second aggregated logging,
with the raw data available in the git repo. The charts attached to this
message are per-minute tps averages, to demonstrate the overall impact
on throughput, which would otherwise be hidden in jitter.
All these tests are done on Optane 900P 280GB SSD, which is pretty nice
storage but the limited size is somewhat tight for the scale 8000 test.
For the traditional filesystems (ext4, xfs) the WAL recycling seems to
be clearly beneficial - for the in-memory datasets the difference seems
to be negligible, but for the largest scale it gives maybe +20% benefit.
The delalloc/nodelalloc choice on ext4 makes pretty much no difference,
and both xfs and ext4 perform almost exactly the same here - the main
difference seems to be that on ext4 the largest scale ran out of disk
space while xfs managed to keep running. Clearly there's a difference in
free space management, but that's unrelated to this patch.
On BTRFS, the results on the two smaller scales show about the same
behavior (minimal difference between WAL recycling and not recycling),
except that the throughput is perhaps 25-50% of ext4/xfs. Fair enough, a
different type of filesystem, and LVM snapshots would likely have the
same impact. But no clear win with recycling disabled. On the largest
scale, the device ran out of space after 10-20 minutes, which makes it
impossible to draw any reasonable conclusions :-(
I plan to do some more tests with zfsonlinux, and LVM with snapshots. I
wonder if those will show some benefit of disabling the WAL recycling.
And then, if time permits, I'll redo some of those tests with a small
SATA-based RAID array (aka spinning rust). Mostly out of curiosity.
FWIW I'd planned to do these tests on another machine, but I ran
into some strange data corruption issues on it, and I've spent quite a
bit of time investigating that and trying to reproduce it, which delayed
these tests a bit. And of course, once I added elog(PANIC) to the right
place it stopped happening :-/
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Tomas,
Thanks for doing all of this testing. Your testing and results are much
more detailed than anything I did. Please let me know if there is any
follow-up that I should attempt.
Thanks again,
Jerry
On 2018-Aug-21, Jerry Jelinek wrote:
Tomas,
Thanks for doing all of this testing. Your testing and results are much
more detailed than anything I did. Please let me know if there is any
follow-up that I should attempt.
Either I completely misread these charts, or there is practically no
point in disabling WAL recycling (except on btrfs, but then nobody in
their right minds would use it for Postgres given these numbers anyway).
I suppose that the use case that was initially proposed (ZFS) has not
yet been tested so we shouldn't reject this patch immediately, but
perhaps what Joyent people should be doing now is running Tomas' test
script on ZFS and see what the results look like.
--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 2018-08-22 11:06:17 -0300, Alvaro Herrera wrote:
On 2018-Aug-21, Jerry Jelinek wrote:
Tomas,
Thanks for doing all of this testing. Your testing and results are much
more detailed than anything I did. Please let me know if there is any
follow-up that I should attempt.
Either I completely misread these charts, or there is practically no
point to disabling WAL recycling (except on btrfs, but then nobody in
their right minds would use it for Postgres given these numbers anyway).
I suppose that the use case that was initially proposed (ZFS) has not
yet been tested so we shouldn't reject this patch immediately, but
perhaps what Joyent people should be doing now is running Tomas' test
script on ZFS and see what the results look like.
IDK, I would see it less negatively. Yes, we should put a BIG FAT warning to never use this on non-COW filesystems. And IMO ZFS (and also
btrfs) sucks badly here, even though they really shouldn't. But given
the positive impact for zfs & btrfs, and the low code complexity, I
think it's not insane to provide this tunable.
Greetings,
Andres Freund
On 2018-Aug-22, Andres Freund wrote:
On 2018-08-22 11:06:17 -0300, Alvaro Herrera wrote:
I suppose that the use case that was initially proposed (ZFS) has not
yet been tested so we shouldn't reject this patch immediately, but
perhaps what Joyent people should be doing now is running Tomas' test
script on ZFS and see what the results look like.
IDK, I would see it less negatively. Yes, we should put a BIG FAT warning to never use this on non-COW filesystems. And IMO ZFS (and also
btrfs) sucks badly here, even though they really shouldn't. But given
the positive impact for zfs & btrfs, and the low code complexity, I
think it's not insane to provide this tunable.
Yeah, but let's see some ZFS numbers first :-)
--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Alvaro,
I have previously posted ZFS numbers for SmartOS and FreeBSD to this
thread, although not with the exact same benchmark runs that Tomas did.
I think the main purpose of running the benchmarks is to demonstrate that
there is no significant performance regression with WAL recycling disabled on a COW filesystem such as ZFS (which is arguably what one would expect on a COW filesystem). I've tried to make it clear in the doc change with this
patch that this tunable is only applicable to COW filesystems. I do not
think the benchmarks will be able to recreate the problematic performance
state that was originally described in Dave's email thread here:
/messages/by-id/CACukRjO7DJvub8e2AijOayj8BfKK3XXBTwu3KKARiTr67M3E3w@mail.gmail.com
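To be concrete, with the patch applied this is a single boolean GUC in postgresql.conf, and the default preserves the current recycling behavior:
  # consider turning this off only on COW filesystems such as ZFS
  wal_recycle = off        # default is 'on' (recycle old WAL segments)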
Thanks,
Jerry
On 08/25/2018 12:11 AM, Jerry Jelinek wrote:
Alvaro,
I have previously posted ZFS numbers for SmartOS and FreeBSD to this
thread, although not with the exact same benchmark runs that Tomas did.
I think the main purpose of running the benchmarks is to demonstrate that there is no significant performance regression with WAL recycling disabled on a COW filesystem such as ZFS (which is arguably what one would expect on a COW filesystem). I've tried to make it clear in the doc change with this patch that this tunable is only applicable to COW filesystems. I do not think the benchmarks will be able to recreate the problematic performance state that was originally described in Dave's email thread here:
/messages/by-id/CACukRjO7DJvub8e2AijOayj8BfKK3XXBTwu3KKARiTr67M3E3w@mail.gmail.com
I agree - the benchmarks are valuable both to show improvement and lack
of regression. I do have some numbers from LVM/ext4 (with a snapshot recreated every minute, to trigger COW-like behavior, and without the
snapshots), and from ZFS (on Linux, using zfsonlinux 0.7.9 on kernel
4.17.17).
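The snapshot case simply keeps recreating a snapshot of the volume holding the data directory once a minute, roughly like this (volume names and sizes are made up - the actual script is in the repo):
  # drop and recreate a snapshot of the data LV every minute
  while true; do
      lvremove -f vg0/pgdata_snap 2>/dev/null
      lvcreate -s -L 10G -n pgdata_snap vg0/pgdata
      sleep 60
  done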
Attached are PDFs with summary charts, more detailed results are
available at
https://bitbucket.org/tvondra/wal-recycle-test-xeon/src/master/
lvm/ext4 (no snapshots)
-----------------------
This pretty much behaves like plain ext4, at least for scales 200 and
2000. I don't have results for scale 8000, because the test ran out of
disk space (I've used part of the device for snapshots, and it was
enough to trigger the disk space issue).
lvm/ext4 (snapshots)
---------------------
On the smallest scale (200), there's no visible difference. On scale
2000 disabling WAL reuse gives about 10% improvement (21468 vs. 23517
tps), although it's not obvious from the chart. On the largest scale
(6000, to prevent the disk space issues) the improvement is about 10%
again, but it's much clearer.
zfs (Linux)
-----------
On scale 200, there's pretty much no difference. On scale 2000, the
throughput actually decreased a bit, by about 5% - from the chart it
seems disabling the WAL reuse somewhat amplifies the impact of checkpoints,
for some reason.
I have no idea what happened at the largest scale (8000) - on master
there's a huge drop after ~120 minutes, which somewhat recovers at ~220
minutes (but not fully). Without WAL reuse there's no such drop,
although there seems to be some degradation after ~220 minutes (i.e. at
about the same time the master partially recovers). I'm not sure what to
think about this, I wonder if it might be caused by almost filling the
disk space, or something like that. I'm rerunning this with scale 600.
I'm also not sure how much we can extrapolate this to other ZFS configs
(I mean, this is a ZFS on a single SSD device, while I'd generally
expect ZFS on multiple devices, etc.).
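For context, the pool here is a single-device pool, i.e. created with something like this (device name made up, and recordsize=8k is just an example of a knob that may matter):
  zpool create tank /dev/nvme0n1
  zfs create -o recordsize=8k -o mountpoint=/pgdata tank/pgdata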
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachments:
lvm-ext4-snapshots.pdf (application/pdf)