Increase Vacuum ring buffer.
Good day, everyone.
I investigated autovacuum performance and found that it suffers a lot
from the small ring buffer. It suffers in the same way the bulk writer
suffered before Tom Lane's commit 6382448cf96:
Tom Lane <tgl@sss.pgh.pa.us> 2009-06-23 00:04:28
For bulk write operations (eg COPY IN), use a ring buffer of 16MB
instead of the 256KB limit originally enforced by a patch committed
2008-11-06. Per recent test results, the smaller size resulted in an
undesirable decrease in bulk data loading speed, due to COPY
processing frequently getting blocked for WAL flushing. This area
might need more tweaking later, but this setting seems to be good
enough for 8.4.
It is especially noticeable when the database doesn't fit in shared_buffers
but does fit in the OS file cache, and the data is intensively updated (i.e.
an OLTP load). In this scenario autovacuum with the current 256kB (32-page)
ring buffer takes 3-10 times longer than with a ring buffer increased to 16MB.
I've tested with a synthetic load, using 256MB or 1GB shared_buffers and
2-6GB tables (including indexes), with different load factors and
with/without secondary indexes on the updated columns. Tables were randomly
updated with HOT and non-HOT updates. Times before/after the buffer increase
(depending on load) were 7500sec/1200sec and 75000sec/11500sec, so the
benefit is consistently reproducible.
I didn't test cases where the database doesn't fit in the OS file cache.
The benefit will probably be smaller there, since more time will be spent on
disk reads.
I didn't test OLAP loads intensively. I've seen once that the increased
buffer slightly slows down scanning of a huge, almost-immutable table,
perhaps because of decreased CPU cache locality. But given that such a scan
is already fast, and autovacuum of an "almost immutable" table runs rarely,
I don't think it is very important.
Initially I wanted to make the BAS_BULKWRITE and BAS_VACUUM ring sizes
configurable, but after testing I don't see much gain from increasing the
ring buffer above 16MB. So I propose just a one-line change.
With regards,
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
Attachments:
0001-Set-vacuum-ring-buffer-16MB.patch (text/x-diff)
From 4ee9e83e915e42de57061b29b1b7adfeec89f531 Mon Sep 17 00:00:00 2001
From: Sokolov Yura <funny.falcon@postgrespro.ru>
Date: Tue, 18 Jul 2017 12:33:33 +0300
Subject: [PATCH] Set vacuum ring buffer 16MB
Vacuum suffers a lot from small ring buffer in a way bulk writer
suffered before Tom Lane's fix at 6382448cf96:
> the smaller size resulted in an undesirable decrease in bulk data
> loading speed, due to COPY processing frequently getting blocked
> for WAL flushing.
---
src/backend/storage/buffer/freelist.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/backend/storage/buffer/freelist.c b/src/backend/storage/buffer/freelist.c
index 9d8ae6ae8e..4f12ff9f77 100644
--- a/src/backend/storage/buffer/freelist.c
+++ b/src/backend/storage/buffer/freelist.c
@@ -546,7 +546,7 @@ GetAccessStrategy(BufferAccessStrategyType btype)
ring_size = 16 * 1024 * 1024 / BLCKSZ;
break;
case BAS_VACUUM:
- ring_size = 256 * 1024 / BLCKSZ;
+ ring_size = 16 * 1024 * 1024 / BLCKSZ;
break;
default:
--
2.11.0
On Tue, Jul 18, 2017 at 6:09 AM, Sokolov Yura
<funny.falcon@postgrespro.ru> wrote:
> I investigated autovacuum performance and found that it suffers a lot
> from the small ring buffer. [...]
> Initially I wanted to make the BAS_BULKWRITE and BAS_VACUUM ring sizes
> configurable, but after testing I don't see much gain from increasing the
> ring buffer above 16MB. So I propose just a one-line change.
I think the question for this patch is "so, why didn't we do it this
way originally?".
It's no secret that making the ring buffer larger will improve
performance -- in fact, not having a ring buffer at all would improve
performance even more. But it would also increase the likelihood that
the background work of vacuum would impact the performance of
foreground operations, which is already a pretty serious problem that
we probably don't want to make worse. I'm not certain what the right
decision is here, but I think that a careful analysis of possible
downsides is needed.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Thu, Jul 20, 2017 at 11:59 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> I think the question for this patch is "so, why didn't we do it this
> way originally?"
>
> It's no secret that making the ring buffer larger will improve
> performance -- in fact, not having a ring buffer at all would improve
> performance even more. But it would also increase the likelihood that
> the background work of vacuum would impact the performance of
> foreground operations [...]
IIRC, originally, the default shared_buffers settings was tiny.
On Thu, Jul 20, 2017 at 11:09 AM, Claudio Freire <klaussfreire@gmail.com> wrote:
> IIRC, originally, the default shared_buffers settings was tiny.
It is true that we increased the default shared_buffers value from
32MB to 128MB in f358428280953643313ee7756e0a8b8ccfde7660, but it's
also true ring buffers are capped at 1/8th of shared_buffers
regardless of anything else, so I don't think that's the explanation
here. Even if that weren't the case, how would a 4x increase in the
default size of shared_buffers (which is probably the most-commonly
changed GUC of any that PostgreSQL has) justify a 64x increase in the
size of the ring buffer?
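For reference, the cap in question is applied at the bottom of
GetAccessStrategy() in src/backend/storage/buffer/freelist.c:
    /* Make sure ring isn't an undue fraction of shared buffers */
    ring_size = Min(NBuffers / 8, ring_size);
so even a 16MB request gets clamped on small shared_buffers settings.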
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Claudio Freire <klaussfreire@gmail.com> writes:
> IIRC, originally, the default shared_buffers settings was tiny.
At the time we set the ring buffer size to 256K, the maximum
shared_buffers that initdb would configure was 32MB; and you often didn't
get that much due to SHMMAX. Now of course it's 128MB, and you'll pretty
much always get that. So there's certainly room to argue that it's time
to increase vacuum's ring buffer size, but that line of argument doesn't
justify more than ~10X increase at most.
Like Robert, I'm afraid of changing this number in a vacuum (ahem).
If you've got the default number of autovacuum workers going (3), you'd
have them thrashing a total of 3/8ths of shared memory by default, which
seems like a lot. We do need to look at the impact on foreground
processing, and not just at the speed of vacuum itself.
One idea for addressing this would be to raise the max values in the
switch, but tighten the fraction-of-shared-buffers limit just below.
I wouldn't have any objection to a 16MB ring buffer for vacuum when
it is coming out of a 1GB arena ... it just seems like a rather large
fraction of 128MB to give to a background process, especially to each
of several background processes.
Maybe the fraction-of-shared-buffers shouldn't be one size fits all,
but a different limit for each case?
regards, tom lane
On Thu, Jul 20, 2017 at 12:51 PM, Robert Haas <robertmhaas@gmail.com> wrote:
On Thu, Jul 20, 2017 at 11:09 AM, Claudio Freire <klaussfreire@gmail.com> wrote:
> It is true that we increased the default shared_buffers value from
> 32MB to 128MB in f358428280953643313ee7756e0a8b8ccfde7660, but it's
> also true ring buffers are capped at 1/8th of shared_buffers regardless
> of anything else [...] how would a 4x increase in the default size of
> shared_buffers (which is probably the most-commonly changed GUC of any
> that PostgreSQL has) justify a 64x increase in the size of the ring buffer?
I'm theorizing here, because I've not been involved in any of those
decisions. But I have been stracing and checking on vacuum quite
frequently lately, so my 2 cents:
The 4x increase in shared_buffers acknowledges increases in available
host memory over the years. It's not just about how much of
shared_buffers is dedicated to the ring buffer, but also whether we
can reasonably expect the whole ring to remain in the OS cache while
it's getting dirtied.
Vacuum will almost always dirty pages once and never again, and
flushing dirty pages back to the OS cache ASAP helps avoid a
read-modify-write cycle if the page didn't leave the OS cache. That's
more likely to happen with smaller rings than with bigger rings. But
as memory increases, the ring can be made bigger without fear of it
falling from the OS cache prematurely.
So, the 64x increase may be justifiable in absolute terms: it's not
unlikely that a 16MB buffer will be evicted from the OS cache before
vacuum is done with it, even in heavily throttled vacuums. Memory
pressure on the host would have to be insane to cause that, in modern
systems with GBs of RAM. That might not have been true in the early
days.
Now, whether autovacuum would suck up a big portion of the
shared_buffers or not, is another matter. Perhaps the ring buffer
could tune itself to whatever limit seems comfortable in that regard,
the way it does with other GUCs (like cost_limit): divide it among the
number of workers?
On 2017-07-20 17:59, Robert Haas wrote:
> I think the question for this patch is "so, why didn't we do it this
> way originally?"
>
> It's no secret that making the ring buffer larger will improve
> performance -- in fact, not having a ring buffer at all would improve
> performance even more. But it would also increase the likelihood that
> the background work of vacuum would impact the performance of
> foreground operations, which is already a pretty serious problem that
> we probably don't want to make worse. I'm not certain what the right
> decision is here, but I think that a careful analysis of possible
> downsides is needed.
Initially, ring buffer were introduced for sequential scan.
It was added for vacuum "for a company", and before introducing
vacuum used just 1 page, so giving 32 pages to was huge improvement:
d526575f893c1a4e05ebd Tom Lane <tgl@sss.pgh.pa.us> 2007-05-31 00:12:03
"Make large sequential scans and VACUUMs work in a limited-size "ring"
of
buffers, rather than blowing out the whole shared-buffer arena. Aside
from
avoiding cache spoliation, this fixes the problem that VACUUM formerly
tended
to cause a WAL flush for every page it modified, because we had it
hacked to
use only a single buffer."
Later a ring buffer was added for the bulk writer, with the same 32 pages:
85e2cedf985bfecaf43a18ca Tom Lane <tgl@sss.pgh.pa.us> 2008-11-06 23:51:15
"Improve bulk-insert performance by keeping the current target buffer pinned
(but not locked, as that would risk deadlocks). Also, make it work in a small
ring of buffers to avoid having bulk inserts trash the whole buffer arena.
Robert Haas, after an idea of Simon Riggs'."
And finally, after some real-world usage, the bulk writer's buffer was
increased:
6382448cf96a9b88 Tom Lane <tgl@sss.pgh.pa.us> 2009-06-23 00:04:28
"For bulk write operations (eg COPY IN), use a ring buffer of 16MB instead
of the 256KB limit originally enforced by a patch committed 2008-11-06.
Per recent test results, the smaller size resulted in an undesirable
decrease in bulk data loading speed, due to COPY processing frequently
getting blocked for WAL flushing. This area might need more tweaking later,
but this setting seems to be good enough for 8.4."
So, from my point of view, no one ever actually evaluated the performance of
an increased ring buffer for vacuum.
It was discussed a year ago:
/messages/by-id/CA+TgmobmP=KE-z5f7-CegXMFGRbV=hC+=Fxb2mbhpfD-ZD=-bA@mail.gmail.com
There was your incorrect assumption there, Robert:
> But all that does is force the backend to write to the operating
> system, which is where the real buffering happens.
But in fact, the vacuum process performs fsync! It happens because vacuum
evicts dirty pages from its ring buffer, and to evict a dirty page it has to
be sure that the WAL record describing the page's modification has been
fsync-ed to the WAL. Because the ring buffer is so small, vacuum almost
always has to perform that fsync by itself, and has to do it very frequently.
With a larger ring buffer, there is a greater chance that the fsync has
already been performed by the walwriter process or by another backend. And
even when vacuum does the fsync itself, it syncs the records for more of the
modified pages in its ring, so evicting the following pages is free.
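(The mechanism, roughly paraphrasing FlushBuffer() in
src/backend/storage/buffer/bufmgr.c: before a dirty buffer can be written
out, it does approximately
    recptr = BufferGetLSN(buf);
    XLogFlush(recptr);      /* WAL must hit disk before the data page */
so every dirty-page eviction from the tiny ring can turn into a WAL flush,
and hence an fsync of the WAL, unless someone else has already flushed that
far.)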
It was also discussed in 2012-2013:
/messages/by-id/CA+TgmoZ3BOips7ot0tnSPO0yhKB3RUShDFoiYruoYXZDPr=ptQ@mail.gmail.com
No decision was made, unfortunately.
If some fear that increasing the vacuum ring buffer will decrease
transaction performance, then why has it never been exhaustively tested?
I have no evidence that transactions suffer from autovacuum improved in this
way. Perhaps my testing was not precise enough.
People, let's collect more results! Please test this one-line change with a
load of your choice and share the results!
If improving autovacuum doesn't hurt performance much, then why should we
live with bloated databases?
If such an improvement really does hurt performance, then that should be
documented in messages on pgsql-hackers, and a comment should be put into
src/backend/storage/buffer/freelist.c .
And possibly the community will decide that it should be a GUC variable:
- if one prefers to keep the database unbloated, one can increase the
vacuum ring buffer,
- otherwise just leave it at the "safe-but-slow" default.
With regards,
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
On Thu, Jul 20, 2017 at 1:08 PM, Claudio Freire <klaussfreire@gmail.com> wrote:
So, the 64x increase may be justifiable in absolute terms: it's not
unlikely that a 16MB buffer will be evicted from the OS cache before
vacuum is done with it, even in heavily throttled vacuums.
Sorry, that should read "It's not *likely* that a 16MB buffer will be evicted"
On 2017-07-20 19:04, Tom Lane wrote:
> At the time we set the ring buffer size to 256K, the maximum shared_buffers
> that initdb would configure was 32MB [...] it just seems like a rather large
> fraction of 128MB to give to a background process, especially to each of
> several background processes.
> Maybe the fraction-of-shared-buffers shouldn't be one size fits all,
> but a different limit for each case?
It could be 'min(shared_buffers/8/autovacuum_workers, 16MB)'.
It is quite rare that people leave shared_buffers at 128MB, and those people
don't really care about other settings, so ~5MB will be enough for their
autovacuum ring buffer.
People who do care about tuning their PostgreSQL will increase
shared_buffers, and autovacuum will get its full 16MB ring buffer.
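With the default settings that works out roughly as follows (assuming the
divisor is autovacuum_max_workers = 3):
    shared_buffers = 128MB:  min(128MB / 8 / 3, 16MB) = min(~5.3MB, 16MB) ~= 5MB
    shared_buffers = 1GB:    min(1GB / 8 / 3, 16MB)   = min(~42.7MB, 16MB) = 16MB
The exact constants are of course debatable.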
With regards,
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
On Thu, Jul 20, 2017 at 7:59 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> It's no secret that making the ring buffer larger will improve
> performance -- in fact, not having a ring buffer at all would improve
> performance even more. But it would also increase the likelihood that
> the background work of vacuum would impact the performance of
> foreground operations, which is already a pretty serious problem that
> we probably don't want to make worse.
But having a very fast sequence of fdatasync calls is not terribly friendly
to the performance of the foreground operations, either.
I think the reason we didn't do it this way originally is tied to the same
reason that autovacuum_vacuum_cost_delay = 20ms by default. If we want it
to be heavily throttled, there isn't much point in using a larger ring
buffer. It is just wasted space. Maybe we could have it start out at
BAS_VACUUM's default size, then grow by one buffer every time it had to
issue a WAL sync, until it reached BAS_BULKWRITE's size where it would max
out.
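A very rough sketch of that adaptive scheme, purely illustrative (the growth
hook and the max_ring_size field are invented for the example; freelist.c
has neither today):

    /* in BufferAccessStrategyData: current ring size plus a ceiling to grow to */
    int         ring_size;      /* starts at 256kB worth of buffers */
    int         max_ring_size;  /* capped at 16MB worth of buffers */

    /* called whenever evicting a dirty buffer forced the backend to XLogFlush() */
    static void
    StrategyNoteWALFlush(BufferAccessStrategy strategy)
    {
        if (strategy->ring_size < strategy->max_ring_size)
            strategy->ring_size++;   /* grow by one buffer per forced WAL sync */
    }

The awkward part is that GetAccessStrategy() currently allocates the buffers
array for the final ring size up front, so growing later would mean either
allocating max_ring_size slots from the start or repalloc'ing the strategy.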
Cheers,
Jeff
On Thu, Jul 20, 2017 at 12:16 PM, Sokolov Yura
<funny.falcon@postgrespro.ru> wrote:
> So, from my point of view, no one ever actually evaluated the performance
> of an increased ring buffer for vacuum.
I think that argument is clearly incorrect. In commit
6382448cf96a9b88d418cbaf86027b63f465b5d8, which you cited, Tom even
added a note in the README file about why he didn't increase the value
for vacuum also. He knew it would have increased performance had he
also done it for BAS_VACUUM, and I knew it too, but it wasn't clear
that it was a good idea, and it's still not.
> But in fact, the vacuum process performs fsync! [...] With a larger ring
> buffer, there is a greater chance that the fsync has already been performed
> by the walwriter process or by another backend.
OK, but I have helped *many* customers whose problem was that vacuum
ran too fast and blew data out of the OS cache causing query response
times to go through the roof. That's a common problem. Making VACUUM
run faster will presumably make it more common. I've also run into
many customers whose problem was that vacuum ran too slowly, and generally
raising vacuum_cost_limit fixes that problem just fine. So I don't
think it's nearly as clear as you do that making VACUUM run faster is
desirable.
> If some fear that increasing the vacuum ring buffer will decrease
> transaction performance, then why has it never been exhaustively tested?
If you want something changed, it's your job to do that testing.
Asking why nobody else tested the effects of changing the thing you
want changed is like asking why nobody else wrote the patch you want
written.
> And possibly the community will decide that it should be a GUC variable:
> - if one prefers to keep the database unbloated, one can increase the
>   vacuum ring buffer,
> - otherwise just leave it at the "safe-but-slow" default.
That's a possible outcome, but I don't think this discussion is really
going anywhere unless you are willing to admit that increasing VACUUM
performance could have some downsides. If you're not willing to admit
that, there's not a lot to talk about here.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert,
* Robert Haas (robertmhaas@gmail.com) wrote:
> OK, but I have helped *many* customers whose problem was that vacuum
> ran too fast and blew data out of the OS cache causing query response
> times to go through the roof. [...] So I don't think it's nearly as clear
> as you do that making VACUUM run faster is desirable.
I agree that it's a common problem for VACUUM to go too fast, or for
VACUUM to go too slow, but that's really what the vacuum_cost_limit
mechanism is for.
I can see an argument that existing tuned systems which have been
expecting the small ring-buffer to help slow down VACUUM may have to be
adjusted to handle a change, though I would think that other changes
we've made might also require changes to vacuum costing, so I'm not sure
that this is particularly different in that regard.
What I don't agree with is holding off on improving VACUUM in the case
where cost delay is set to zero because we think people might be
depending on it only going so fast in that case. If the cost delay is
set to zero, then VACUUM really should be going as fast as it can and we
should welcome improvements in that area in much the same way that we
welcome performance improvements in sorting and other backend
algorithms.
> If you want something changed, it's your job to do that testing.
> Asking why nobody else tested the effects of changing the thing you
> want changed is like asking why nobody else wrote the patch you want
> written.
I do agree with this. Asking for others to also test is fine, but it's
the patch submitter who needs to ensure that said testing actually
happens and that results are provided to -hackers to support the change.
In particular, multiple different scenarios (DB all in shared_buffers,
DB all in OS cache, DB not able to fit in memory at all, etc) should be
tested.
> That's a possible outcome, but I don't think this discussion is really
> going anywhere unless you are willing to admit that increasing VACUUM
> performance could have some downsides.
I'd rather we encourage people to use the existing knobs for tuning
VACUUM speed rather than adding another one that ends up being actually
only a proxy for speed. If there's a memory utilization concern here,
then having a knob for that might make sense, but it sounds like the
concern here is more about the speed and less about coming up with a
reasonable way to scale the size of the ring buffer.
Of course, I'm all for coming up with a good way to size the ring
buffer, and providing a knob if we aren't able to do so, I just don't
want to add unnecessary knobs if we don't need them.
Thanks!
Stephen
On Thu, Jul 20, 2017 at 3:04 PM, Stephen Frost <sfrost@snowman.net> wrote:
> I agree that it's a common problem for VACUUM to go too fast, or for
> VACUUM to go too slow, but that's really what the vacuum_cost_limit
> mechanism is for.
I think that's a valid point. There are also other concerns here -
e.g. whether instead of adopting the patch as proposed we ought to (a)
use some smaller size, or (b) keep the size as-is but reduce the
maximum fraction of shared_buffers that can be consumed, or (c) divide
the ring buffer size through by autovacuum_max_workers. Personally,
of those approaches, I favor (b). I think a 16MB ring buffer is
probably just fine if you've got 8GB of shared_buffers but I'm
skeptical about it when you've got 128MB of shared_buffers.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
* Robert Haas (robertmhaas@gmail.com) wrote:
> I think that's a valid point. There are also other concerns here -
> e.g. whether instead of adopting the patch as proposed we ought to (a)
> use some smaller size, or (b) keep the size as-is but reduce the
> maximum fraction of shared_buffers that can be consumed, or (c) divide
> the ring buffer size through by autovacuum_max_workers. Personally,
> of those approaches, I favor (b).
Right, agreed on that and that (b) looks to be a good option there.
Thanks!
Stephen
Robert Haas <robertmhaas@gmail.com> writes:
> I think that's a valid point. There are also other concerns here -
> e.g. whether instead of adopting the patch as proposed we ought to (a)
> use some smaller size, or (b) keep the size as-is but reduce the
> maximum fraction of shared_buffers that can be consumed, or (c) divide
> the ring buffer size through by autovacuum_max_workers. Personally,
> of those approaches, I favor (b). I think a 16MB ring buffer is
> probably just fine if you've got 8GB of shared_buffers but I'm
> skeptical about it when you've got 128MB of shared_buffers.
WFM. I agree with *not* dividing the basic ring buffer size by
autovacuum_max_workers. If you have allocated more AV workers, I think
you expect AV to go faster, not for the workers to start fighting among
themselves.
It might, however, be reasonable for the fraction-of-shared-buffers
limitation to have something to do with autovacuum_max_workers, so that
you can't squeeze yourself out of shared_buffers if you set that number
really high. IOW, I think the upthread suggestion of
min(shared_buffers/8/autovacuum_workers, 16MB) is basically the right
idea, though we could debate the exact constants.
regards, tom lane
On 2017-07-20 20:59, Robert Haas wrote:
> If you want something changed, it's your job to do that testing.
I had been testing for two weeks before I wrote to pgsql-hackers, and I
gave some high-level results in the first letter. I haven't noticed any
transaction slowdown from the increased vacuum ring buffer.
But:
- my workload could be too synthetic,
- I measured total transactions performed, and sampled the time of random
ones.
So my measurements were probably not exhaustive, and I certainly cannot
imagine the whole set of interesting workloads. Given that every test run
takes at least 5 hours (in fact, the run on master takes 20 hours, because
autovacuum doesn't finish faster on that synthetic workload), I would spend
at least half a year testing all possible workloads.
That is why I asked the community to test it on workloads people consider
interesting. I can run measurements myself, if someone tells me which
workload they want tested.
Two previous discussions of this topic died without any evidence from
testing at all, only theoretical doubts. Is that fair? Why is "it is
probably bad" better than "it is probably good"?
You are one of the project's leaders. I know it is not your job to test
every tiny change a schoolboy proposes. But there are a lot of people here
who wait for your word. Instead of cooling the rush and closing discussions,
you could just say: "please, someone test it with that particular workload".
> I don't think this discussion is really going anywhere unless you are
> willing to admit that increasing VACUUM performance could have some
> downsides. If you're not willing to admit that, there's not a lot to
> talk about here.
I can admit many things. I've seen autovacuum drop pgbench performance
from 100000tps down to 1500tps because of contention on CLogControlLock
(by the way, my LWLock patch improves that to 3000tps).
But that is not a reason for autovacuum to be intentionally slow. As
Stephen Frost said, that is what vacuum_cost_limit and vacuum_cost_delay
are for (and, certainly, it is a reason to improve CLog and SLRU).
> OK, but I have helped *many* customers whose problem was that vacuum
> ran too fast and blew data out of the OS cache causing query response
> times to go through the roof.
When there is no garbage, increasing the autovacuum ring buffer changes
almost nothing. When there is garbage, the current small ring buffer leads
to a storm of fsyncs. Frequent fsyncs slow an HDD down a lot, and then the
HDD isn't capable of satisfying queries and refilling the OS cache. Will you
admit that?
> I've also run into many customers whose problem was that vacuum ran too
> slowly, and generally raising vacuum_cost_limit fixes that problem just
> fine.
Probably with an increased ring buffer there is no need to raise
vacuum_cost_limit. Will you admit that?
With regards,
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
On 2017-07-20 22:51, Tom Lane wrote:
> It might, however, be reasonable for the fraction-of-shared-buffers
> limitation to have something to do with autovacuum_max_workers, so that
> you can't squeeze yourself out of shared_buffers if you set that number
> really high. IOW, I think the upthread suggestion of
> min(shared_buffers/8/autovacuum_workers, 16MB) is basically the right
> idea, though we could debate the exact constants.
Attached version is with min(shared_buffers/8/autovacuum_workers, 16MB).
With regards
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
Attachments:
0001-Set-total-vacuum-ring-buffer-16MB-v2.patch (text/x-diff)
From 8ebd5e7eb498fdc75fc7b724ace1f6de8fbcf3fd Mon Sep 17 00:00:00 2001
From: Sokolov Yura <funny.falcon@postgrespro.ru>
Date: Tue, 18 Jul 2017 12:33:33 +0300
Subject: [PATCH] Set total vacuum ring buffer 16MB
Vacuum suffers a lot from small ring buffer in a way bulk writer
suffered before Tom Lane's fix at 6382448cf96:
> the smaller size resulted in an undesirable decrease in bulk data
> loading speed, due to COPY processing frequently getting blocked
> for WAL flushing.
During the discussion it was decided to set it to
min(shared_buffers/8/autovacuum_max_workers, 16MB), so that many autovacuum
workers will not consume a significant part of shared buffers.
---
src/backend/storage/buffer/freelist.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/src/backend/storage/buffer/freelist.c b/src/backend/storage/buffer/freelist.c
index 9d8ae6ae8e..da83cd155b 100644
--- a/src/backend/storage/buffer/freelist.c
+++ b/src/backend/storage/buffer/freelist.c
@@ -16,6 +16,7 @@
#include "postgres.h"
#include "port/atomics.h"
+#include "postmaster/autovacuum.h"
#include "storage/buf_internals.h"
#include "storage/bufmgr.h"
#include "storage/proc.h"
@@ -526,6 +527,7 @@ GetAccessStrategy(BufferAccessStrategyType btype)
{
BufferAccessStrategy strategy;
int ring_size;
+ int n;
/*
* Select ring size to use. See buffer/README for rationales.
@@ -541,12 +543,15 @@ GetAccessStrategy(BufferAccessStrategyType btype)
case BAS_BULKREAD:
ring_size = 256 * 1024 / BLCKSZ;
+ n = 1;
break;
case BAS_BULKWRITE:
ring_size = 16 * 1024 * 1024 / BLCKSZ;
+ n = 1;
break;
case BAS_VACUUM:
- ring_size = 256 * 1024 / BLCKSZ;
+ ring_size = 16 * 1024 * 1024 / BLCKSZ;
+ n = autovacuum_max_workers;
break;
default:
@@ -556,7 +561,7 @@ GetAccessStrategy(BufferAccessStrategyType btype)
}
/* Make sure ring isn't an undue fraction of shared buffers */
- ring_size = Min(NBuffers / 8, ring_size);
+ ring_size = Min(NBuffers / 8 / n, ring_size);
/* Allocate the object and initialize all elements to zeroes */
strategy = (BufferAccessStrategy)
--
2.11.0
On Fri, Jul 21, 2017 at 4:19 AM, Sokolov Yura
<funny.falcon@postgrespro.ru> wrote:
> You are one of the project's leaders. I know it is not your job to test
> every tiny change a schoolboy proposes. But there are a lot of people here
> who wait for your word. Instead of cooling the rush and closing
> discussions, you could just say: "please, someone test it with that
> particular workload".
I had no intention of cooling rush and closing discussions. I was
trying to help you understand what points you needed to address in
order to have a chance of getting this committed. I feel like I came
into this discussion to try to help you make some progress on this
issue, and instead of appreciating that, you're making me the bad guy.
> When there is no garbage, increasing the autovacuum ring buffer changes
> almost nothing. When there is garbage, the current small ring buffer leads
> to a storm of fsyncs. Frequent fsyncs slow an HDD down a lot, and then the
> HDD isn't capable of satisfying queries and refilling the OS cache. Will
> you admit that?
I haven't tested it, but it sounds believable.
> Probably with an increased ring buffer there is no need to raise
> vacuum_cost_limit. Will you admit that?
No, I definitely won't admit that. With default settings autovacuum
won't write more than ~2.3MB/s if I remember the math correctly, so if
you've got a 1TB table you're probably going to need a bigger value.
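(For reference, one way to get a number in that ballpark from the default
cost settings -- vacuum_cost_limit = 200, autovacuum_vacuum_cost_delay =
20ms, vacuum_cost_page_miss = 10, vacuum_cost_page_dirty = 20 -- is to
assume every page is both a cache miss and dirtied:
    200 credits / 20ms   = 10000 credits/s
    10000 / (10 + 20)    ~= 333 pages/s
    333 pages/s * 8kB    ~= 2.7 MB/s
The exact figure depends on the assumed mix of hits, misses and dirtied
pages.)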
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 2017-07-21 19:32, Robert Haas wrote:
> No, I definitely won't admit that. With default settings autovacuum
> won't write more than ~2.3MB/s if I remember the math correctly, so if
> you've got a 1TB table you're probably going to need a bigger value.
I've seen the autovacuum process spending >50% of its time in fsync
(with the current ring buffer; note I used autovacuum_cost_delay=2ms).
An fsync can last up to a second on an HDD if there is concurrent IO;
even on an SSD, fsync can be really noticeable.
But I agree that for a 1TB table autovacuum_cost_limit should still be
increased, even with a larger ring buffer.
My friend noticed that I didn't say why I bother with autovacuum.
Our customers suffer from table bloat. I made a synthetic bloating test and
started experiments with modifying micro-vacuum and autovacuum. My first
attempts were to update the FSM early (both in micro-vacuum and autovacuum)
and to update it up to the root, not only at the lowest level.
Then I looked at strace output of the autovacuum process, and noticed a
storm of fsyncs. I captured backtraces with gdb breaking on fsync, and found
that evicting dirty pages from the small ring buffer is the reason.
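(To reproduce that observation, something along these lines should work --
the commands here are indicative, not the exact ones from my notes:
    strace -c -p <pid of autovacuum worker>        # summary: counts and time per syscall
    strace -e trace=fsync,fdatasync -p <pid>       # watch the WAL flushes as they happen
and a gdb breakpoint on XLogFlush or issue_xlog_fsync shows the
dirty-page-eviction call path.)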
After some experiments combining my "early FSM update" and the ring buffer
size, I understood that increasing the ring buffer gives most of the
benefit: autovacuum runs faster, and bloat is greatly reduced. In an extreme
case, a 400MB table bloats to 17GB on master, but only to 5GB with the
faster autovacuum.
I used custom scripts, and that is why my statistics are not complete.
Still, I didn't find any performance reduction; in fact, it looks like the
tests with the larger autovacuum ring did more queries per hour than the
tests against master.
I will run pgbench over the weekend, so latencies and percentiles will be
collected.
With regards,
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
On Fri, Jul 21, 2017 at 2:41 PM, Sokolov Yura
<funny.falcon@postgrespro.ru> wrote:
> My friend noticed that I didn't say why I bother with autovacuum.
> Our customers suffer from table bloat. I made a synthetic bloating test
> and started experiments with modifying micro-vacuum and autovacuum. My
> first attempts were to update the FSM early (both in micro-vacuum and
> autovacuum) and to update it up to the root, not only at the lowest level.
This FSM thing is probably not a bad idea as well.
We're forced to run regular manual vacuums because for some tables
autovacuum never seems to be enough, no matter how it's configured, mostly
because it gets canceled all the time. These are huge, high-churn tables, so
vacuuming them takes hours or days, and there's always someone with a
conflicting lock at some point that ends up canceling the autovacuum task.
The above paragraph triggered me to go check, and it seems in those
cases the FSM never gets vacuumed. That's probably not a good thing,
but I don't see how to vacuum the FSM after a cancel. So vacuuming the
FSM from time to time during long-running vacuums seems like a good
idea at this point.
Good day, Claudio
On 2017-07-22 00:27, Claudio Freire wrote:
> This FSM thing is probably not a bad idea as well. [...] So vacuuming the
> FSM from time to time during long-running vacuums seems like a good idea
> at this point.
The attached patch changes the FSM update: instead of updating only the
lowest level, it propagates any increase in free space up to the root.
It slows autovacuum down a bit, which is why I didn't propose it together
with the ring buffer increase.
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
Attachments:
0001-fsm-vacuum-write-increasing-of-free-space-on-upper-l.patch (text/x-diff)
From 60f76fc83ee8752362e037c1e19ed089d861e026 Mon Sep 17 00:00:00 2001
From: Sokolov Yura <funny.falcon@postgrespro.ru>
Date: Mon, 3 Jul 2017 15:14:07 +0300
Subject: [PATCH] fsm&vacuum: write increasing of free space on upper levels
Every RecordPageWithFreeSpace call now updates upper levels if the amount
of free space increased.
Also, do FreeSpaceMapVacuum after scanning the heap and before vacuuming
indexes.
---
src/backend/commands/vacuumlazy.c | 16 +++++-----
src/backend/storage/freespace/freespace.c | 49 ++++++++++++++++++++++++++-----
src/backend/storage/freespace/fsmpage.c | 4 ++-
src/include/storage/fsm_internals.h | 2 +-
4 files changed, 53 insertions(+), 18 deletions(-)
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index fc9c4f0fb1..a7fff0c5ae 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -595,7 +595,6 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
maxoff;
bool tupgone,
hastup;
- int prev_dead_count;
int nfrozen;
Size freespace;
bool all_visible_according_to_vm = false;
@@ -925,7 +924,6 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
has_dead_tuples = false;
nfrozen = 0;
hastup = false;
- prev_dead_count = vacrelstats->num_dead_tuples;
maxoff = PageGetMaxOffsetNumber(page);
/*
@@ -1245,16 +1243,16 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
vacrelstats->nonempty_pages = blkno + 1;
/*
- * If we remembered any tuples for deletion, then the page will be
- * visited again by lazy_vacuum_heap, which will compute and record
- * its post-compaction free space. If not, then we're done with this
- * page, so remember its free space as-is. (This path will always be
- * taken if there are no indexes.)
+ * heap_page_prune could free a bit of space. Let's record it
+ * immediately, even though it will be recorded again in lazy_vacuum_heap
+ * after more compaction.
*/
- if (vacrelstats->num_dead_tuples == prev_dead_count)
- RecordPageWithFreeSpace(onerel, blkno, freespace);
+ RecordPageWithFreeSpace(onerel, blkno, freespace);
}
+ /* fix up all tiny bits of freed space before vacuuming indices */
+ FreeSpaceMapVacuum(onerel);
+
/* report that everything is scanned and vacuumed */
pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_SCANNED, blkno);
diff --git a/src/backend/storage/freespace/freespace.c b/src/backend/storage/freespace/freespace.c
index 4648473523..ca0c356f28 100644
--- a/src/backend/storage/freespace/freespace.c
+++ b/src/backend/storage/freespace/freespace.c
@@ -107,6 +107,8 @@ static Size fsm_space_cat_to_avail(uint8 cat);
/* workhorse functions for various operations */
static int fsm_set_and_search(Relation rel, FSMAddress addr, uint16 slot,
uint8 newValue, uint8 minValue);
+static void fsm_set_recursive(Relation rel, FSMAddress addr, uint16 slot,
+ uint8 new_cat, bool only_increase);
static BlockNumber fsm_search(Relation rel, uint8 min_cat);
static uint8 fsm_vacuum_page(Relation rel, FSMAddress addr, bool *eof);
static BlockNumber fsm_get_lastblckno(Relation rel, FSMAddress addr);
@@ -173,9 +175,8 @@ RecordAndGetPageWithFreeSpace(Relation rel, BlockNumber oldPage,
/*
* RecordPageWithFreeSpace - update info about a page.
*
- * Note that if the new spaceAvail value is higher than the old value stored
- * in the FSM, the space might not become visible to searchers until the next
- * FreeSpaceMapVacuum call, which updates the upper level pages.
+ * It tries to update upper levels immediately, if the new spaceAvail is
+ * higher than the old value stored in the FSM.
*/
void
RecordPageWithFreeSpace(Relation rel, BlockNumber heapBlk, Size spaceAvail)
@@ -187,7 +188,7 @@ RecordPageWithFreeSpace(Relation rel, BlockNumber heapBlk, Size spaceAvail)
/* Get the location of the FSM byte representing the heap block */
addr = fsm_get_location(heapBlk, &slot);
- fsm_set_and_search(rel, addr, slot, new_cat, 0);
+ fsm_set_recursive(rel, addr, slot, new_cat, false);
}
/*
@@ -257,7 +258,7 @@ XLogRecordPageWithFreeSpace(RelFileNode rnode, BlockNumber heapBlk,
if (PageIsNew(page))
PageInit(page, BLCKSZ, 0);
- if (fsm_set_avail(page, slot, new_cat))
+ if (fsm_set_avail(page, slot, new_cat, false))
MarkBufferDirtyHint(buf, false);
UnlockReleaseBuffer(buf);
}
@@ -677,7 +678,7 @@ fsm_set_and_search(Relation rel, FSMAddress addr, uint16 slot,
page = BufferGetPage(buf);
- if (fsm_set_avail(page, slot, newValue))
+ if (fsm_set_avail(page, slot, newValue, false))
MarkBufferDirtyHint(buf, false);
if (minValue != 0)
@@ -693,6 +694,40 @@ fsm_set_and_search(Relation rel, FSMAddress addr, uint16 slot,
return newslot;
}
+
+/*
+ * Set value in given FSM page and slot. If maximum value of a FSM page
+ * increased, then update value on upper FSM page.
+ */
+static void
+fsm_set_recursive(Relation rel, FSMAddress addr, uint16 slot, uint8 new_cat, bool only_increase)
+{
+ Buffer buf;
+ Page page;
+ uint8 max_avail;
+ uint8 max_avail2;
+
+ buf = fsm_readbuf(rel, addr, true);
+ LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);
+
+ page = BufferGetPage(buf);
+
+ max_avail = fsm_get_max_avail(page);
+ if (fsm_set_avail(page, slot, new_cat, only_increase))
+ MarkBufferDirtyHint(buf, false);
+ max_avail2 = fsm_get_max_avail(page);
+
+ UnlockReleaseBuffer(buf);
+
+ if (max_avail < new_cat && addr.level != FSM_ROOT_LEVEL)
+ {
+ Assert(max_avail2 >= new_cat);
+
+ addr = fsm_get_parent(addr, &slot);
+ fsm_set_recursive(rel, addr, slot, max_avail2, true);
+ }
+}
+
/*
* Search the tree for a heap page with at least min_cat of free space
*/
@@ -828,7 +863,7 @@ fsm_vacuum_page(Relation rel, FSMAddress addr, bool *eof_p)
if (fsm_get_avail(page, slot) != child_avail)
{
LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);
- fsm_set_avail(BufferGetPage(buf), slot, child_avail);
+ fsm_set_avail(BufferGetPage(buf), slot, child_avail, false);
MarkBufferDirtyHint(buf, false);
LockBuffer(buf, BUFFER_LOCK_UNLOCK);
}
diff --git a/src/backend/storage/freespace/fsmpage.c b/src/backend/storage/freespace/fsmpage.c
index 987a2f5e53..97999adfad 100644
--- a/src/backend/storage/freespace/fsmpage.c
+++ b/src/backend/storage/freespace/fsmpage.c
@@ -60,7 +60,7 @@ rightneighbor(int x)
* The caller must hold an exclusive lock on the page.
*/
bool
-fsm_set_avail(Page page, int slot, uint8 value)
+fsm_set_avail(Page page, int slot, uint8 value, bool only_increasing)
{
int nodeno = NonLeafNodesPerPage + slot;
FSMPage fsmpage = (FSMPage) PageGetContents(page);
@@ -73,6 +73,8 @@ fsm_set_avail(Page page, int slot, uint8 value)
/* If the value hasn't changed, we don't need to do anything */
if (oldvalue == value && value <= fsmpage->fp_nodes[0])
return false;
+ if (only_increasing && oldvalue > value && value <= fsmpage->fp_nodes[0])
+ return false;
fsmpage->fp_nodes[nodeno] = value;
diff --git a/src/include/storage/fsm_internals.h b/src/include/storage/fsm_internals.h
index 4eb3fc12b1..12f4265c52 100644
--- a/src/include/storage/fsm_internals.h
+++ b/src/include/storage/fsm_internals.h
@@ -65,7 +65,7 @@ extern int fsm_search_avail(Buffer buf, uint8 min_cat, bool advancenext,
bool exclusive_lock_held);
extern uint8 fsm_get_avail(Page page, int slot);
extern uint8 fsm_get_max_avail(Page page);
-extern bool fsm_set_avail(Page page, int slot, uint8 value);
+extern bool fsm_set_avail(Page page, int slot, uint8 value, bool only_increasing);
extern bool fsm_truncate_avail(Page page, int nslots);
extern bool fsm_rebuild_page(Page page);
--
2.11.0
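For reference, a minimal sketch of what the one-byte new_cat category propagated by the patch above represents: the FSM stores free space per heap page as one of 256 buckets of BLCKSZ/256 bytes each. The constants below assume 8kB pages and are illustrative, not copied from the source tree.

/*
 * Sketch of how free space on a heap page maps to the 1-byte FSM
 * "category" (new_cat) that the patch above propagates upward.
 * Constants assume an 8kB BLCKSZ; simplified, not the exact source.
 */
#include <stdio.h>

#define BLCKSZ          8192
#define FSM_CATEGORIES  256
#define FSM_CAT_STEP    (BLCKSZ / FSM_CATEGORIES)   /* 32 bytes per step */

static unsigned
space_to_cat(unsigned avail)
{
    unsigned cat = avail / FSM_CAT_STEP;

    return (cat > FSM_CATEGORIES - 1) ? FSM_CATEGORIES - 1 : cat;
}

int
main(void)
{
    /* 5000 free bytes -> category 156; a parent FSM page only needs an
     * update when this byte grows past the maximum it already stores. */
    printf("cat(5000) = %u\n", space_to_cat(5000));
    return 0;
}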
On 2017-07-21 20:41, Sokolov Yura wrote:
On 2017-07-21 19:32, Robert Haas wrote:
On Fri, Jul 21, 2017 at 4:19 AM, Sokolov Yura
<funny.falcon@postgrespro.ru> wrote:

Probably with increased ring buffer there is no need in raising
vacuum_cost_limit. Will you admit it?

No, I definitely won't admit that. With default settings autovacuum
won't write more than ~2.3MB/s if I remember the math correctly, so if
you've got a 1TB table you're probably going to need a bigger value.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
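For context on where a figure of a few MB/s comes from, here is a rough back-of-the-envelope using the defaults of that era (vacuum_cost_limit = 200, autovacuum_vacuum_cost_delay = 20ms, vacuum_cost_page_miss = 10, vacuum_cost_page_dirty = 20). The exact number depends on the hit/miss/dirty mix, so this is only a ballpark sketch, not the precise calculation Robert had in mind.

/*
 * Ballpark autovacuum write-rate limit under the (assumed) old defaults.
 * Worst case considered: every page is read from disk and dirtied.
 */
#include <stdio.h>

int
main(void)
{
    const double cost_limit = 200.0;   /* vacuum_cost_limit */
    const double delay_s    = 0.020;   /* autovacuum_vacuum_cost_delay */
    const double page_miss  = 10.0;    /* vacuum_cost_page_miss */
    const double page_dirty = 20.0;    /* vacuum_cost_page_dirty */
    const double blcksz     = 8192.0;

    double pages_per_sec = (cost_limit / (page_miss + page_dirty)) / delay_s;

    /* Prints roughly 2.6 MB/s -- the same few-MB/s ballpark. */
    printf("~%.1f MB/s dirtied\n", pages_per_sec * blcksz / (1024.0 * 1024.0));
    return 0;
}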
I've seen the autovacuum process spending >50% of its time in fsync
(with the current ring buffer) (but I used autovacuum_cost_delay=2ms).
An fsync can last up to a second on HDD if there is concurrent IO.
Even on SSD an fsync can be really noticeable.

But I agree that for a 1TB table autovacuum_cost_limit should still
be increased, even with a larger ring buffer.

My friend noticed that I didn't say why I bother with autovacuum.
Our customers suffer from table bloating. I made a synthetic
bloating test and started experiments with modifying micro- and
auto-vacuum. My first attempts were to update the FSM early (both in
micro- and autovacuum) and to update it up to the root, not only the
lowest level.

Then I looked at an strace of the autovacuum process and noticed a storm
of fsync calls. I caught backtraces with gdb rooted at fsync, and
found that evicting dirty pages from the small ring buffer is the
reason.

After some experiments with combining my "early fsm update" and the
size of the ring buffer, I understood that increasing the ring buffer
gives most of the benefit: autovacuum runs faster, and bloating is
greatly reduced. In an extreme case, a 400MB table bloats to 17GB
on master, but only to 5GB with the faster autovacuum.

I used custom scripts, which is why my statistics are not complete.
Still, I didn't find any performance reduction. In fact, it looks
like the tests with the "larger autovacuum ring" did more queries per hour
than the tests against master.

I will run pgbench over the weekend, so latencies and percentiles
will be collected.

With regards,
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
The default pgbench script wasn't able to trigger autovacuum of the
pgbench_accounts table in 8 hours (scale 400, 40 clients, 900 tps
average), so the weekend testing was not useful.
I will re-run with a custom script for the next day or two.
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On Mon, Jul 24, 2017 at 6:37 AM, Sokolov Yura
<funny.falcon@postgrespro.ru> wrote:
Good day, Claudio
On 2017-07-22 00:27, Claudio Freire wrote:
On Fri, Jul 21, 2017 at 2:41 PM, Sokolov Yura
<funny.falcon@postgrespro.ru> wrote:

My friend noticed that I didn't say why I bother with autovacuum.
Our customers suffer from table bloating. I made a synthetic
bloating test and started experiments with modifying micro- and
auto-vacuum. My first attempts were to update the FSM early (both in
micro- and autovacuum) and to update it up to the root, not only the
lowest level.

This FSM thing is probably not a bad idea as well.

We're forced to run regular manual vacuums because for some tables
autovacuum never seems to be enough, no matter how it's configured,
mostly because it gets canceled all the time. These are high-churn,
huge tables, so vacuuming them takes hours or days, and there's always
someone with a conflicting lock at some point that ends up canceling
the autovacuum task.

The above paragraph triggered me to go check, and it seems in those
cases the FSM never gets vacuumed. That's probably not a good thing,
but I don't see how to vacuum the FSM after a cancel. So vacuuming the
FSM from time to time during long-running vacuums seems like a good
idea at this point.

Attached patch changes the fsm update: instead of updating only the lowest
level, it propagates the space increase up to the root.

It slows autovacuum a bit, which is why I didn't propose it together with
the ring buffer increase.
I was mostly thinking about something like the attached patch.
Simple, unintrusive, and shouldn't cause any noticeable slowdown.
Attachments:
0001-Vacuum-FSM-after-each-index-pass.patchtext/x-patch; charset=US-ASCII; name=0001-Vacuum-FSM-after-each-index-pass.patchDownload
From 5da264507058175e614f6ce7c77d2bd0491b1416 Mon Sep 17 00:00:00 2001
From: Claudio Freire <klaussfreire@gmail.com>
Date: Mon, 24 Jul 2017 13:09:10 -0300
Subject: [PATCH] Vacuum FSM after each index pass
This prevents concurrent writes from accumulating bloat due to
recently freed space not yet being visible in the FSM. When vacuum
can run for hours or days, this can make a huge difference.
---
src/backend/commands/vacuumlazy.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index fabb2f8d52..4d8d90e833 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -735,6 +735,9 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
/* Remove tuples from heap */
lazy_vacuum_heap(onerel, vacrelstats);
+ /* Vacuum the Free Space Map */
+ FreeSpaceMapVacuum(onerel);
+
/*
* Forget the now-vacuumed tuples, and press on, but be careful
* not to reset latestRemovedXid since we want that value to be
--
2.12.0
On 2017-07-24 19:11, Claudio Freire wrote:
On Mon, Jul 24, 2017 at 6:37 AM, Sokolov Yura
<funny.falcon@postgrespro.ru> wrote:

Good day, Claudio

On 2017-07-22 00:27, Claudio Freire wrote:
On Fri, Jul 21, 2017 at 2:41 PM, Sokolov Yura
<funny.falcon@postgrespro.ru> wrote:

My friend noticed that I didn't say why I bother with autovacuum.
Our customers suffer from table bloating. I made a synthetic
bloating test and started experiments with modifying micro- and
auto-vacuum. My first attempts were to update the FSM early (both in
micro- and autovacuum) and to update it up to the root, not only the
lowest level.

This FSM thing is probably not a bad idea as well.

We're forced to run regular manual vacuums because for some tables
autovacuum never seems to be enough, no matter how it's configured,
mostly because it gets canceled all the time. These are high-churn,
huge tables, so vacuuming them takes hours or days, and there's always
someone with a conflicting lock at some point that ends up canceling
the autovacuum task.

The above paragraph triggered me to go check, and it seems in those
cases the FSM never gets vacuumed. That's probably not a good thing,
but I don't see how to vacuum the FSM after a cancel. So vacuuming the
FSM from time to time during long-running vacuums seems like a good
idea at this point.

Attached patch changes the fsm update: instead of updating only the lowest
level, it propagates the space increase up to the root.

It slows autovacuum a bit, which is why I didn't propose it together with
the ring buffer increase.

I was mostly thinking about something like the attached patch.
Simple, unintrusive, and shouldn't cause any noticeable slowdown.
Your change is small, clear, and currently useful for huge tables under
high update load (until "allowing vacuum to use more than 1GB memory"
is merged).

But it still delays updating the FSM until the whole first batch of dead
tuples is cleared (i.e. all indices scanned and all heap pages cleaned),
and on such a huge table that will take hours.
On the other hand, if "dead" tuples consume all the usable item pointers
(MaxHeapTuplesPerPage ~ 290 on an 8k page), then space that actually exists
on a page cannot be used until the "dead" tuples are converted into
"unused" tuples.
With my patch I've seen that writing the FSM before dead tuples are cleared
helps only a little: bloating is slowed somewhat by this change, but it
stops only after the final cleanup of dead tuples.
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On Thu, Jul 20, 2017 at 12:51 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Robert Haas <robertmhaas@gmail.com> writes:
I think that's a valid point. There are also other concerns here -
e.g. whether instead of adopting the patch as proposed we ought to (a)
use some smaller size, or (b) keep the size as-is but reduce the
maximum fraction of shared_buffers that can be consumed, or (c) divide
the ring buffer size through by autovacuum_max_workers. Personally,
of those approaches, I favor (b). I think a 16MB ring buffer is
probably just fine if you've got 8GB of shared_buffers but I'm
skeptical about it when you've got 128MB of shared_buffers.

WFM. I agree with *not* dividing the basic ring buffer size by
autovacuum_max_workers. If you have allocated more AV workers, I think
you expect AV to go faster, not for the workers to start fighting among
themselves.
But fighting among themselves is just what they do regarding the
autovacuum_vacuum_cost_limit, so I don't see why it should be one way there
but different here. The reason for setting autovacuum_max_workers to N is
so that small tables aren't completely starved of vacuuming even if N-1
larger tables are already being vacuumed simultaneously. Now the small
tables get vacuumed at speed 1/N, which kind of sucks, but that is the
mechanism we currently have.
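To put concrete numbers on the 1/N point, here is a small sketch that assumes the cost budget is balanced evenly across active workers with equal per-table settings (a simplification of the actual balancing logic):

/*
 * Sketch of autovacuum cost-budget sharing: with equal settings, N
 * active workers each effectively get ~1/N of vacuum_cost_limit.
 * Simplified illustration, not the real balancing code.
 */
#include <stdio.h>

int
main(void)
{
    const double cost_limit = 200.0;    /* vacuum_cost_limit default */
    int          workers;

    for (workers = 1; workers <= 3; workers++)
        printf("%d active worker(s): effective cost limit per worker ~ %.0f\n",
               workers, cost_limit / workers);
    return 0;
}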
Of course just because we are in a hole with vacuum_cost_limit doesn't mean
we should dig ourselves deeper, but we are being inconsistent then.
Cheers,
Jeff
On Mon, Jul 24, 2017 at 2:10 PM, Sokolov Yura
<funny.falcon@postgrespro.ru> wrote:
On 2017-07-24 19:11, Claudio Freire wrote:
On Mon, Jul 24, 2017 at 6:37 AM, Sokolov Yura
<funny.falcon@postgrespro.ru> wrote:

Good day, Claudio

On 2017-07-22 00:27, Claudio Freire wrote:
On Fri, Jul 21, 2017 at 2:41 PM, Sokolov Yura
<funny.falcon@postgrespro.ru> wrote:

My friend noticed that I didn't say why I bother with autovacuum.
Our customers suffer from table bloating. I made a synthetic
bloating test and started experiments with modifying micro- and
auto-vacuum. My first attempts were to update the FSM early (both in
micro- and autovacuum) and to update it up to the root, not only the
lowest level.

This FSM thing is probably not a bad idea as well.

We're forced to run regular manual vacuums because for some tables
autovacuum never seems to be enough, no matter how it's configured,
mostly because it gets canceled all the time. These are high-churn,
huge tables, so vacuuming them takes hours or days, and there's always
someone with a conflicting lock at some point that ends up canceling
the autovacuum task.

The above paragraph triggered me to go check, and it seems in those
cases the FSM never gets vacuumed. That's probably not a good thing,
but I don't see how to vacuum the FSM after a cancel. So vacuuming the
FSM from time to time during long-running vacuums seems like a good
idea at this point.

Attached patch changes the fsm update: instead of updating only the lowest
level, it propagates the space increase up to the root.

It slows autovacuum a bit, which is why I didn't propose it together with
the ring buffer increase.

I was mostly thinking about something like the attached patch.
Simple, unintrusive, and shouldn't cause any noticeable slowdown.
Your change is small, clear, and currently useful for huge tables under
high update load (until "allowing vacuum to use more than 1GB memory"
is merged).
In high-bloat conditions, it doesn't take long to accumulate 1GB of
dead tuples (which is about 178M tuples, btw).
The index scan takes way longer than the heap scan in that case.
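As a quick check of the 178M figure, assuming lazy vacuum remembers each dead tuple as a 6-byte ItemPointerData (TID):

/*
 * 1GB of maintenance_work_mem holds roughly this many dead-tuple TIDs,
 * assuming 6 bytes per ItemPointerData.
 */
#include <stdio.h>

int
main(void)
{
    const long long one_gb   = 1024LL * 1024 * 1024;
    const int       tid_size = 6;   /* sizeof(ItemPointerData) */

    /* Prints 178956970, i.e. ~178M dead tuples per GB. */
    printf("%lld tuples\n", one_gb / tid_size);
    return 0;
}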
But it still delays updating the FSM until the whole first batch of dead
tuples is cleared (i.e. all indices scanned and all heap pages cleaned),
and on such a huge table that will take hours.
So, true, it will get delayed considerably. But as you realized,
there's not much point in trying to vacuum the FSM sooner, since it
won't be accurate shortly afterwards anyway. Dead line pointers do use
up a fair bit of space, especially on narrow tables.
In a particular table I have that exhibits this problem, most of the
time is spent scanning the index. It performs dozens of index scans
before it's done, so it would vacuum the FSM quite often enough, even
if I were to increase the mwm setting n-fold.
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On Mon, Jul 24, 2017 at 2:20 PM, Claudio Freire <klaussfreire@gmail.com> wrote:
On Mon, Jul 24, 2017 at 2:10 PM, Sokolov Yura
<funny.falcon@postgrespro.ru> wrote:

On 2017-07-24 19:11, Claudio Freire wrote:

I was mostly thinking about something like the attached patch.
Simple, unintrusive, and shouldn't cause any noticeable slowdown.

Your change is small, clear, and currently useful for huge tables under
high update load (until "allowing vacuum to use more than 1GB memory"
is merged).

In high-bloat conditions, it doesn't take long to accumulate 1GB of
dead tuples (which is about 178M tuples, btw).

The index scan takes way longer than the heap scan in that case.

But it still delays updating the FSM until the whole first batch of dead
tuples is cleared (i.e. all indices scanned and all heap pages cleaned),
and on such a huge table that will take hours.

So, true, it will get delayed considerably. But as you realized,
there's not much point in trying to vacuum the FSM sooner, since it
won't be accurate shortly afterwards anyway. Dead line pointers do use
up a fair bit of space, especially on narrow tables.

In a particular table I have that exhibits this problem, most of the
time is spent scanning the index. It performs dozens of index scans
before it's done, so it would vacuum the FSM quite often enough, even
if I were to increase the mwm setting n-fold.
I hate to reply to myself, but I wanted to add: in any case, the case
I'm trying to avoid is the case where the FSM *never* gets vacuumed.
That's bad. But it may not be the phenomenon you're experiencing in
your tests.
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On 2017-07-24 12:41, Sokolov Yura wrote:
On 2017-07-21 20:41, Sokolov Yura wrote:
On 2017-07-21 19:32, Robert Haas wrote:
On Fri, Jul 21, 2017 at 4:19 AM, Sokolov Yura
<funny.falcon@postgrespro.ru> wrote:

Probably with increased ring buffer there is no need in raising
vacuum_cost_limit. Will you admit it?

No, I definitely won't admit that. With default settings autovacuum
won't write more than ~2.3MB/s if I remember the math correctly, so if
you've got a 1TB table you're probably going to need a bigger value.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

I've seen the autovacuum process spending >50% of its time in fsync
(with the current ring buffer) (but I used autovacuum_cost_delay=2ms).
An fsync can last up to a second on HDD if there is concurrent IO.
Even on SSD an fsync can be really noticeable.

But I agree that for a 1TB table autovacuum_cost_limit should still
be increased, even with a larger ring buffer.

My friend noticed that I didn't say why I bother with autovacuum.
Our customers suffer from table bloating. I made a synthetic
bloating test and started experiments with modifying micro- and
auto-vacuum. My first attempts were to update the FSM early (both in
micro- and autovacuum) and to update it up to the root, not only the
lowest level.

Then I looked at an strace of the autovacuum process and noticed a storm
of fsync calls. I caught backtraces with gdb rooted at fsync, and
found that evicting dirty pages from the small ring buffer is the
reason.

After some experiments with combining my "early fsm update" and the
size of the ring buffer, I understood that increasing the ring buffer
gives most of the benefit: autovacuum runs faster, and bloating is
greatly reduced. In an extreme case, a 400MB table bloats to 17GB
on master, but only to 5GB with the faster autovacuum.

I used custom scripts, which is why my statistics are not complete.
Still, I didn't find any performance reduction. In fact, it looks
like the tests with the "larger autovacuum ring" did more queries per hour
than the tests against master.

I will run pgbench over the weekend, so latencies and percentiles
will be collected.

With regards,
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company

The default pgbench script wasn't able to trigger autovacuum of the
pgbench_accounts table in 8 hours (scale 400, 40 clients, 900 tps
average), so the weekend testing was not useful.

I will re-run with a custom script for the next day or two.
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
I've made three 8-hour runs each with master and with the 16MB ring.
Scale 400, 40 clients, query script:
\set aid1 random(1, 100000 * :scale)
\set aidd random(1, 3)
\set aid2 :aid1 + :aidd
\set aid3 :aid1 + 2 * :aidd
\set aid4 :aid1 + 3 * :aidd
\set aid5 :aid1 + 4 * :aidd
\set delta random(-5000, 5000)
update pgbench_accounts set abalance = abalance + :delta
where aid in (:aid1, :aid2, :aid3, :aid4, :aid5);
postgresql.conf:
max_connections = 300
shared_buffers = 2GB
work_mem = 128MB
maintenance_work_mem = 512MB
bgwriter_lru_maxpages = 10
bgwriter_flush_after = 2MB
backend_flush_after = 2MB
wal_compression = on
wal_buffers = 32MB
checkpoint_flush_after = 2MB
autovacuum = on
log_autovacuum_min_duration = 0
autovacuum_vacuum_scale_factor = 0.05
autovacuum_vacuum_cost_delay = 2ms
(I had to slow down the bgwriter (bgwriter_lru_maxpages = 10),
because otherwise everything was too slow. Maybe I did it wrong.)
(I'm open to all suggestions about postgresql.conf.)
I've tried to make a pretty log in testing5_pretty.tar.gz.
The files testing5_sum/test_master{,_ring16}_[123]/pretty.log contain
combined, prettified logs from postgresql and pgbench.
Some excerpts:
test_master_1/pretty.log
time activity tps latency stddev min max
10980 av 1364 29ms 76ms 6ms 1170ms
11010 av 430 90ms 253ms 7ms 2472ms
11040 245 170ms 485ms 7ms 2821ms
11070 487 81ms 238ms 6ms 2404ms
11100 av 360 112ms 261ms 7ms 2549ms
11130 av+ch 198 198ms 374ms 7ms 1956ms
11160 av+ch 248 163ms 401ms 7ms 2601ms
11190 av+ch 321 125ms 363ms 7ms 2722ms
11220 av+ch 1155 35ms 123ms 7ms 2668ms
11250 av+ch 1390 29ms 79ms 7ms 1422ms
11280 av 624 64ms 176ms 6ms 1922ms
11310 av 454 87ms 236ms 7ms 2481ms
11340 av 524 77ms 223ms 6ms 2383ms
11370 av 414 96ms 267ms 7ms 2853ms
11400 av 380 103ms 235ms 7ms 2298ms
11430 av+ch 239 168ms 344ms 7ms 2482ms
11460 av+ch 138 293ms 516ms 7ms 2438ms
11490 av+ch 231 175ms 494ms 7ms 3150ms
11520 av+ch 1133 35ms 112ms 7ms 2378ms
11550 av+ch 1391 29ms 64ms 6ms 957ms
test_master_ring16_1/pretty.log
time activity tps latency stddev min max
10710 498 82ms 249ms 7ms 2801ms
10740 408 99ms 271ms 7ms 2793ms
10770 399 99ms 284ms 7ms 3233ms
10800 279 142ms 347ms 7ms 2641ms
10830 ch 245 164ms 436ms 7ms 2618ms
10860 ch 462 86ms 246ms 7ms 2497ms
10890 ch 78 506ms 905ms 6ms 3198ms
10920 ch 17 2407ms 217ms 1650ms 2678ms
10950 ch 652 64ms 272ms 7ms 2471ms
10980 av 976 41ms 126ms 6ms 2219ms
11010 av 379 104ms 257ms 7ms 2491ms
11040 av 381 107ms 274ms 7ms 2426ms
11070 av 325 123ms 294ms 6ms 2497ms
11100 av 226 173ms 387ms 7ms 1993ms
11130 av+ch 26 1575ms 635ms 101ms 2536ms
11160 av+ch 25 1552ms 648ms 58ms 2376ms
11190 av+ch 32 1275ms 726ms 16ms 2493ms
11220 av+ch 23 1584ms 674ms 48ms 2454ms
11250 av+ch 35 1235ms 777ms 22ms 3627ms
11280 av+ch 1301 30ms 145ms 6ms 2778ms
11310 av 903 46ms 125ms 7ms 2406ms
11340 av 395 100ms 291ms 7ms 2849ms
11370 av 377 103ms 255ms 7ms 2082ms
11400 av 340 114ms 309ms 7ms 3160ms
Archive testing5_all.tar.gz contains more raw logs.
My interpretation:
- autovacuum runs 3-4 times faster
  (2700 sec unpatched vs 800 sec patched),
- faster autovacuum alone is not a big problem:
  concurrent transactions are a bit slower, but not
  catastrophically slower,
- faster autovacuum with a checkpoint running simultaneously is a
  big problem.
Maybe the checkpoint process should affect autovacuum_cost?
With regards,
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
Attachments:
testing5_pretty.tar.gzapplication/x-gzip; name=testing5_pretty.tar.gzDownload