Auto-tuning work_mem and maintenance_work_mem

Started by Bruce Momjian over 12 years ago · 167 messages · pgsql-hackers
#1 Bruce Momjian
bruce@momjian.us

Josh Berkus suggested here that work_mem and maintenance_work_mem could
be auto-tuned like effective_cache_size:

/messages/by-id/50ECCF93.3060101@agliodbs.com

The attached patch implements this, closely matching the default values
for the default shared_buffers value:

test=> SHOW shared_buffers;
shared_buffers
----------------
128MB
(1 row)

test=> SHOW work_mem;
work_mem
----------
1310kB
(1 row)

test=> SHOW maintenance_work_mem;
maintenance_work_mem
----------------------
20971kB
(1 row)

Previous defaults were 1MB and 16MB, but the new defaults don't match
exactly because our max_connections default is a power of 10 (100), not a
power of 2 (128). Of course, if shared_buffers is 10x larger, those
defaults become 10x larger.
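The scaling implied by the SHOW output above can be reconstructed as follows. This is a sketch inferred from the displayed values only; the constants (1/max_connections for work_mem, 16/100 for maintenance_work_mem) are guesses, not the patch's actual code:

```python
# Hypothetical reconstruction of the auto-tuning arithmetic implied by
# the SHOW output above; the scaling constants are assumptions.

def autotuned_work_mem_kb(shared_buffers_kb, max_connections):
    # work_mem appears to be shared_buffers divided by max_connections
    return shared_buffers_kb // max_connections

def autotuned_maintenance_work_mem_kb(shared_buffers_kb):
    # maintenance_work_mem appears to scale as 16% of shared_buffers,
    # which reproduces the old 16MB default at the old sizing
    return shared_buffers_kb * 16 // 100

shared_buffers_kb = 128 * 1024  # the default 128MB

print(autotuned_work_mem_kb(shared_buffers_kb, 100))         # 1310 (kB)
print(autotuned_maintenance_work_mem_kb(shared_buffers_kb))  # 20971 (kB)
```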

FYI, I based maintenance_work_mem's default on shared_buffers, not on
work_mem because it was too complex to change maintenance_work_mem when
someone changes work_mem.

I will work on auto-tuning temp_buffers next. Any other suggestions?
wal_buffers is already auto-tuned.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ Everyone has their own god. +

Attachments:

work_mem.diff (text/x-diff; charset=us-ascii), +116 -24
#2 Andres Freund
andres@anarazel.de
In reply to: Bruce Momjian (#1)
Re: Auto-tuning work_mem and maintenance_work_mem

On 2013-10-09 10:30:46 -0400, Bruce Momjian wrote:

Josh Berkus suggested here that work_mem and maintenance_work_mem could
be auto-tuned like effective_cache_size:

/messages/by-id/50ECCF93.3060101@agliodbs.com

The attached patch implements this, closely matching the default values
for the default shared_buffers value:

There imo is no correlation between correct values for shared_buffers
and work_mem at all. They really are much more workload dependent than
anything.

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#3 Bruce Momjian
bruce@momjian.us
In reply to: Andres Freund (#2)
Re: Auto-tuning work_mem and maintenance_work_mem

On Wed, Oct 9, 2013 at 04:32:44PM +0200, Andres Freund wrote:

On 2013-10-09 10:30:46 -0400, Bruce Momjian wrote:

Josh Berkus suggested here that work_mem and maintenance_work_mem could
be auto-tuned like effective_cache_size:

/messages/by-id/50ECCF93.3060101@agliodbs.com

The attached patch implements this, closely matching the default values
for the default shared_buffers value:

There imo is no correlation between correct values for shared_buffers
and work_mem at all. They really are much more workload dependant than
anything.

Well, that is true, but the more shared_buffers you allocate, the more
work_mem you _probably_ want to use. This is only a change of the
default.

Effectively, if every session uses one full work_mem, you end up with
total work_mem usage equal to shared_buffers.

We can try a different algorithm to scale up work_mem, but it seems wise
to auto-scale it up to some extent based on shared_buffers.


#4 Andres Freund
andres@anarazel.de
In reply to: Bruce Momjian (#3)
Re: Auto-tuning work_mem and maintenance_work_mem

On 2013-10-09 10:35:28 -0400, Bruce Momjian wrote:

On Wed, Oct 9, 2013 at 04:32:44PM +0200, Andres Freund wrote:

On 2013-10-09 10:30:46 -0400, Bruce Momjian wrote:

Josh Berkus suggested here that work_mem and maintenance_work_mem could
be auto-tuned like effective_cache_size:

/messages/by-id/50ECCF93.3060101@agliodbs.com

The attached patch implements this, closely matching the default values
for the default shared_buffers value:

There imo is no correlation between correct values for shared_buffers
and work_mem at all. They really are much more workload dependant than
anything.

Well, that is true, but the more shared_buffers you allocate, the more
work_mem you _probably_ want to use. This is only a change of the
default.

Not at all. There's lots of OLTP workloads where huge shared buffers are
beneficial but you definitely don't want a huge work_mem.

Effectively, if every session uses one full work_mem, you end up with
total work_mem usage equal to shared_buffers.

But that's not how work_mem works. It's limiting memory, per node in the
query. So a complex query can use it several dozen times.
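The per-node point can be made concrete with some back-of-the-envelope arithmetic. The numbers here are illustrative assumptions, not figures from the thread:

```python
# Illustrative worst case for a per-node limit: every backend runs a
# complex query whose plan has many sort/hash nodes, each entitled to
# its own work_mem allocation. All numbers are hypothetical.

work_mem_mb = 4        # per sort/hash node, not per query
nodes_per_query = 12   # a complex plan can easily have this many
active_backends = 100

worst_case_mb = work_mem_mb * nodes_per_query * active_backends
print(worst_case_mb)   # 4800 MB, far above a typical shared_buffers
```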

Greetings,

Andres Freund


#5 Pavel Stehule
pavel.stehule@gmail.com
In reply to: Bruce Momjian (#3)
Re: Auto-tuning work_mem and maintenance_work_mem

2013/10/9 Bruce Momjian <bruce@momjian.us>

On Wed, Oct 9, 2013 at 04:32:44PM +0200, Andres Freund wrote:

On 2013-10-09 10:30:46 -0400, Bruce Momjian wrote:

Josh Berkus suggested here that work_mem and maintenance_work_mem could
be auto-tuned like effective_cache_size:

/messages/by-id/50ECCF93.3060101@agliodbs.com

The attached patch implements this, closely matching the default values
for the default shared_buffers value:

There imo is no correlation between correct values for shared_buffers
and work_mem at all. They really are much more workload dependant than
anything.

Well, that is true, but the more shared_buffers you allocate, the more
work_mem you _probably_ want to use. This is only a change of the
default.

Effectively, if every session uses one full work_mem, you end up with
total work_mem usage equal to shared_buffers.

We can try a different algorithm to scale up work_mem, but it seems wise
to auto-scale it up to some extent based on shared_buffers.

In my experience the optimal value of work_mem depends on data and load, so
I prefer to keep work_mem as an independent parameter.

maintenance_work_mem could depend on work_mem, roughly work_mem *
max_connections / 4.

Regards

Pavel


#6 Bruce Momjian
bruce@momjian.us
In reply to: Andres Freund (#4)
Re: Auto-tuning work_mem and maintenance_work_mem

On Wed, Oct 9, 2013 at 04:38:01PM +0200, Andres Freund wrote:

On 2013-10-09 10:35:28 -0400, Bruce Momjian wrote:

On Wed, Oct 9, 2013 at 04:32:44PM +0200, Andres Freund wrote:

On 2013-10-09 10:30:46 -0400, Bruce Momjian wrote:

Josh Berkus suggested here that work_mem and maintenance_work_mem could
be auto-tuned like effective_cache_size:

/messages/by-id/50ECCF93.3060101@agliodbs.com

The attached patch implements this, closely matching the default values
for the default shared_buffers value:

There imo is no correlation between correct values for shared_buffers
and work_mem at all. They really are much more workload dependant than
anything.

Well, that is true, but the more shared_buffers you allocate, the more
work_mem you _probably_ want to use. This is only a change of the
default.

Not at all. There's lots of OLTP workloads where huge shared buffers are
beneficial but you definitely don't want a huge work_mem.

Effectively, if every session uses one full work_mem, you end up with
total work_mem usage equal to shared_buffers.

But that's not how work_mem works. It's limiting memory, per node in the
query. So a complex query can use it several dozen times.

True, but again, odds are all sessions are not going to use the full
work_mem allocation, so I figured assuming each session uses one full
work_mem was probably an over-estimate.

You are saying that auto-tuning work_mem for typical workloads is not a
win? I don't understand how that can be true. You can always change
the default for atypical workloads.

As a data point, users often set shared_buffers to 2GB, but the default
work_mem of 1MB would mean you would use perhaps 100MB for all sorting
--- that seems kind of small as a default.


#7 Bruce Momjian
bruce@momjian.us
In reply to: Pavel Stehule (#5)
Re: Auto-tuning work_mem and maintenance_work_mem

On Wed, Oct 9, 2013 at 04:40:38PM +0200, Pavel Stehule wrote:

Effectively, if every session uses one full work_mem, you end up with
total work_mem usage equal to shared_buffers.

We can try a different algorithm to scale up work_mem, but it seems wise
to auto-scale it up to some extent based on shared_buffers.

In my experience a optimal value of work_mem depends on data and load, so I
prefer a work_mem as independent parameter.

But it still is an independent parameter. I am just changing the default.

maintenance_work_mem can depend on work_mem ~ work_mem * 1 * max_connection / 4

That is kind of hard to do because we would have to figure out if the
old maintenance_work_mem was set from a default computation or by the
user.


#8 Bruce Momjian
bruce@momjian.us
In reply to: Bruce Momjian (#7)
Re: Auto-tuning work_mem and maintenance_work_mem

On Wed, Oct 9, 2013 at 10:45:52AM -0400, Bruce Momjian wrote:

On Wed, Oct 9, 2013 at 04:40:38PM +0200, Pavel Stehule wrote:

Effectively, if every session uses one full work_mem, you end up with
total work_mem usage equal to shared_buffers.

We can try a different algorithm to scale up work_mem, but it seems wise
to auto-scale it up to some extent based on shared_buffers.

In my experience a optimal value of work_mem depends on data and load, so I
prefer a work_mem as independent parameter.

But it still is an independent parameter. I am just changing the default.

maintenance_work_mem can depend on work_mem ~ work_mem * 1 * max_connection / 4

That is kind of hard to do because we would have to figure out if the
old maintenance_work_mem was set from a default computation or by the
user.

FYI, this auto-tuning is not for us, who understand the parameters and
how they interact, but for the 90% of our users who would benefit from
better defaults. It is true that there might now be cases where you
would need to _reduce_ work_mem from its default, but I think the new
computed default will be better for most users.


#9 Pavel Stehule
pavel.stehule@gmail.com
In reply to: Bruce Momjian (#8)
Re: Auto-tuning work_mem and maintenance_work_mem

2013/10/9 Bruce Momjian <bruce@momjian.us>

On Wed, Oct 9, 2013 at 10:45:52AM -0400, Bruce Momjian wrote:

On Wed, Oct 9, 2013 at 04:40:38PM +0200, Pavel Stehule wrote:

Effectively, if every session uses one full work_mem, you end up with
total work_mem usage equal to shared_buffers.

We can try a different algorithm to scale up work_mem, but it seems wise
to auto-scale it up to some extent based on shared_buffers.

In my experience a optimal value of work_mem depends on data and load, so I
prefer a work_mem as independent parameter.

But it still is an independent parameter. I am just changing the default.

maintenance_work_mem can depend on work_mem ~ work_mem * 1 * max_connection / 4

That is kind of hard to do because we would have to figure out if the
old maintenance_work_mem was set from a default computation or by the
user.

FYI, this auto-tuning is not for us, who understand the parameters and
how they interact, but for the 90% of our users who would benefit from
better defaults. It is true that there might now be cases where you
would need to _reduce_ work_mem from its default, but I think the new
computed default will be better for most users.

Then we should use the amount of RAM dedicated to PG as the base, not
shared_buffers.

Pavel


#10 Andrew Dunstan
andrew@dunslane.net
In reply to: Bruce Momjian (#7)
Re: Auto-tuning work_mem and maintenance_work_mem

On 10/09/2013 10:45 AM, Bruce Momjian wrote:

On Wed, Oct 9, 2013 at 04:40:38PM +0200, Pavel Stehule wrote:

Effectively, if every session uses one full work_mem, you end up with
total work_mem usage equal to shared_buffers.

We can try a different algorithm to scale up work_mem, but it seems wise
to auto-scale it up to some extent based on shared_buffers.

In my experience a optimal value of work_mem depends on data and load, so I
prefer a work_mem as independent parameter.

But it still is an independent parameter. I am just changing the default.

The danger with work_mem especially is that setting it too high can lead
to crashing postgres or your system at some stage down the track, so
autotuning it is kinda dangerous, much more dangerous than autotuning
shared buffers.

The assumption that each connection won't use lots of work_mem is also
false, I think, especially in these days of connection poolers.

I'm not saying don't do it, but I think we need to be quite conservative
about it. A reasonable default might be (shared_buffers / (n *
max_connections)) FSVO n, but I'm not sure what n should be. Instinct
says something like 4, but I have no data to back that up.

cheers

andrew


#11 Bruce Momjian
bruce@momjian.us
In reply to: Pavel Stehule (#9)
Re: Auto-tuning work_mem and maintenance_work_mem

On Wed, Oct 9, 2013 at 05:01:24PM +0200, Pavel Stehule wrote:

FYI, this auto-tuning is not for us, who understand the parameters and
how they interact, but for the 90% of our users who would benefit from
better defaults. It is true that there might now be cases where you
would need to _reduce_ work_mem from its default, but I think the new
computed default will be better for most users.

then we should to use as base a how much dedicated RAM is for PG - not shared
buffers.

Yes, that was Josh Berkus's suggestion, and we can switch to that,
though it requires a new GUC parameter, with shared_buffers then tuned
from it.

I went with shared_buffers because, unlike the others, it is a fixed
allocation quantity, while the others are much more variable and harder
to set. I figured we could keep our 25% estimate for shared_buffers and
everything else would fall in line.


#12 Pavel Stehule
pavel.stehule@gmail.com
In reply to: Bruce Momjian (#11)
Re: Auto-tuning work_mem and maintenance_work_mem

2013/10/9 Bruce Momjian <bruce@momjian.us>

On Wed, Oct 9, 2013 at 05:01:24PM +0200, Pavel Stehule wrote:

FYI, this auto-tuning is not for us, who understand the parameters and
how they interact, but for the 90% of our users who would benefit from
better defaults. It is true that there might now be cases where you
would need to _reduce_ work_mem from its default, but I think the new
computed default will be better for most users.

then we should to use as base a how much dedicated RAM is for PG - not
shared buffers.

Yes, that was Josh Berkus's suggestion, and we can switch to that,
though it requires a new GUC parameter, and then shared_buffers gets
tuned on that.

I went with shared_buffers because unlike the others, it is a fixed
allocation quantity, while the other are much more variable and harder
to set. I figured we could keep our 25% estimate of shared_buffers and
everything else would fall in line.

I understand, but your proposal reverses that logic. Maybe it would be
better to wait for the new GUC parameter and then implement this feature,
so it is logical and simple to understand.

Pavel


#13 Bruce Momjian
bruce@momjian.us
In reply to: Andrew Dunstan (#10)
Re: Auto-tuning work_mem and maintenance_work_mem

On Wed, Oct 9, 2013 at 11:06:07AM -0400, Andrew Dunstan wrote:

On 10/09/2013 10:45 AM, Bruce Momjian wrote:

On Wed, Oct 9, 2013 at 04:40:38PM +0200, Pavel Stehule wrote:

Effectively, if every session uses one full work_mem, you end up with
total work_mem usage equal to shared_buffers.

We can try a different algorithm to scale up work_mem, but it seems wise
to auto-scale it up to some extent based on shared_buffers.

In my experience a optimal value of work_mem depends on data and load, so I
prefer a work_mem as independent parameter.

But it still is an independent parameter. I am just changing the default.

The danger with work_mem especially is that setting it too high can
lead to crashing postgres or your system at some stage down the
track, so autotuning it is kinda dangerous, much more dangerous than
autotuning shared buffers.

Good point.

The assumption that each connection won't use lots of work_mem is
also false, I think, especially in these days of connection poolers.

OK, makes sense because the sessions last longer.

I'm not saying don't do it, but I think we need to be quite
conservative about it. A reasonable default might be (shared_buffers
/ (n * max_connections)) FSVO n, but I'm not sure what n should be.
Instinct says something like 4, but I have no data to back that up.

I am fine with '4' --- it worked as an effective_cache_size multiplier. ;-)
I think we should try to hit the existing defaults, which would mean we
would use this computation:

(shared_buffers / 4) / max_connections + 768kB / BLCKSZ

For the default 128MB shared_buffers and 100 max_connections, this gives:

(16384 / 4) / 100 + (768 * 1024) / 8192

which gives us 136, and that is 136 * 8192 or 1088k, close to 1MB.

For 10x shared buffers, 163840, it gives a work_mem of 4040k, rather
than the 10M I was computing in the original patch.

How is that?
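The computation above can be checked with a short sketch; it mirrors the arithmetic in the message (buffer counts in 8kB pages), not the patch itself:

```python
BLCKSZ = 8192  # default PostgreSQL block size in bytes

def work_mem_buffers(shared_buffers, max_connections):
    # shared_buffers is expressed in 8kB buffers; result is also buffers
    return shared_buffers // 4 // max_connections + (768 * 1024) // BLCKSZ

# Default: 128MB shared_buffers (16384 buffers), 100 connections
print(work_mem_buffers(16384, 100) * BLCKSZ // 1024)   # 1088 (kB)

# 10x shared_buffers: 163840 buffers
print(work_mem_buffers(163840, 100) * BLCKSZ // 1024)  # 4040 (kB)
```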


#14 Bruce Momjian
bruce@momjian.us
In reply to: Pavel Stehule (#12)
Re: Auto-tuning work_mem and maintenance_work_mem

On Wed, Oct 9, 2013 at 06:20:13PM +0200, Pavel Stehule wrote:

On Wed, Oct 9, 2013 at 05:01:24PM +0200, Pavel Stehule wrote:

FYI, this auto-tuning is not for us, who understand the parameters and
how they interact, but for the 90% of our users who would benefit from
better defaults. It is true that there might now be cases where you
would need to _reduce_ work_mem from its default, but I think the new
computed default will be better for most users.

then we should to use as base a how much dedicated RAM is for PG - not
shared buffers.

Yes, that was Josh Berkus's suggestion, and we can switch to that,
though it requires a new GUC parameter, and then shared_buffers gets
tuned on that.

I went with shared_buffers because unlike the others, it is a fixed
allocation quantity, while the other are much more variable and harder
to set. I figured we could keep our 25% estimate of shared_buffers and
everything else would fall in line.

I understand, but your proposal change a logic to opposite direction. Maybe
better is wait to new GUC parameter, and then implement this feature, so be
logical and simply understandable.

OK, I can easily do that. What do others think?


#15 Stephen Frost
sfrost@snowman.net
In reply to: Pavel Stehule (#12)
Re: Auto-tuning work_mem and maintenance_work_mem

* Pavel Stehule (pavel.stehule@gmail.com) wrote:

2013/10/9 Bruce Momjian <bruce@momjian.us>

I went with shared_buffers because unlike the others, it is a fixed
allocation quantity, while the other are much more variable and harder
to set. I figured we could keep our 25% estimate of shared_buffers and
everything else would fall in line.

I understand, but your proposal change a logic to opposite direction. Maybe
better is wait to new GUC parameter, and then implement this feature, so be
logical and simply understandable.

I disagree - having a better default than what we have now is almost
certainly going to be a huge improvement in the vast majority of cases.
How we arrive at the default isn't particularly relevant as long as we
document it. Users who end up with the default don't do so because they
read the docs and said "oh, yeah, the way they calculated the default
makes a lot of sense"; they end up using it because they never open the
config file at all.

In other words, I think the set of people who would appreciate having
the default calculated in a good way has no intersection with the set of
people who *use* the default values, which is the audience that the
default values are for.

Thanks,

Stephen

#16 Stephen Frost
sfrost@snowman.net
In reply to: Bruce Momjian (#13)
Re: Auto-tuning work_mem and maintenance_work_mem

* Bruce Momjian (bruce@momjian.us) wrote:

I think we should try to hit the existing defaults, which would mean we
would use this computation:

For my 2c, I was hoping this would improve things for our users by
raising the tiny 1M default work_mem, so I don't agree that we should
simply be coming up with an algorithm to hit the same numbers we already
have today.

(shared_buffers / 4) / max_connections + 768k / BUFSZ

This would give us for a default 128MB shared buffers and 100
max_connections:

(16384 / 4) / 100 + (768 * 1024) / 8192

which gives us 136, and that is 136 * 8192 or 1088k, close to 1MB.

For 10x shared buffers, 163840, it gives a work_mem of 4040k, rather
than the 10M I was computing in the original patch.

How is that?

So this would only help if people are already going in and modifying
shared_buffers, but not setting work_mem? I'd rather see better
defaults for users that don't touch anything, such as 4MB or even
larger.

Thanks,

Stephen

#17 Josh Berkus
josh@agliodbs.com
In reply to: Bruce Momjian (#1)
Re: Auto-tuning work_mem and maintenance_work_mem

On 10/09/2013 09:30 AM, Stephen Frost wrote:

I went with shared_buffers because unlike the others, it is a fixed
allocation quantity, while the other are much more variable and harder
to set. I figured we could keep our 25% estimate of shared_buffers and
everything else would fall in line.

I understand, but your proposal change a logic to opposite direction. Maybe
better is wait to new GUC parameter, and then implement this feature, so be
logical and simply understandable.

I disagree- having a better default than what we have now is going to
almost certainly be a huge improvement in the vast majority of cases.
How we arrive at the default isn't particularly relevant as long as we
document it. Users who end up using the default don't do so because
they read the docs and said "oh, yeah, the way they calculated the
default makes a lot of sense", then end up using it because they never
open the config file, at all.

FWIW, I've been using the following calculations as "starting points"
for work_mem with both clients and students. In 80-90% of cases, the
user never adjusts the thresholds again, so I'd say that passes the test
for a "good enough" setting.

The main goal is (a) not to put a default low ceiling on work_mem for
people who have lots of RAM and (b) lower the limit for users who have
way too many connections on a low-RAM machine.

# Most web applications should use the formula below, because their
# queries often require no work_mem.
# work_mem = ( AvRAM / max_connections ) ROUND DOWN to 2^x
# work_mem = 4MB # for 2GB server with 300 connections
# Solaris: cut the above in half.

# Formula for most BI/DW applications, or others running many complex
# queries:
# work_mem = ( AvRAM / ( 2 * max_connections ) ) ROUND DOWN to 2^x
# work_mem = 128MB # DW server with 32GB RAM and 40 connections

AvRAM is "available ram", which for purposes of this approach would be
4X shared_buffers. So the final formula would be:

shared_buffers * 4 / max_connections = work_mem

*however*, there's a couple problems with autotuning the above:

1) it's strongly workload-dependent; we need to know if the user is
doing DW or OLTP.

2) few users adjust their max_connections downwards, even when it's
warranted.

3) we also need to know if the user is on a platform like Solaris or
FreeBSD which doesn't overcommit RAM allocations per-backend.

BTW, in extensive testing of DW workloads, I've never seen an individual
backend allocate more than 3X work_mem total.

So if we want a completely generic limit, I would say:

1MB << (shared_buffers * 2 / max_connections) << 256MB

That is: double shared_buffers, divide by max_connections, and clamp the
result between 1MB and 256MB.

Overall, our real answer to autotuning work_mem is to have work_mem
admissions control, per Kevin's proposal a couple years ago.

maintenance_work_mem is easier, because we only really care about the
number of autovacuum daemons, which is usually 3. So:

8MB << (shared_buffers / autovacuum_workers) << 256MB

... would do it.
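The two clamped formulas above can be sketched directly. This is illustrative only; the bounds and the example inputs are the ones proposed in this message:

```python
def clamp(value, lo, hi):
    return max(lo, min(value, hi))

def work_mem_mb(shared_buffers_mb, max_connections):
    # 1MB << (shared_buffers * 2 / max_connections) << 256MB
    return clamp(shared_buffers_mb * 2 // max_connections, 1, 256)

def maintenance_work_mem_mb(shared_buffers_mb, autovacuum_workers=3):
    # 8MB << (shared_buffers / autovacuum_workers) << 256MB
    return clamp(shared_buffers_mb // autovacuum_workers, 8, 256)

print(work_mem_mb(128, 100))         # 2   (default-sized install)
print(work_mem_mb(8192, 40))         # 256 (capped on a big DW box)
print(maintenance_work_mem_mb(128))  # 42
```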

Note that I'd expect to adjust the upper limits of these ranges each
year, as larger and larger RAM becomes commonplace and as we work out
PG's issues with using large RAM blocks.

I'm not sure that temp_buffers can be autotuned at all. We'd have to
make assumptions about how many temp tables a particular application
uses, which is going to be either "a lot" or "none at all". However, at
a stab:

1MB << (shared_buffers * 4 / max_connections) << 512MB

QUESTION: at one time (7.2?), we allocated work_mem purely by doubling
RAM requests, which meant that setting work_mem to any non-binary value
meant you actually got the next lowest binary value. Is that no longer
true?

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


#18 David Fetter
david@fetter.org
In reply to: Stephen Frost (#15)
Re: Auto-tuning work_mem and maintenance_work_mem

On Wed, Oct 09, 2013 at 12:30:22PM -0400, Stephen Frost wrote:

* Pavel Stehule (pavel.stehule@gmail.com) wrote:

2013/10/9 Bruce Momjian <bruce@momjian.us>

I went with shared_buffers because unlike the others, it is a fixed
allocation quantity, while the other are much more variable and harder
to set. I figured we could keep our 25% estimate of shared_buffers and
everything else would fall in line.

I understand, but your proposal change a logic to opposite direction. Maybe
better is wait to new GUC parameter, and then implement this feature, so be
logical and simply understandable.

I disagree- having a better default than what we have now is going to
almost certainly be a huge improvement in the vast majority of cases.
How we arrive at the default isn't particularly relevant as long as we
document it. Users who end up using the default don't do so because
they read the docs and said "oh, yeah, the way they calculated the
default makes a lot of sense", then end up using it because they never
open the config file, at all.

In other words, I think the set of people who would appreciate having
the default calculated in a good way has no intersection with the set of
people who *use* the default values, which is the audience that the
default values are for.

+1 for setting defaults which assume an at least vaguely modern piece
of hardware.

By and large, people are not installing PostgreSQL for the very first
time on a server the newest component of which is ten years old.

Cheers,
David.
--
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david.fetter@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


#19 Bruce Momjian
bruce@momjian.us
In reply to: Bruce Momjian (#13)
Re: Auto-tuning work_mem and maintenance_work_mem

On Wed, Oct 9, 2013 at 12:25:49PM -0400, Bruce Momjian wrote:

I'm not saying don't do it, but I think we need to be quite
conservative about it. A reasonable default might be (shared_buffers
/ (n * max_connections)) FSVO n, but I'm not sure what n should be.
Instinct says something like 4, but I have no data to back that up.

I am fine with '4' --- it worked as an effective_cache_size multiplier. ;-)
I think we should try to hit the existing defaults, which would mean we
would use this computation:

(shared_buffers / 4) / max_connections + 768kB / BLCKSZ

This would give us for a default 128MB shared buffers and 100
max_connections:

(16384 / 4) / 100 + (768 * 1024) / 8192

which gives us 136, and that is 136 * 8192 or 1088k, close to 1MB.

For 10x shared buffers, 163840, it gives a work_mem of 4040k, rather
than the 10M I was computing in the original patch.

How is that?

In summary, that would be 615MB for shared_buffers of 2GB, assuming one
work_mem per session, and assuming you are running the maximum number of
sessions, which you would not normally do.
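The 615MB figure follows from the earlier formula, assuming one full work_mem per session, 100 sessions, and decimal megabytes (a quick sketch, not part of the patch):

```python
BLCKSZ = 8192  # default PostgreSQL block size in bytes

def work_mem_buffers(shared_buffers, max_connections):
    # shared_buffers in 8kB buffers; result also in buffers
    return shared_buffers // 4 // max_connections + (768 * 1024) // BLCKSZ

buffers_2gb = 2 * 1024 * 1024 * 1024 // BLCKSZ  # 262144 buffers

per_session = work_mem_buffers(buffers_2gb, 100) * BLCKSZ  # bytes
print(per_session * 100 // 10**6)  # ~615 (decimal MB across 100 sessions)
```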


#20 Joshua D. Drake
jd@commandprompt.com
In reply to: Bruce Momjian (#8)
Re: Auto-tuning work_mem and maintenance_work_mem

On 10/09/2013 07:58 AM, Bruce Momjian wrote:

But it still is an independent parameter. I am just changing the default.

maintenance_work_mem can depend on work_mem ~ work_mem * 1 * max_connection / 4

That is kind of hard to do because we would have to figure out if the
old maintenance_work_mem was set from a default computation or by the
user.

FYI, this auto-tuning is not for us, who understand the parameters and
how they interact, but for the 90% of our users who would benefit from
better defaults. It is true that there might now be cases where you
would need to _reduce_ work_mem from its default, but I think the new
computed default will be better for most users.

Just to step in here as a consultant: Bruce is right on here. Autotuning
has nothing to do with us; it has to do with Rails developers who deploy
PostgreSQL and know nothing of it except what ActiveRecord tells them
(I am not being rude here).

We could argue all day about what the best equation is for this; the key
is to pick something reasonable, not perfect.

Joshua D. Drake

--
Command Prompt, Inc. - http://www.commandprompt.com/ 509-416-6579
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
For my dreams of your image that blossoms
a rose in the deeps of my heart. - W.B. Yeats

