Auto-tuning work_mem and maintenance_work_mem
Josh Berkus suggested here that work_mem and maintenance_work_mem could
be auto-tuned like effective_cache_size:
/messages/by-id/50ECCF93.3060101@agliodbs.com
The attached patch implements this, closely matching the default values
for the default shared_buffers value:
test=> SHOW shared_buffers;
shared_buffers
----------------
128MB
(1 row)
test=> SHOW work_mem;
work_mem
----------
1310kB
(1 row)
test=> SHOW maintenance_work_mem;
maintenance_work_mem
----------------------
20971kB
(1 row)
Previous defaults were 1MB and 16MB, but the new defaults don't match
exactly because our max_connections is a power of 10 (100), not a power
of 2 (128). Of course, if shared_buffers is 10x larger, those defaults
become 10x larger.
FYI, I based maintenance_work_mem's default on shared_buffers, not on
work_mem, because it was too complex to change maintenance_work_mem when
someone changes work_mem.
I will work on auto-tuning temp_buffers next. Any other suggestions?
wal_buffers is already auto-tuned.
--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ Everyone has their own god. +
Attachment: work_mem.diff (text/x-diff, charset=us-ascii, +116/-24 lines)
On 2013-10-09 10:30:46 -0400, Bruce Momjian wrote:
> Josh Berkus suggested here that work_mem and maintenance_work_mem could
> be auto-tuned like effective_cache_size:
> /messages/by-id/50ECCF93.3060101@agliodbs.com
> The attached patch implements this, closely matching the default values
> for the default shared_buffers value:
There imo is no correlation between correct values for shared_buffers
and work_mem at all. They really are much more workload dependent than
anything.
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On Wed, Oct 9, 2013 at 04:32:44PM +0200, Andres Freund wrote:
> On 2013-10-09 10:30:46 -0400, Bruce Momjian wrote:
>> Josh Berkus suggested here that work_mem and maintenance_work_mem could
>> be auto-tuned like effective_cache_size:
>> /messages/by-id/50ECCF93.3060101@agliodbs.com
>> The attached patch implements this, closely matching the default values
>> for the default shared_buffers value:
>
> There imo is no correlation between correct values for shared_buffers
> and work_mem at all. They really are much more workload dependent than
> anything.
Well, that is true, but the more shared_buffers you allocate, the more
work_mem you _probably_ want to use. This is only a change of the
default.
Effectively, if every session uses one full work_mem, you end up with
total work_mem usage equal to shared_buffers.
We can try a different algorithm to scale up work_mem, but it seems wise
to auto-scale it up to some extent based on shared_buffers.
--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ Everyone has their own god. +
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On 2013-10-09 10:35:28 -0400, Bruce Momjian wrote:
> On Wed, Oct 9, 2013 at 04:32:44PM +0200, Andres Freund wrote:
>> On 2013-10-09 10:30:46 -0400, Bruce Momjian wrote:
>>> Josh Berkus suggested here that work_mem and maintenance_work_mem could
>>> be auto-tuned like effective_cache_size:
>>> /messages/by-id/50ECCF93.3060101@agliodbs.com
>>> The attached patch implements this, closely matching the default values
>>> for the default shared_buffers value:
>>
>> There imo is no correlation between correct values for shared_buffers
>> and work_mem at all. They really are much more workload dependent than
>> anything.
>
> Well, that is true, but the more shared_buffers you allocate, the more
> work_mem you _probably_ want to use. This is only a change of the
> default.
Not at all. There's lots of OLTP workloads where huge shared buffers are
beneficial but you definitely don't want a huge work_mem.
> Effectively, if every session uses one full work_mem, you end up with
> total work_mem usage equal to shared_buffers.
But that's not how work_mem works. It's limiting memory, per node in the
query. So a complex query can use it several dozen times.
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
2013/10/9 Bruce Momjian <bruce@momjian.us>
> On Wed, Oct 9, 2013 at 04:32:44PM +0200, Andres Freund wrote:
>> On 2013-10-09 10:30:46 -0400, Bruce Momjian wrote:
>>> Josh Berkus suggested here that work_mem and maintenance_work_mem could
>>> be auto-tuned like effective_cache_size:
>>> /messages/by-id/50ECCF93.3060101@agliodbs.com
>>> The attached patch implements this, closely matching the default values
>>> for the default shared_buffers value:
>>
>> There imo is no correlation between correct values for shared_buffers
>> and work_mem at all. They really are much more workload dependent than
>> anything.
>
> Well, that is true, but the more shared_buffers you allocate, the more
> work_mem you _probably_ want to use. This is only a change of the
> default.
>
> Effectively, if every session uses one full work_mem, you end up with
> total work_mem usage equal to shared_buffers.
>
> We can try a different algorithm to scale up work_mem, but it seems wise
> to auto-scale it up to some extent based on shared_buffers.
In my experience an optimal value of work_mem depends on data and load, so I
prefer work_mem as an independent parameter.
maintenance_work_mem can depend on work_mem ~ work_mem * 1 * max_connection
/ 4
Regards
Pavel
On Wed, Oct 9, 2013 at 04:38:01PM +0200, Andres Freund wrote:
> On 2013-10-09 10:35:28 -0400, Bruce Momjian wrote:
>> On Wed, Oct 9, 2013 at 04:32:44PM +0200, Andres Freund wrote:
>>> On 2013-10-09 10:30:46 -0400, Bruce Momjian wrote:
>>>> Josh Berkus suggested here that work_mem and maintenance_work_mem could
>>>> be auto-tuned like effective_cache_size:
>>>> /messages/by-id/50ECCF93.3060101@agliodbs.com
>>>> The attached patch implements this, closely matching the default values
>>>> for the default shared_buffers value:
>>>
>>> There imo is no correlation between correct values for shared_buffers
>>> and work_mem at all. They really are much more workload dependent than
>>> anything.
>>
>> Well, that is true, but the more shared_buffers you allocate, the more
>> work_mem you _probably_ want to use. This is only a change of the
>> default.
>
> Not at all. There's lots of OLTP workloads where huge shared buffers are
> beneficial but you definitely don't want a huge work_mem.
>
>> Effectively, if every session uses one full work_mem, you end up with
>> total work_mem usage equal to shared_buffers.
>
> But that's not how work_mem works. It's limiting memory, per node in the
> query. So a complex query can use it several dozen times.
True, but again, odds are all sessions are not going to use the full
work_mem allocation, so I figured assuming each session uses one full
work_mem was probably an over-estimate.
You are saying that auto-tuning work_mem for typical workloads is not a
win? I don't understand how that can be true. You can always change
the default for atypical workloads.
As a data point, users often set shared_buffers to 2GB, but the default
work_mem of 1MB would mean you would use perhaps 100MB for all sorting
--- that seems kind of small as a default.
--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ Everyone has their own god. +
On Wed, Oct 9, 2013 at 04:40:38PM +0200, Pavel Stehule wrote:
>> Effectively, if every session uses one full work_mem, you end up with
>> total work_mem usage equal to shared_buffers.
>>
>> We can try a different algorithm to scale up work_mem, but it seems wise
>> to auto-scale it up to some extent based on shared_buffers.
>
> In my experience an optimal value of work_mem depends on data and load, so I
> prefer work_mem as an independent parameter.
But it still is an independent parameter. I am just changing the default.
> maintenance_work_mem can depend on work_mem ~ work_mem * 1 * max_connection / 4
That is kind of hard to do because we would have to figure out if the
old maintenance_work_mem was set from a default computation or by the
user.
--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ Everyone has their own god. +
On Wed, Oct 9, 2013 at 10:45:52AM -0400, Bruce Momjian wrote:
> On Wed, Oct 9, 2013 at 04:40:38PM +0200, Pavel Stehule wrote:
>>> Effectively, if every session uses one full work_mem, you end up with
>>> total work_mem usage equal to shared_buffers.
>>>
>>> We can try a different algorithm to scale up work_mem, but it seems wise
>>> to auto-scale it up to some extent based on shared_buffers.
>>
>> In my experience an optimal value of work_mem depends on data and load, so I
>> prefer work_mem as an independent parameter.
>
> But it still is an independent parameter. I am just changing the default.
>
>> maintenance_work_mem can depend on work_mem ~ work_mem * 1 * max_connection / 4
>
> That is kind of hard to do because we would have to figure out if the
> old maintenance_work_mem was set from a default computation or by the
> user.
FYI, this auto-tuning is not for us, who understand the parameters and
how they interact, but for the 90% of our users who would benefit from
better defaults. It is true that there might now be cases where you
would need to _reduce_ work_mem from its default, but I think the new
computed default will be better for most users.
--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ Everyone has their own god. +
2013/10/9 Bruce Momjian <bruce@momjian.us>
> On Wed, Oct 9, 2013 at 10:45:52AM -0400, Bruce Momjian wrote:
>> On Wed, Oct 9, 2013 at 04:40:38PM +0200, Pavel Stehule wrote:
>>>> Effectively, if every session uses one full work_mem, you end up with
>>>> total work_mem usage equal to shared_buffers.
>>>>
>>>> We can try a different algorithm to scale up work_mem, but it seems wise
>>>> to auto-scale it up to some extent based on shared_buffers.
>>>
>>> In my experience an optimal value of work_mem depends on data and load,
>>> so I prefer work_mem as an independent parameter.
>>
>> But it still is an independent parameter. I am just changing the default.
>>
>>> maintenance_work_mem can depend on work_mem ~ work_mem * 1 * max_connection / 4
>>
>> That is kind of hard to do because we would have to figure out if the
>> old maintenance_work_mem was set from a default computation or by the
>> user.
>
> FYI, this auto-tuning is not for us, who understand the parameters and
> how they interact, but for the 90% of our users who would benefit from
> better defaults. It is true that there might now be cases where you
> would need to _reduce_ work_mem from its default, but I think the new
> computed default will be better for most users.
then we should use as the base how much RAM is dedicated to PG - not
shared buffers.
Pavel
On 10/09/2013 10:45 AM, Bruce Momjian wrote:
> On Wed, Oct 9, 2013 at 04:40:38PM +0200, Pavel Stehule wrote:
>>> Effectively, if every session uses one full work_mem, you end up with
>>> total work_mem usage equal to shared_buffers.
>>>
>>> We can try a different algorithm to scale up work_mem, but it seems wise
>>> to auto-scale it up to some extent based on shared_buffers.
>>
>> In my experience an optimal value of work_mem depends on data and load, so I
>> prefer work_mem as an independent parameter.
>
> But it still is an independent parameter. I am just changing the default.
The danger with work_mem especially is that setting it too high can lead
to crashing postgres or your system at some stage down the track, so
autotuning it is kinda dangerous, much more dangerous than autotuning
shared buffers.
The assumption that each connection won't use lots of work_mem is also
false, I think, especially in these days of connection poolers.
I'm not saying don't do it, but I think we need to be quite conservative
about it. A reasonable default might be (shared_buffers / (n *
max_connections)) FSVO n, but I'm not sure what n should be. Instinct
says something like 4, but I have no data to back that up.
cheers
andrew
On Wed, Oct 9, 2013 at 05:01:24PM +0200, Pavel Stehule wrote:
>> FYI, this auto-tuning is not for us, who understand the parameters and
>> how they interact, but for the 90% of our users who would benefit from
>> better defaults. It is true that there might now be cases where you
>> would need to _reduce_ work_mem from its default, but I think the new
>> computed default will be better for most users.
>
> then we should use as the base how much RAM is dedicated to PG - not
> shared buffers.
Yes, that was Josh Berkus's suggestion, and we can switch to that,
though it requires a new GUC parameter, and then shared_buffers gets
tuned on that.
I went with shared_buffers because unlike the others, it is a fixed
allocation quantity, while the others are much more variable and harder
to set. I figured we could keep our 25% estimate of shared_buffers and
everything else would fall in line.
--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ Everyone has their own god. +
2013/10/9 Bruce Momjian <bruce@momjian.us>
> On Wed, Oct 9, 2013 at 05:01:24PM +0200, Pavel Stehule wrote:
>>> FYI, this auto-tuning is not for us, who understand the parameters and
>>> how they interact, but for the 90% of our users who would benefit from
>>> better defaults. It is true that there might now be cases where you
>>> would need to _reduce_ work_mem from its default, but I think the new
>>> computed default will be better for most users.
>>
>> then we should use as the base how much RAM is dedicated to PG - not
>> shared buffers.
>
> Yes, that was Josh Berkus's suggestion, and we can switch to that,
> though it requires a new GUC parameter, and then shared_buffers gets
> tuned on that.
>
> I went with shared_buffers because unlike the others, it is a fixed
> allocation quantity, while the others are much more variable and harder
> to set. I figured we could keep our 25% estimate of shared_buffers and
> everything else would fall in line.
I understand, but your proposal changes the logic in the opposite
direction. Maybe it is better to wait for the new GUC parameter and then
implement this feature, so it is logical and simply understandable.
Pavel
On Wed, Oct 9, 2013 at 11:06:07AM -0400, Andrew Dunstan wrote:
> On 10/09/2013 10:45 AM, Bruce Momjian wrote:
>> On Wed, Oct 9, 2013 at 04:40:38PM +0200, Pavel Stehule wrote:
>>>> Effectively, if every session uses one full work_mem, you end up with
>>>> total work_mem usage equal to shared_buffers.
>>>>
>>>> We can try a different algorithm to scale up work_mem, but it seems wise
>>>> to auto-scale it up to some extent based on shared_buffers.
>>>
>>> In my experience an optimal value of work_mem depends on data and load,
>>> so I prefer work_mem as an independent parameter.
>>
>> But it still is an independent parameter. I am just changing the default.
>
> The danger with work_mem especially is that setting it too high can
> lead to crashing postgres or your system at some stage down the
> track, so autotuning it is kinda dangerous, much more dangerous than
> autotuning shared buffers.
Good point.
> The assumption that each connection won't use lots of work_mem is
> also false, I think, especially in these days of connection poolers.
OK, makes sense because the sessions last longer.
> I'm not saying don't do it, but I think we need to be quite
> conservative about it. A reasonable default might be (shared_buffers
> / (n * max_connections)) FSVO n, but I'm not sure what n should be.
> Instinct says something like 4, but I have no data to back that up.
I am fine with '4' --- worked as an effective_cache_size multiplier. ;-)
I think we should try to hit the existing defaults, which would mean we
would use this computation:
(shared_buffers / 4) / max_connections + 768k / BUFSZ
This would give us for a default 128MB shared buffers and 100
max_connections:
(16384 / 4) / 100 + (768 * 1024) / 8192
which gives us 136, and that is 136 * 8192 or 1088k, close to 1MB.
For 10x shared buffers, 163840, it gives a work_mem of 4040k, rather
than the 10M I was computing in the original patch.
How is that?
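To make the arithmetic above easy to check, here is a small sketch of the proposed computation (my own illustration, not code from the patch; function names are mine, and what the formula calls BUFSZ is the 8kB page size, BLCKSZ in the PostgreSQL source):

```python
BLCKSZ = 8192  # PostgreSQL page size in bytes (the "BUFSZ" in the formula)

def auto_work_mem_kb(shared_buffers_kb, max_connections=100):
    # work_mem in pages = (shared_buffers / 4) / max_connections
    #                     + 768kB worth of pages
    shared_buffers_pages = shared_buffers_kb * 1024 // BLCKSZ
    pages = shared_buffers_pages // 4 // max_connections \
        + (768 * 1024) // BLCKSZ
    return pages * BLCKSZ // 1024  # convert back to kB

# Default 128MB shared_buffers (16384 pages) -> 136 pages = 1088kB,
# close to the old 1MB default
print(auto_work_mem_kb(128 * 1024))   # 1088
# 10x shared_buffers (163840 pages) -> 4040kB
print(auto_work_mem_kb(1280 * 1024))  # 4040
```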
--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ Everyone has their own god. +
On Wed, Oct 9, 2013 at 06:20:13PM +0200, Pavel Stehule wrote:
>> On Wed, Oct 9, 2013 at 05:01:24PM +0200, Pavel Stehule wrote:
>>>> FYI, this auto-tuning is not for us, who understand the parameters and
>>>> how they interact, but for the 90% of our users who would benefit from
>>>> better defaults. It is true that there might now be cases where you
>>>> would need to _reduce_ work_mem from its default, but I think the new
>>>> computed default will be better for most users.
>>>
>>> then we should use as the base how much RAM is dedicated to PG - not
>>> shared buffers.
>>
>> Yes, that was Josh Berkus's suggestion, and we can switch to that,
>> though it requires a new GUC parameter, and then shared_buffers gets
>> tuned on that.
>>
>> I went with shared_buffers because unlike the others, it is a fixed
>> allocation quantity, while the others are much more variable and harder
>> to set. I figured we could keep our 25% estimate of shared_buffers and
>> everything else would fall in line.
>
> I understand, but your proposal changes the logic in the opposite
> direction. Maybe it is better to wait for the new GUC parameter and then
> implement this feature, so it is logical and simply understandable.
OK, I can easily do that. What do others think?
--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ Everyone has their own god. +
* Pavel Stehule (pavel.stehule@gmail.com) wrote:
> 2013/10/9 Bruce Momjian <bruce@momjian.us>
>> I went with shared_buffers because unlike the others, it is a fixed
>> allocation quantity, while the others are much more variable and harder
>> to set. I figured we could keep our 25% estimate of shared_buffers and
>> everything else would fall in line.
>
> I understand, but your proposal changes the logic in the opposite
> direction. Maybe it is better to wait for the new GUC parameter and then
> implement this feature, so it is logical and simply understandable.
I disagree- having a better default than what we have now is going to
almost certainly be a huge improvement in the vast majority of cases.
How we arrive at the default isn't particularly relevant as long as we
document it. Users who end up using the default don't do so because
they read the docs and said "oh, yeah, the way they calculated the
default makes a lot of sense", then end up using it because they never
open the config file, at all.
In other words, I think the set of people who would appreciate having
the default calculated in a good way has no intersection with the set of
people who *use* the default values, which is the audience that the
default values are for.
Thanks,
Stephen
* Bruce Momjian (bruce@momjian.us) wrote:
> I think we should try to hit the existing defaults, which would mean we
> would use this computation:
For my 2c, I was hoping this would improve things for our users by
raising the tiny 1M default work_mem, so I don't agree that we should
simply be coming up with an algorithm to hit the same numbers we already
have today.
>     (shared_buffers / 4) / max_connections + 768k / BUFSZ
>
> This would give us for a default 128MB shared buffers and 100
> max_connections:
>
>     (16384 / 4) / 100 + (768 * 1024) / 8192
>
> which gives us 136, and that is 136 * 8192 or 1088k, close to 1MB.
>
> For 10x shared buffers, 163840, it gives a work_mem of 4040k, rather
> than the 10M I was computing in the original patch.
>
> How is that?
So this would only help if people are already going in and modifying
shared_buffers, but not setting work_mem? I'd rather see better
defaults for users that don't touch anything, such as 4MB or even
larger.
Thanks,
Stephen
On 10/09/2013 09:30 AM, Stephen Frost wrote:
>>> I went with shared_buffers because unlike the others, it is a fixed
>>> allocation quantity, while the others are much more variable and harder
>>> to set. I figured we could keep our 25% estimate of shared_buffers and
>>> everything else would fall in line.
>>
>> I understand, but your proposal changes the logic in the opposite
>> direction. Maybe it is better to wait for the new GUC parameter and then
>> implement this feature, so it is logical and simply understandable.
>
> I disagree- having a better default than what we have now is going to
> almost certainly be a huge improvement in the vast majority of cases.
> How we arrive at the default isn't particularly relevant as long as we
> document it. Users who end up using the default don't do so because
> they read the docs and said "oh, yeah, the way they calculated the
> default makes a lot of sense", then end up using it because they never
> open the config file, at all.
FWIW, I've been using the following calculations as "starting points"
for work_mem with both clients and students. In 80-90% of cases, the
user never adjusts the thresholds again, so I'd say that passes the test
for a "good enough" setting.
The main goal is (a) not to put a default low ceiling on work_mem for
people who have lots of RAM and (b) lower the limit for users who have
way too many connections on a low-RAM machine.
# Most web applications should use the formula below, because their
# queries often require no work_mem.
# work_mem = ( AvRAM / max_connections ) ROUND DOWN to 2^x
# work_mem = 4MB # for 2GB server with 300 connections
# Solaris: cut the above in half.
# Formula for most BI/DW applications, or others running many complex
# queries:
# work_mem = ( AvRAM / ( 2 * max_connections ) ) ROUND DOWN to 2^x
# work_mem = 128MB # DW server with 32GB RAM and 40 connections
AvRAM is "available ram", which for purposes of this approach would be
4X shared_buffers. So the final formula would be:
shared_buffers * 4 / max_connections = work_mem
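As an illustration only (the function names are mine, not Josh's), the two starting-point formulas can be sketched like this:

```python
def round_down_pow2(n):
    """Largest power of two <= n (n must be >= 1)."""
    return 1 << (n.bit_length() - 1)

def web_work_mem_mb(avram_mb, max_connections):
    # Web/OLTP formula: AvRAM / max_connections, rounded down to 2^x
    return round_down_pow2(avram_mb // max_connections)

def dw_work_mem_mb(avram_mb, max_connections):
    # BI/DW formula: AvRAM / (2 * max_connections), rounded down to 2^x
    return round_down_pow2(avram_mb // (2 * max_connections))

# AvRAM taken as 4 * shared_buffers, per the text above.
# 2GB server with 300 connections -> 4MB, matching the comment example
print(web_work_mem_mb(2048, 300))  # 4
```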
*however*, there's a couple problems with autotuning the above:
1) it's strongly workload-dependant; we need to know if the user is
doing DW or OLTP.
2) few users adjust their max_connections downwards, even when it's
warranted.
3) we also need to know if the user is on a platform like Solaris or
FreeBSD which doesn't overcommit RAM allocations per-backend.
BTW, in extensive testing of DW workloads, I've never seen an individual
backend allocate more than 3X work_mem total.
So if we want a completely generic limit, I would say:
1MB << (shared_buffers * 2 / max_connections) << 256MB
That is: divide double shared buffers by max_connections. If that's
over 1MB, raise it, but not further than 256MB.
Overall, our real answer to autotuning work_mem is to have work_mem
admissions control, per Kevin's proposal a couple years ago.
maintenance_work_mem is easier, because we only really care about the
number of autovacuum daemons, which is usually 3. so:
8MB << (shared_buffers / autovacuum_workers) << 256MB
... would do it.
Note that I'd expect to adjust the upper limits of these ranges each
year, as larger and larger RAM becomes commonplace and as we work out
PG's issues with using large RAM blocks.
I'm not sure that temp_buffers can be autotuned at all. We'd have to
make assumptions about how many temp tables a particular application
uses, which is going to be either "a lot" or "none at all". However, at
a stab:
1MB << (shared_buffers * 4 / max_connections) << 512MB
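Taken together, the three clamped formulas amount to the following sketch (my own rendering, where "a << x << b" reads as "clamp x between floor a and ceiling b"):

```python
def clamp(mb, low_mb, high_mb):
    # "low << value << high": raise to the floor, cap at the ceiling
    return max(low_mb, min(mb, high_mb))

def work_mem_mb(shared_buffers_mb, max_connections):
    # 1MB << (shared_buffers * 2 / max_connections) << 256MB
    return clamp(shared_buffers_mb * 2 // max_connections, 1, 256)

def maintenance_work_mem_mb(shared_buffers_mb, autovacuum_workers=3):
    # 8MB << (shared_buffers / autovacuum_workers) << 256MB
    return clamp(shared_buffers_mb // autovacuum_workers, 8, 256)

def temp_buffers_mb(shared_buffers_mb, max_connections):
    # 1MB << (shared_buffers * 4 / max_connections) << 512MB
    return clamp(shared_buffers_mb * 4 // max_connections, 1, 512)

# Stock defaults: 128MB shared_buffers, 100 connections, 3 av workers
print(work_mem_mb(128, 100))         # 2
print(maintenance_work_mem_mb(128))  # 42
print(temp_buffers_mb(128, 100))     # 5
```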
QUESTION: at one time (7.2?), we allocated work_mem purely by doubling
RAM requests, which meant that setting work_mem to any non-binary value
meant you actually got the next lowest binary value. Is that no longer
true?
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
On Wed, Oct 09, 2013 at 12:30:22PM -0400, Stephen Frost wrote:
> * Pavel Stehule (pavel.stehule@gmail.com) wrote:
>> 2013/10/9 Bruce Momjian <bruce@momjian.us>
>>> I went with shared_buffers because unlike the others, it is a fixed
>>> allocation quantity, while the others are much more variable and harder
>>> to set. I figured we could keep our 25% estimate of shared_buffers and
>>> everything else would fall in line.
>>
>> I understand, but your proposal changes the logic in the opposite
>> direction. Maybe it is better to wait for the new GUC parameter and then
>> implement this feature, so it is logical and simply understandable.
>
> I disagree- having a better default than what we have now is going to
> almost certainly be a huge improvement in the vast majority of cases.
> How we arrive at the default isn't particularly relevant as long as we
> document it. Users who end up using the default don't do so because
> they read the docs and said "oh, yeah, the way they calculated the
> default makes a lot of sense", then end up using it because they never
> open the config file, at all.
>
> In other words, I think the set of people who would appreciate having
> the default calculated in a good way has no intersection with the set of
> people who *use* the default values, which is the audience that the
> default values are for.
+1 for setting defaults which assume an at least vaguely modern piece
of hardware.
By and large, people are not installing PostgreSQL for the very first
time on a server the newest component of which is ten years old.
Cheers,
David.
--
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david.fetter@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics
Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
On Wed, Oct 9, 2013 at 12:25:49PM -0400, Bruce Momjian wrote:
>> I'm not saying don't do it, but I think we need to be quite
>> conservative about it. A reasonable default might be (shared_buffers
>> / (n * max_connections)) FSVO n, but I'm not sure what n should be.
>> Instinct says something like 4, but I have no data to back that up.
>
> I am fine with '4' --- worked as an effective_cache_size multiplier. ;-)
>
> I think we should try to hit the existing defaults, which would mean we
> would use this computation:
>
>     (shared_buffers / 4) / max_connections + 768k / BUFSZ
>
> This would give us for a default 128MB shared buffers and 100
> max_connections:
>
>     (16384 / 4) / 100 + (768 * 1024) / 8192
>
> which gives us 136, and that is 136 * 8192 or 1088k, close to 1MB.
>
> For 10x shared buffers, 163840, it gives a work_mem of 4040k, rather
> than the 10M I was computing in the original patch.
>
> How is that?
In summary, that would be 615MB for shared_buffers of 2GB, assuming one
work_mem per session, and assuming you are running the maximum number of
sessions, which you would not normally do.
--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ Everyone has their own god. +
On 10/09/2013 07:58 AM, Bruce Momjian wrote:
>> But it still is an independent parameter. I am just changing the default.
>>
>>> maintenance_work_mem can depend on work_mem ~ work_mem * 1 * max_connection / 4
>>
>> That is kind of hard to do because we would have to figure out if the
>> old maintenance_work_mem was set from a default computation or by the
>> user.
>
> FYI, this auto-tuning is not for us, who understand the parameters and
> how they interact, but for the 90% of our users who would benefit from
> better defaults. It is true that there might now be cases where you
> would need to _reduce_ work_mem from its default, but I think the new
> computed default will be better for most users.
Just to step in here as a consultant: Bruce is right on here. Autotuning
has nothing to do with us; it has to do with Rails developers who deploy
PostgreSQL and know nothing of it except what ActiveRecord tells them
(I am not being rude here).
We could argue all day what the best equation is for this, the key is to
pick something reasonable, not perfect.
Joshua D. Drake
--
Command Prompt, Inc. - http://www.commandprompt.com/ 509-416-6579
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
For my dreams of your image that blossoms
a rose in the deeps of my heart. - W.B. Yeats