initdb profiles
One regular topic of conversation on IRC and elsewhere is that the
settings initdb installs, especially for memory use, connections, and so
on, are often very conservative. Of course, we tell people how to tune
them to some extent, although performance tuning seems to remain a black
art. But I wondered if it might not be a good idea to allow an option to
initdb which would provide a greater possible range of settings for
max_connections, shared_buffers and so on. For example, we might offer a
profile which is very conservative for memory bound machines, medium
size for a development platform, large for a server running with other
server processes, and huge for a dedicated box, and then provide some
heuristics that initdb could apply. We'd have to let all of these
degrade nicely, so that even if the user selects the machine hog setting,
if we find we can only do something like the tiny setting, that's what
s/he would get. Also, we might need to have some tolerably portable way
of finding out about machine resources. And power users will still want
to tune things more. But it might help to alleviate our undeserved
reputation for poor performance if we provide some help to start off at
least in the right ballpark.
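A toy sketch of what such profiles and graceful degradation might look like; the profile names and all numbers here are invented for illustration, not actual initdb behavior:

```python
# Hypothetical profile table: each profile names a target resource envelope,
# and initdb would degrade to a smaller one if the requested settings don't
# fit. All names and numbers are made up for illustration.

PROFILES = {   # ordered from hungriest to most conservative
    "huge":   {"max_connections": 400, "shared_buffers": 4000},
    "large":  {"max_connections": 200, "shared_buffers": 2000},
    "medium": {"max_connections": 100, "shared_buffers": 1000},
    "tiny":   {"max_connections": 10,  "shared_buffers": 50},
}

def settle(requested, fits):
    """Start at the requested profile and degrade nicely until one fits."""
    names = list(PROFILES)
    for name in names[names.index(requested):]:
        if fits(PROFILES[name]):
            return name
    return "tiny"

# Simulate a memory-bound machine that can only manage the tiny settings:
# even a user who asks for "huge" gets "tiny".
print(settle("huge", lambda s: s["shared_buffers"] <= 100))  # -> tiny
```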
thoughts?
cheers
andrew
Andrew Dunstan wrote:
But I wondered if it might not be a good idea to allow
an option to initdb which would provide a greater possible range of
settings for max_connections, shared_buffers and so on. For example,
we might offer a profile which is very conservative for memory bound
That reminds me of an identical proposal that was rejected years ago...
machines, medium size for a development platform, large for a server
running with other server processes, and huge for a dedicated box,
and then provide some heuristics that initdb could apply. We'd have
And before long we'll have 750 profiles...
to let all of these degrade nicely, so that even if the user selects
the machine hog setting, if we find we can only do something like the
tiny setting that's what s/he would get. Also, we might need to have
And degrading nicely was a feature that we removed a long time ago. Now
you get what you ask for.
some tolerably portable way of finding out about machine resources.
And that doesn't exist.
And power users will still want to tune things more. But it might
help to alleviate our undeserved reputation for poor performance if
we provide some help to start off at least in the right ballpark.
And mind reading devices are not yet available.
So it doesn't look all that good.
All jokes aside, tuning aids are surely needed, but letting initdb guess
the required profile isn't going to do it.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
Peter Eisentraut wrote:
All jokes aside, tuning aids are surely needed, but letting initdb guess
the required profile isn't going to do it.
The idea was in fact to allow the user to provide additional information
to allow initdb to make better guesses than it currently does.
cheers
andrew
Andrew Dunstan wrote:
The idea was in fact to allow the user to provide additional
information to allow initdb to make better guesses than it currently
does.
There's certainly going to be opposition to making initdb an interactive
tool.
The other problem is that no one has ever managed to show that it is
possible to derive reasonable settings from a finite set of questions
presented to the user, plus perhaps from a reasonably portable system
analysis. If you can do that, that would be a cool tool in its own
right. And then you could call that from initdb or not depending on
taste.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
Peter Eisentraut <peter_e@gmx.net> writes:
All jokes aside, tuning aids are surely needed, but letting initdb guess
the required profile isn't going to do it.
initdb is really the wrong place for this anyway, because in many
situations (RPM installations for instance) initdb is run behind the
scenes with no opportunity for user interaction. We should be doing
our best to remove options from initdb, not add them.
I think Andrew has a good point that we need to work more on making
configuration tuning easier ... but initdb isn't the place.
regards, tom lane
Tom Lane wrote:
Peter Eisentraut <peter_e@gmx.net> writes:
All jokes aside, tuning aids are surely needed, but letting initdb guess
the required profile isn't going to do it.
initdb is really the wrong place for this anyway, because in many
situations (RPM installations for instance) initdb is run behind the
scenes with no opportunity for user interaction. We should be doing
our best to remove options from initdb, not add them.
I think Andrew has a good point that we need to work more on making
configuration tuning easier ... but initdb isn't the place.
I accept the "run from init.d" argument. So then, is there a case for
increasing the limits that initdb works with, to reflect the steep rise
we have seen in typically available memory at the low end? We currently
try {100, 50, 40, 30, 20, 10} for connections and {1000, 900, 800, 700,
600, 500, 400, 300, 200, 100, 50} for buffers. I think there's arguably
a good case for trying numbers several (4 maybe?) times this large in
both categories. Our own docs state that the default number of shared
buffers is low for efficient use, and it would be nice to try to allow
one connection per standard allowed apache client (default is 256
non-threaded and 400 threaded, I think).
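For reference, initdb's probing strategy (try the largest candidate, fall back until one works) can be sketched like this; works_with stands in for actually launching a test postmaster, and the shared-memory numbers are illustrative only:

```python
# Sketch of initdb's "try big, fall back" probing of settings. The candidate
# lists are the current ones quoted above; the simulated check is invented.

CONN_TRIES = [100, 50, 40, 30, 20, 10]       # current max_connections candidates
BUF_TRIES = [1000, 900, 800, 700, 600, 500,  # current shared_buffers candidates
             400, 300, 200, 100, 50]

def pick_first_working(candidates, works_with):
    """Return the first candidate that works, mimicking initdb's loop."""
    for value in candidates:
        if works_with(value):
            return value
    return candidates[-1]  # fall back to the smallest tried value

# Simulated check: pretend the kernel allows ~1 MB of shared memory and each
# buffer costs 8 kB (numbers chosen for illustration, not initdb's real math).
shmmax = 1_000_000
buffers_ok = lambda n: n * 8192 <= shmmax

print(pick_first_working(BUF_TRIES, buffers_ok))  # -> 100 under this cap
```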
cheers
andrew
Andrew Dunstan <andrew@dunslane.net> writes:
I accept the "run from init.d" argument. So then, is there a case for
increasing the limits that initdb works with, to reflect the steep rise
we have seen in typically available memory at the low end?
I can't see any particular harm in having initdb try somewhat-larger
values ... but how far does that really go towards fixing the issues?
Personally, the default value I currently see as far too tight is
max_fsm_pages. I'd rather see initdb trying to push that up if it's
able to establish shared_buffers and max_connections at their current
maxima.
... it would be nice to try to allow
one connection per standard allowed apache client (default is 256
non-threaded and 400 threaded, I think).
That's a mostly independent consideration, but it seems fair enough.
Can we check the exact values rather than relying on "I think"?
regards, tom lane
Andrew Dunstan wrote:
I accept the "run from init.d" argument. So then, is there a case for
increasing the limits that initdb works with, to reflect the steep
rise we have seen in typically available memory at the low end?
There is a compromise that I think we cannot make. For production
deployment, shared buffers are typically sized at about 10% to 25% of
available physical memory. I don't think we want to have a default
installation of PostgreSQL that takes 10% or more of memory just like
that. It just doesn't look good.
So the question whether initdb should by default consider up to 1000 or
up to 4000 buffers is still worth discussion, but doesn't solve the
tuning issue to a reasonable degree.
What I would like to see is that initdb would end with saying that the
system is not really tuned and that I should run pg-some-program to
improve that. pg-some-program would analyze my system, ask me a few
questions, and then output a suggested configuration (or apply it right
away). Again, the challenge is to write that program.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
heuristics that initdb could apply. We'd have to let all of these
degrade nicely, so that even if the user select the machine hog setting,
if we find we can only do something like the tiny setting that's what
s/he would get. Also, we might need to have some tolerably portable way
of finding out about machine resources. And power users will still want
to tube things more. But it might help to alleviate our undeserved
reputation for poor performance if we provide some help to start off at
least in the right ballpark.
I think we should just do what MySQL does and include:
postgresql.conf
postgresql-large.conf
postgresql-huge.conf
Chris
What I would like to see is that initdb would end with saying that the
system is not really tuned and that I should run pg-some-program to
improve that. pg-some-program would analyze my system, ask me a few
questions, and then output a suggested configuration (or apply it right
away). Again, the challenge is to write that program.
Perhaps at the end of initdb it would say "would you like
to run the PostgreSQL configuration program?"
Which would be a wizard that would ask 10 or so questions
and automatically configure us based on those questions?
--
Your PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240
PostgreSQL Replication, Consulting, Custom Programming, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/
"Joshua D. Drake" <jd@commandprompt.com> writes:
What I would like to see is that initdb would end with saying that the
system is not really tuned and that I should run pg-some-program to
improve that.
Perhaps at the end of initdb it would say would you like
to run the PostgreSQL configuration program?
You're both assuming that the output of initdb goes someplace other
than /dev/null ...
I do agree with trying to create a "configuration wizard" program,
but I think having initdb advertise it will be of only marginal use.
regards, tom lane
Tom Lane wrote:
Andrew Dunstan <andrew@dunslane.net> writes:
I accept the "run from init.d" argument. So then, is there a case for
increasing the limits that initdb works with, to reflect the steep rise
we have seen in typically available memory at the low end?
I can't see any particular harm in having initdb try somewhat-larger
values ... but how far does that really go towards fixing the issues?
Personally, the default value I currently see as far too tight is
max_fsm_pages. I'd rather see initdb trying to push that up if it's
able to establish shared_buffers and max_connections at their current
maxima.
Ok, how would the logic go? Just have a function that runs max_fsm_pages
checks after we call test_connections() and test_buffers(), or should
there be some interplay between those settings? As I understand it, the
current setting would consume all of 120,000 bytes of shared memory, so
there could well be lots of head room.
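The "all of 120,000 bytes" figure follows from the approximate per-slot cost of the FSM (roughly 6 bytes per tracked page in the 8.x era; treat the constant as an approximation):

```python
# Rough arithmetic behind the "120,000 bytes of shared memory" remark: each
# FSM page slot costs about 6 bytes (approximate figure), so the default of
# 20,000 tracked pages consumes a trivial amount of shared memory.
BYTES_PER_FSM_PAGE = 6            # approximate per-slot shared-memory cost
default_max_fsm_pages = 20_000
print(default_max_fsm_pages * BYTES_PER_FSM_PAGE)  # -> 120000 bytes
```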
... it would be nice to try to allow
one connection per standard allowed apache client (default is 256
non-threaded and 400 threaded, I think).
That's a mostly independent consideration, but it seems fair enough.
Can we check the exact values rather than relying on "I think"?
That's my reading of
http://httpd.apache.org/docs/2.0/mod/mpm_common.html#maxclients
Peter Eisentraut <peter_e@gmx.net> writes:
There is a compromise that I think we cannot make. For production
deployment, shared buffers are typically sized at about 10% to 25% of
available phyiscal memory. I don't think we want to have a default
installation of PostgreSQL that takes 10% or more of memory just like
that. It just doesn't look good.
The fundamental issue there is "box dedicated to (one instance of)
Postgres" versus "box serves multiple uses". If you don't know what
fraction of the machine resources you're supposed to take up, it's
difficult to be very smart. I think that we have to default to a
socially friendly "don't eat the whole box" position ...
regards, tom lane
Peter Eisentraut wrote:
Andrew Dunstan wrote:
I accept the "run from init.d" argument. So then, is there a case for
increasing the limits that initdb works with, to reflect the steep
rise we have seen in typically available memory at the low end?
There is a compromise that I think we cannot make. For production
deployment, shared buffers are typically sized at about 10% to 25% of
available phyiscal memory. I don't think we want to have a default
installation of PostgreSQL that takes 10% or more of memory just like
that. It just doesn't look good.
I have a single instance of apache running on this machine. It's not
doing anything, but even so it's consuming 20% of physical memory. By
contrast, my 3 postmasters are each consuming 0.5% of memory. All with
default settings. I don't think we are in any danger of looking bad for
being greedy. If anything we are in far greater danger of looking bad
from being far too conservative and paying a performance price for that.
There's nothing magical about the numbers we use.
So the question whether initdb should by default consider up to 1000 or
up to 4000 buffers is still worth discussion, but doesn't solve the
tuning issue to a reasonable degree.
True, but that doesn't mean it's not worth doing anyway.
cheers
andrew
Hello All,
Please allow me to put a disclaimer, I am no serious PG hacker,
but would it be possible to allow for a simple config script to be run
(which would work even via /etc/init.d) which could be used to generate a
config file for initdb, which initdb could read and do its thing?
This script could say do you wish to do a manual adjustment or
accept the default values, and then initdb could feed off that file. Does
this create too much work, or is it disadvantageous?
Cheers,
Aly.
--
Aly S.P Dharshi
aly.dharshi@telus.net
"A good speech is like a good dress
that's short enough to be interesting
and long enough to cover the subject"
On Thu, Sep 08, 2005 at 09:54:59AM +0800, Christopher Kings-Lynne wrote:
I think we should just do what MySQL does and include:
postgresql.conf
postgresql-large.conf
postgresql-huge.conf
I do that, in the package of PG I distribute with my application. I
tell the user that they should use it in the installation documentation,
as part of the installation script, in the performance tuning documentation,
in the maintenance documentation, in the runbook, on the website, and
in the application's online help.
I also mention it to customers by phone, email and on occasion IRC or IM
when they're about to install it or have just installed it. I also mention
it to them any time they call about performance problems.
These are technically literate customers working for large ISPs, with
significant local sysadmin and DBA support, so the concept is not beyond them.
Yet when I ssh in to one of their servers only about 1 in 3 is running
with anything other than the default postgresql.conf.
Just a data point. If it works, most people won't think to fix it.
Cheers,
Steve
Steve Atkins wrote:
These are technically literate customers working for large ISPs, with
significant local sysadmin and DBA support, so the concept is not beyond them.
Yet when I ssh in to one of their servers only about 1 in 3 is running
with anything other than the default postgresql.conf.
Just a data point. If it works, most people won't think to fix it.
That's why I think we need a range of measures, one of which would be to
use better defaults in initdb. We are using the same numbers that were
there at least 2 releases ago, IIRC. But the world has changed some in
that time.
cheers
andrew
On 2005-09-08, Tom Lane <tgl@sss.pgh.pa.us> wrote:
initdb is really the wrong place for this anyway, because in many
situations (RPM installations for instance) initdb is run behind the
scenes with no opportunity for user interaction. We should be doing
our best to remove options from initdb, not add them.
Running initdb behind the scenes is a proven dangerous practice; why
encourage it?
--
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services
Folks,
Help on the Configurator is actively solicited. I really think this is a
better solution for this problem.
http://www.pgfoundry.org/projects/configurator
--
Josh Berkus
Aglio Database Solutions
San Francisco
Josh Berkus wrote:
Folks,
Help on the Configurator is actively solicited. I really think this is a
better solution for this problem.
http://www.pgfoundry.org/projects/configurator
I don't agree, for several reasons.
1. Steve has already told us most of his clients just go with the defaults
2. We don't have to pick a winner; improving initdb wouldn't obviate the
need for configurator
3. It's a cop-out. I think there's a reasonable expectation that we will
by default use some settings that work reasonably in typical cases.
Inviting people to use an add-on tool to tune postgres after initdb is
the sort of thing that gets postgres a bad name. We need to find some
sort of sweet spot between being machine hogs and being so conservative
that out of the box we run like a dog for typical users. Initdb already
has adaptive rules - look at the source - and Tom suggests adding
another set for max_fsm_pages. All I'm doing is to suggest that we need
to tweak those.
cheers
andrew
Andrew - Supernews wrote:
Running initdb behind the scenes is a proven dangerous practice
Please elaborate.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
Andrew Dunstan wrote:
I have a single instance of apache running on this machine. It's not
doing anything, but even so it's consuming 20% of physical memory. By
contrast, my 3 postmasters are each consuming 0.5% of memory. All
If I see this right, my Apache, running at default settings, uses only
3% of memory and PostgreSQL uses about 2%, so whoever made up the
default settings for this machine would probably think we are currently
about in the right ballpark.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
Andrew - Supernews <andrew+nonews@supernews.com> writes:
On 2005-09-08, Tom Lane <tgl@sss.pgh.pa.us> wrote:
initdb is really the wrong place for this anyway, because in many
situations (RPM installations for instance) initdb is run behind the
scenes with no opportunity for user interaction. We should be doing
our best to remove options from initdb, not add them.
Running initdb behind the scenes is a proven dangerous practice; why
encourage it?
I don't see anything particularly dangerous about it.
regards, tom lane
Christian,
Regarding Configurator, has anything been done yet, or is it in the
planning stage?
Yes, I have a spreadsheet mapping the values we want to configure for 8.0.
Dave Cramer has done a partial implementation in Java using Drools; the
perl implementation is lagging rather further behind.
--
--Josh
Josh Berkus
Aglio Database Solutions
San Francisco
On 2005-09-08, Peter Eisentraut <peter_e@gmx.net> wrote:
Andrew - Supernews wrote:
Running initdb behind the scenes is a proven dangerous practice
Please elaborate.
Example instance:
http://archives.postgresql.org/pgsql-hackers/2004-12/msg00851.php
More generally, you risk running initdb and doing a normal database
startup despite missing filesystems (assuming your db is substantial
and important enough that you don't keep it in /var or /usr). There are
a number of ways that this can bite you, whether due to thinking that
the database is up when it really isn't usable, or subsequently mounting
over the new data dir, or any number of other potential issues.
A missing data directory on startup should mean "something is wrong", not
"oh look, I'll run initdb and start up anyway".
--
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services
Andrew - Supernews wrote:
On 2005-09-08, Peter Eisentraut <peter_e@gmx.net> wrote:
Andrew - Supernews wrote:
Running initdb behind the scenes is a proven dangerous practice
Please elaborate.
Example instance:
http://archives.postgresql.org/pgsql-hackers/2004-12/msg00851.php
If you run your database on NFS, your warranty is void.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
On 2005-09-08, Peter Eisentraut <peter_e@gmx.net> wrote:
Andrew - Supernews wrote:
On 2005-09-08, Peter Eisentraut <peter_e@gmx.net> wrote:
Andrew - Supernews wrote:
Running initdb behind the scenes is a proven dangerous practice
Please elaborate.
Example instance:
http://archives.postgresql.org/pgsql-hackers/2004-12/msg00851.php
If you run your database on NFS, your warranty is void.
NFS has nothing to do with it.
--
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services
-----Original Message-----
From: pgsql-hackers-owner@postgresql.org
[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of
Andrew - Supernews
Sent: 09 September 2005 08:16
To: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] initdb profiles
On 2005-09-08, Peter Eisentraut <peter_e@gmx.net> wrote:
Andrew - Supernews wrote:
On 2005-09-08, Peter Eisentraut <peter_e@gmx.net> wrote:
Andrew - Supernews wrote:
Running initdb behind the scenes is a proven dangerous practice
Please elaborate.
Example instance:
http://archives.postgresql.org/pgsql-hackers/2004-12/msg00851.php
If you run your database on NFS, your warranty is void.
NFS has nothing to do with it.
Well, it sorta did in that case, but I see where you're coming from.
What does have something to do with it is that iirc it was the rc.pgsql
script that ran initdb in the background at boot time when it didn't
find the data directory, so perhaps your statement would be more
accurate as:
"Automatically running initdb behind the scenes at system startup is a
proven dangerous practice"
We've distributed hundreds of thousands of copies of pgInstaller which
initdb's behind the scenes and never had any reported problems.
Regards, Dave
Dave Page wrote:
perhaps your statement would be more accurate as:
"Automatically running initdb behind the scenes at system startup is a
proven dangerous practice"
We've distributed hundreds of thousands of copies of pgInstaller which
initdb's behind the scenes and never had any reported problems.
And anyway you need to come up with a reasonable alternative for
packagers, rather than just say "don't do this". The only one I can
think of is to run initdb as part of a package postinstall, although
packagers and especially distro preparers might find that more than
something of a nuisance.
cheers
andrew
On Thu, Sep 08, 2005 at 08:29:38PM -0000, Andrew - Supernews wrote:
On 2005-09-08, Peter Eisentraut <peter_e@gmx.net> wrote:
Andrew - Supernews wrote:
Running initdb behind the scenes is a proven dangerous practice
Please elaborate.
Example instance:
http://archives.postgresql.org/pgsql-hackers/2004-12/msg00851.php
More generally, you risk running initdb and doing a normal database
startup despite missing filesystems (assuming your db is substantial
and important enough that you don't keep it in /var or /usr). There are
a number of ways that this can bite you, whether due to thinking that
the database is up when it really isn't usable, or subsequently mounting
over the new data dir, or any number of other potential issues.
A missing data directory on startup should mean "something is wrong", not
"oh look, I'll run initdb and start up anyway".
I think we read entirely different things into "behind the scenes".
I have an installer script that's run to install a software
package. It runs an initdb to create the database it uses behind the
scenes.
Running initdb as part of an installation process is a very different
scenario to randomly running it whenever you think something may not
be quite right (although that is the pattern used by many other
daemon startup scripts on the OS in question, so it's at least
consistent).
Cheers,
Steve
On Thu, Sep 08, 2005 at 03:43:17AM +0200, Peter Eisentraut wrote:
What I would like to see is that initdb would end with saying that the
system is not really tuned and that I should run pg-some-program to
improve that. pg-some-program would analyze my system, ask me a few
questions, and then output a suggested configuration (or apply it right
away). Again, the challenge is to write that program.
FYI, http://pgfoundry.org/projects/configurator/
--
Jim C. Nasby, Sr. Engineering Consultant jnasby@pervasive.com
Pervasive Software http://pervasive.com work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461
Jim C. Nasby wrote:
On Thu, Sep 08, 2005 at 03:43:17AM +0200, Peter Eisentraut wrote:
What I would like to see is that initdb would end with saying that the
system is not really tuned and that I should run pg-some-program to
improve that. pg-some-program would analyze my system, ask me a few
questions, and then output a suggested configuration (or apply it right
away). Again, the challenge is to write that program.
Jim,
it's been referred to several times in this debate.
It might do what Peter wants, but for reasons already explained I don't
see it as a substitute for getting initdb to generate more realistic
settings.
cheers
andrew
On Thursday 08 September 2005 13:16, Andrew Dunstan wrote:
Initdb already
has adaptive rules - look at the source - and Tom suggests adding
another set for max_fsm_pages. All I'm doing is to suggest that we need
to tweak those.
I'm curious how this could work... istm it's fairly hard to predict a
reasonable value for max_fsm_pages before you ever create a single database
in your cluster... I could perhaps see this as becoming self tuning with an
integrated autovacuum, where autovacuum stores the values needed for this and
max_fsm_relations so that upon any restart these values get automatically
updated to a reasonable number. Even more ideal would be to not have them
require restart at all, but that's yet another level of magic...
--
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL
Robert Treat <xzilla@users.sourceforge.net> writes:
On Thursday 08 September 2005 13:16, Andrew Dunstan wrote:
Initdb already
has adaptive rules - look at the source - and Tom suggests adding
another set for max_fsm_pages. All I'm doing is to suggest that we need
to tweak those.
I'm curious how this could work... istm it's fairly hard to predict a
reasonable value for max_fsm_pages before you ever create a single database
in your cluster... I could perhaps see this as becoming self tuning with an
integrated autovacuum, where autovacuum stores the values needed for this and
max_fsm_relations so that upon any restart these values get automatically
updated to a reasonable number. Even more ideal would be to not have them
require restart at all, but that's yet another level of magic...
It'd be nice to get out from under the fixed-size-shmem restriction, but
I don't know any very portable way to do that. In the meantime, trying
to automatically change the size parameters as above seems a tad
dangerous. What if they get updated to values that prevent the
postmaster from starting because it exceeds SHMMAX? Failing during
initdb is one thing, but not coming back after a restart is bad.
The thought behind my suggestion was that the current max_fsm_pages
default of 20000 pages is enough to track free space in a database of
maybe a few hundred megabytes. The other defaults are sized
appropriately for machines with about that much in main memory. This
doesn't seem to add up :-(. The default max_fsm_pages probably should
be about ten times bigger just to bring it in balance with the other
defaults ... after that we could talk about increasing the defaults
across-the-board.
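Tom's sizing argument, made explicit (8 kB is the default PostgreSQL block size; the helper name is made up):

```python
# With the standard 8 kB block size, max_fsm_pages = 20000 can track free
# space in at most ~156 MB of table data, which matches the "few hundred
# megabytes" figure; ten times bigger covers roughly 1.5 GB.
BLOCK_SIZE = 8192  # default PostgreSQL block size, in bytes

def fsm_coverage_mb(max_fsm_pages, block_size=BLOCK_SIZE):
    """How much table data (in MB) max_fsm_pages can track free space for."""
    return max_fsm_pages * block_size / (1024 * 1024)

print(fsm_coverage_mb(20_000))   # current default: 156.25 MB
print(fsm_coverage_mb(200_000))  # "ten times bigger": 1562.5 MB
```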
regards, tom lane
Andrew Dunstan wrote:
And anyway you need to come up with a reasonable alternative for
packagers, rather than just say "don't do this.". The only one I can
think of is to run initdb as part of a package postinstall, although
packagers and especially distro preparers might find that more than
something of a nuisance.
I think running initdb in the package postinstall is the best place, and
many packages already do that. I gather from this discussion that some
packages run initdb from the init script, and that the concerns that
have been expressed are especially about that setup, and I certainly
agree that this is not the optimal arrangement.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
Tom Lane <tgl@sss.pgh.pa.us> writes:
It'd be nice to get out from under the fixed-size-shmem restriction, but
I don't know any very portable way to do that.
Without knowing that part of the code at all it seems to me the logical
approach would be to make the fsm steal its pages out of the shared buffers
allocation. That is, you specify a total amount of shared memory to allocate
and Postgres decides how much of it to use for shared buffers and how much for
fsm.
--
greg
Tom Lane wrote:
The thought behind my suggestion was that the current max_fsm_pages
default of 20000 pages is enough to track free space in a database of
maybe a few hundred megabytes. The other defaults are sized
appropriately for machines with about that much in main memory. This
doesn't seem to add up :-(. The default max_fsm_pages probably should
be about ten times bigger just to bring it in balance with the other
defaults ... after that we could talk about increasing the defaults
across-the-board.
Ok, how about this? I based the numbers on your 10*current suggestion
and some linear scaling:
When we test n connections currently, we use shared buffers of n*5. We
could add in a setting of max_fsm_pages = n * 1000 in line with that
arithmetic - not sure if it's worth it.
When we test n shared buffers, let's add in a max_fsm_pages setting of n
* 200.
Another alternative I thought might be better would be that instead of
fixing the default max_fsm_pages at 20000, we set the default at a fixed
ratio (say 200:1) to shared_buffers. Not sure how easy that is to do via
the GUC mechanism.
Lastly, I would suggest that we increase the limits we try modestly -
adding in 400, 350, 300, 250, 200, and 150 to the number of connections
tried, and perhaps 3000, 2500, 2000 and 1500 to number of buffers tried.
These numbers aren't entirely plucked out of the air. The number of
connections is picked to match the number of clients a default apache
setup can have under a hybrid MPM, and the number of shared buffers is
picked to be somewhat less than the 10% on a modest machine that Peter
thought would be too much.
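Putting the proposal in one place (the candidate lists and the 200:1 ratio are taken from the text above; the function name is invented):

```python
# The concrete numbers from the proposal above, collected in one place.
# derived_max_fsm_pages() is an invented name for the suggested 200:1 rule.

NEW_CONN_TRIES = [400, 350, 300, 250, 200, 150,   # proposed additions
                  100, 50, 40, 30, 20, 10]        # current candidates
NEW_BUF_TRIES = [3000, 2500, 2000, 1500,          # proposed additions
                 1000, 900, 800, 700, 600, 500,
                 400, 300, 200, 100, 50]          # current candidates

def derived_max_fsm_pages(shared_buffers, ratio=200):
    """Tie max_fsm_pages to shared_buffers at a fixed 200:1 ratio."""
    return shared_buffers * ratio

# At today's best-case default of 1000 buffers this yields Tom's
# "about ten times bigger" figure:
print(derived_max_fsm_pages(1000))  # -> 200000
```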
cheers
andrew
On Sun, Sep 11, 2005 at 12:15:01PM -0400, Greg Stark wrote:
It'd be nice to get out from under the fixed-size-shmem restriction, but
I don't know any very portable way to do that.
Without knowing that part of the code at all it seems to me the logical
approach would be to make the fsm steal its pages out of the shared buffers
allocation. That is, you specify a total amount of shared memory to allocate
and Postgres decides how much of it to use for shared buffers and how much for
fsm.
FWIW, I know this is how DB2 does things, and I think Oracle's the same.
We probably still want some kind of limit so it doesn't blow the buffer
cache completely out.
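A toy illustration of Greg's budget-split idea with Jim's cap bolted on; the cap fraction and per-page costs are invented for illustration, not how any of these systems actually size things:

```python
# Carve one total shared-memory budget into buffer pages plus FSM slots,
# capping the FSM so it can't blow the buffer cache completely out.
BLOCK_SIZE = 8192          # default PostgreSQL block size, in bytes
FSM_BYTES_PER_PAGE = 6     # rough per-slot FSM cost (approximation)

def split_shared_memory(total_bytes, fsm_pages_wanted, cap_fraction=0.10):
    """Give the FSM what it asks for, capped at cap_fraction of the budget;
    the rest of the budget becomes shared buffers."""
    fsm_bytes = min(fsm_pages_wanted * FSM_BYTES_PER_PAGE,
                    total_bytes * cap_fraction)
    n_buffers = (total_bytes - fsm_bytes) // BLOCK_SIZE
    return int(n_buffers), int(fsm_bytes // FSM_BYTES_PER_PAGE)

# A 64 MB budget with room for the requested 200,000 FSM pages:
print(split_shared_memory(64 * 1024 * 1024, 200_000))
```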
--
Jim C. Nasby, Sr. Engineering Consultant jnasby@pervasive.com
Pervasive Software http://pervasive.com work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461