Shared Buffer Size

Started by preetika tyagi · almost 15 years ago · 13 messages · general
#1 preetika tyagi
preetikatyagi@gmail.com

Hi All,

I am a little confused about the internal workings of PostgreSQL. There is a
parameter shared_buffers in postgresql.conf, and I am assuming it is used for
buffer management in PostgreSQL. If a new page needs to be brought into the
buffer and the size exceeds the shared_buffers limit, a victim dirty page
will be written back to disk.

However, I have read on many links that PostgreSQL depends on the OS for
caching. (http://www.varlena.com/GeneralBits/Tidbits/perf.html#shbuf)

So my question is: is the actual limit of the shared buffer defined by the
OS, or by the shared_buffers parameter in postgresql.conf, when deciding
whether a victim dirty page needs to be selected for a disk write?

Thanks!

#2 Derrick Rice
derrick.rice@gmail.com
In reply to: preetika tyagi (#1)
Re: Shared Buffer Size

Check out the "Inside the PostgreSQL Buffer Cache" link here:

http://projects.2ndquadrant.com/talks

Thanks to Greg Smith (active here).

Derrick


#3 preetika tyagi
preetikatyagi@gmail.com
In reply to: Derrick Rice (#2)
Re: Shared Buffer Size

Hi Derrick,

Thank you for your response. I saw this document and am trying to understand
the "Interaction with the Operating System Cache" section it mentions.

I have the following question: how does the shared buffer in Postgres rely
on the operating system cache? Suppose my RAM is 8 GB and shared_buffers is
24 MB in Postgres, there are some dirty pages in shared_buffers, and I need
to write a dirty page back to disk to bring in a new page. What happens in
this case? Will the dirty page be written to disk, given the shared_buffers
size of 24 MB? Or will it not be written, and stay in the 8 GB of RAM?

Thanks,
Preetika


#4 Carl von Clausewitz
clausewitz45@gmail.com
In reply to: preetika tyagi (#3)
Re: Shared Buffer Size

Hi Preetika,

A few months ago, when I installed my first PostgreSQL, I had the same
problem. I tried to find information about optimal memory configuration, but
there wasn't any "optimal memory setting calculator" on the internet, just
some guidance in the PostgreSQL documentation
(http://www.postgresql.org/docs/9.0/interactive/kernel-resources.html#SYSVIPC).
I run FreeBSD 8.2 AMD64 with 8 GB of memory (this server hosts just
PostgreSQL and a little PHP app with 2 users), and I have these settings in
postgresql.conf (which are not the default):

listen_addresses = '192.168.1.1' # what IP address(es) to listen on;
port = 5432 # (change requires restart)
max_connections = 200 # (There are 20 user, with Microsoft Access client and
ODBC connections... (min 6 connection / user))

shared_buffers = 1900MB # min 128kB
temp_buffers = 64MB # min 800kB
work_mem = 64MB # min 64kB
maintenance_work_mem = 1024MB # min 1MB
max_stack_depth = 64MB # min 100kB

shared_preload_libraries = '$libdir/plpgsql.so' # (change requires restart)

checkpoint_segments = 32 # in logfile segments, min 1, 16MB each
checkpoint_timeout = 15min # range 30s-1h
checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0

effective_cache_size = 4096MB

track_activities = on
track_counts = on
#track_functions = none # none, pl, all
#track_activity_query_size = 1024 # (change requires restart)

update_process_title = off
deadlock_timeout = 1s
max_locks_per_transaction = 256 # min 10

And the settings in the BSD sysctl.conf that are relevant for these
PostgreSQL settings:
kern.ipc.shmall=524288
kern.ipc.shmmax=2147483648
kern.ipc.semmap=512
kern.ipc.shm_use_phys=1

And the last one is the BSD loader.conf, with the entries relevant for these
PostgreSQL settings:
kern.ipc.semmni=512
kern.ipc.semmns=1024
kern.ipc.semmnu=512
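
A back-of-envelope cross-check of the kernel limit against shared_buffers (a
sketch only; the 100 MB overhead figure is an assumption, not exact
PostgreSQL shared-memory accounting):

```python
# Sanity check: the System V shmmax must cover shared_buffers plus some
# overhead, or the postmaster fails to start at boot.
def shmmax_ok(shared_buffers_mb, shmmax_bytes, overhead_mb=100):
    return (shared_buffers_mb + overhead_mb) * 1024 * 1024 <= shmmax_bytes

# shared_buffers = 1900MB against kern.ipc.shmmax = 2147483648 (2 GB):
print(shmmax_ok(1900, 2147483648))   # fits, with roughly 48 MB to spare
```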

These settings are based on my experience, after many reboots, restarts, and
config reloads. I hope this helps you, and I welcome any comments if I need
to change anything else :-)

Thanks,
Carl

#5 preetika tyagi
preetikatyagi@gmail.com
In reply to: Carl von Clausewitz (#4)
Re: Shared Buffer Size

Hi Carl,

Thank you for your response; however, I am trying to understand the role of
shared_buffers. I notice you have increased this value in your settings.

I am just trying to understand the meaning of this statement:
"PostgreSQL depends on the OS for caching."
(http://www.varlena.com/GeneralBits/Tidbits/perf.html#shbuf)

My question is how does it rely? What is happening internally? It would be
great if someone could explain with a simple example.

Thanks!


#6 Greg Smith
gsmith@gregsmith.com
In reply to: Carl von Clausewitz (#4)
Re: Shared Buffer Size

On 05/28/2011 04:42 AM, Carl von Clausewitz wrote:

I tried to find information about optimal memory configuration, but there
wasn't any "optimal memory setting calculator" on the internet, just some
guidance in the PostgreSQL documentation

There's a simple one at https://github.com/gregs1104/pgtune and the guide at
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server covers many of
the most popular questions here too. The right proportion of memory settings
is very dependent on workload, which makes any sort of calculator hard to
create. pgtune takes a workload type as an input to help with that, but the
settings that come out should be considered starting values only. You'll
need to monitor how much memory is actually being used by the server, as
well as the output from parameters like log_temp_files, to know for sure
whether things are working well.

--
Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books

#7 Greg Smith
gsmith@gregsmith.com
In reply to: preetika tyagi (#3)
Re: Shared Buffer Size

On 05/27/2011 05:33 PM, preetika tyagi wrote:

How does the shared buffer in Postgres rely on the operating system cache?
Suppose my RAM is 8 GB and shared_buffers is 24 MB in Postgres, there are
some dirty pages in shared_buffers, and I need to write a dirty page back to
disk to bring in a new page. What happens in this case? Will the dirty page
be written to disk, given the shared_buffers size of 24 MB? Or will it not
be written, and stay in the 8 GB of RAM?

In this case, the dirty page will be written to the operating system,
which will initially store it in its own write cache. Eventually the
operating system will write that page out to disk from that cache. The
delay before that happens can easily be 30 seconds or more on operating
systems like Linux. If PostgreSQL needs to read that data again, it may
find it still in the OS cache, in which case the read will happen very
quickly. Eventually, the dirty data will be written to disk, and if
it's not used for long enough the OS cache memory will be re-used for
something else.

When you read a page into the database, and you don't dirty it, it might
be evicted from the database cache without a write. If the database
needs that page again, it will ask the OS for it. If the OS still has
it in its own read cache, it may just read it from the cache again,
without a real disk read happening.
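
The two layers of caching described above can be illustrated with a toy
model (a sketch of my own, not PostgreSQL internals): evicting a dirty page
from the small shared_buffers pool hands it to the OS write cache, and the
physical disk write happens later, when the OS flushes.

```python
# Toy two-tier cache: a tiny "shared_buffers" pool in front of an OS page
# cache, in front of a "disk". Evicting a dirty page writes it to the OS
# cache only; the disk copy changes when the OS eventually flushes.
class TwoTierCache:
    def __init__(self, shared_buffer_slots):
        self.slots = shared_buffer_slots   # capacity of "shared_buffers"
        self.shared = {}                   # page_id -> (data, dirty?)
        self.os_cache = {}                 # OS write/read cache
        self.disk = {}                     # persistent storage

    def read(self, page_id):
        if page_id in self.shared:                 # hit in shared_buffers
            return self.shared[page_id][0]
        if len(self.shared) >= self.slots:
            self._evict_one()
        # Miss: ask the OS; a still-cached page avoids a real disk read.
        data = self.os_cache.get(page_id, self.disk.get(page_id))
        self.shared[page_id] = (data, False)
        return data

    def write(self, page_id, data):
        self.read(page_id)                         # ensure page is resident
        self.shared[page_id] = (data, True)        # mark dirty

    def _evict_one(self):
        page_id, (data, dirty) = next(iter(self.shared.items()))
        if dirty:
            self.os_cache[page_id] = data          # handed to OS, not disk yet
        del self.shared[page_id]

    def os_flush(self):
        self.disk.update(self.os_cache)            # OS writes back eventually
        self.os_cache.clear()
```

With one buffer slot, dirtying page A and then reading page B evicts A into
the OS cache; the disk copy of A only changes after `os_flush()`.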

--
Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books

#8 Toby Corkindale
toby.corkindale@strategicdata.com.au
In reply to: Carl von Clausewitz (#4)
Re: Shared Buffer Size

On 28/05/11 18:42, Carl von Clausewitz wrote:

A few months ago, when I installed my first PostgreSQL, I had the same
problem. I tried to find information about optimal memory configuration, but
there wasn't any "optimal memory setting calculator" on the internet, just
some guidance in the PostgreSQL documentation
(http://www.postgresql.org/docs/9.0/interactive/kernel-resources.html#SYSVIPC).
I run FreeBSD 8.2 AMD64 with 8 GB of memory (this server hosts just
PostgreSQL and a little PHP app with 2 users), and I have these settings in
postgresql.conf (which are not the default):

[snip]

work_mem = 64MB # min 64kB
maintenance_work_mem = 1024MB # min 1MB
max_stack_depth = 64MB # min 100kB

Just a warning - but be careful about setting work_mem to high values. The
actual memory used by a query can be many times that value, depending on the
complexity of your query.

In a particular query I saw last week, we were regularly exceeding the
available memory on a server, because the query required 80 times the value
of work_mem, and work_mem had been set to a high value.

Reducing work_mem back to just 4MB reduced memory usage by a couple of
gigabytes and had almost no effect on the execution time. (Actually, it was
marginally faster, probably because more memory was left for the operating
system's cache.)
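
The multiplication above is simple arithmetic: work_mem caps each individual
sort or hash operation, not the query as a whole, so the worst case scales
with the number of such plan nodes (the function name and node counts here
are illustrative, not from the actual plan):

```python
# Rough upper bound (illustrative): every sort/hash node in the plan
# spends its full work_mem at the same time.
def worst_case_query_memory_mb(work_mem_mb, sort_or_hash_nodes):
    return work_mem_mb * sort_or_hash_nodes

# The reported query touched work_mem about 80 times:
print(worst_case_query_memory_mb(64, 80))  # 5120 MB, i.e. ~5 GB
print(worst_case_query_memory_mb(4, 80))   # 320 MB
```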

Toby

#9 Carl von Clausewitz
clausewitz45@gmail.com
In reply to: Toby Corkindale (#8)
Re: Shared Buffer Size

Thanks Toby, I will check it, and change it.

regards,
Carl


#10 Cédric Villemain
cedric.villemain.debian@gmail.com
In reply to: Toby Corkindale (#8)
Re: Shared Buffer Size


You may also be aware that Linux can decide to swap in order to protect its
buffer cache (depending on the strategy it is configured with), and that you
may be limited by committable memory. On a default install where swap is NOT
at least twice the RAM size, you cannot commit all the RAM you have; but
that protects the buffer cache for the non-allocatable memory.

So maybe you hit a point where your work_mem was swapped... It is
interesting anyway to have a query where a large work_mem is not better.
Would it be hard to isolate the case and make it public? In the long term it
might be a good test to add to a performance farm, if it is not based on a
non-optimal Linux configuration (I mean, if the issue *needs* the work_mem
reduction to be fixed).


--
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support

#11 Toby Corkindale
toby.corkindale@strategicdata.com.au
In reply to: Cédric Villemain (#10)
Re: Shared Buffer Size

On 30/05/11 20:41, Cédric Villemain wrote:


In this case, it was not just slowing down due to the amount of work_mem
allocated -- it was exceeding several gigabytes of memory usage and crashing
out. Lower values of work_mem allowed the query to succeed, but it used
almost 3 GB. Even lower values of work_mem could do the query in only a few
hundred MB, and were faster.

I note that if you exceed work_mem in a query, then I guess the temp files
created are cached by the VM cache, so the performance hit won't be *too*
bad?

I agree that the slowness of the 3GB version could be due to swapping or
something like that, or just due to the VM cache being eliminated, as I
suggested.

Either way - the problem was that this (machine-generated) query was
pivoting and joining many views-of-views. It's a pretty nasty query.

The key fact is that postgres (8.3) seems to allocate the full work_mem
amount every time it needs *some* work_mem - even if it could have
happily got by on just a few MB. So if your query allocates work_mem a
hundred times, it'll consume $work_mem * 100 -- or die trying.

I'm curious to know if Postgres 9.0 has improved this -- I'm going to
try re-running this query on it once I get a chance, but due to
contractual agreements this isn't quite as simple to test as you might
think.
(And running the test over a much smaller example data set might not
trigger the same query plan)
I'll get there eventually though :)

#12 Cédric Villemain
cedric.villemain.debian@gmail.com
In reply to: Toby Corkindale (#11)
Re: Shared Buffer Size

2011/5/31 Toby Corkindale <toby.corkindale@strategicdata.com.au>:


In this case, it was not just slowing down due to the amount of work_mem
allocated -- it was exceeding several gigabytes of memory usage and crashing
out. Lower values of work_mem allowed the query to succeed, but it used
almost 3G.. Even lower values of work_mem could do the query in only a few
hundred MB - and was faster.

I note that if you exceed work_mem in a query, then I guess the temp files
created are cached by the VM cache, so it's not like the performance hit
will be *too* bad?

correct.

I agree that the slowness of the 3GB version could be due to swapping or
something like that.. or just due to the VM cache being eliminated as I
suggested.

Either way - the problem was that this (machine-generated) query was
pivoting and joining many views-of-views. It's a pretty nasty query.

The key fact is that postgres (8.3) seems to allocate the full work_mem
amount every time it needs *some* work_mem - even if it could have happily
got by on just a few MB. So if your query allocates work_mem a hundred
times, it'll consume $work_mem * 100 -- or die trying.

work_mem is not allocated fully from the beginning. It is allocated
initially at XX KB (or MB), then doubled each time we are near the end of
the allocated memory. (I am unsure of details like exactly when more memory
is allocated, but the logic is this one.)
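
The growth pattern can be sketched as follows (a toy illustration, not the
actual allocator code; the 1 KB initial size is an assumption):

```python
# Toy model of the doubling strategy: start small and double the buffer
# each time it fills, never exceeding the work_mem cap.
def allocation_sizes_kb(needed_kb, initial_kb=1, work_mem_kb=64 * 1024):
    sizes = [initial_kb]
    while sizes[-1] < needed_kb and sizes[-1] < work_mem_kb:
        sizes.append(min(sizes[-1] * 2, work_mem_kb))
    return sizes

# A sort needing only ~100 KB never comes close to a 64 MB work_mem:
print(allocation_sizes_kb(100))  # [1, 2, 4, 8, 16, 32, 64, 128]
```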

I'm curious to know if Postgres 9.0 has improved this -- I'm going to try
re-running this query on it once I get a chance, but due to contractual
agreements this isn't quite as simple to test as you might think.
(And running the test over a much smaller example data set might not trigger
the same query plan)
I'll get there eventually though :)

9.0 may help you with generated queries, so it is worth testing, I think.

--
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support

#13 Cédric Villemain
cedric.villemain.debian@gmail.com
In reply to: Carl von Clausewitz (#4)
Re: Shared Buffer Size

2011/5/28 Carl von Clausewitz <clausewitz45@gmail.com>:

[snip]
shared_buffers = 1900MB # min 128kB
temp_buffers = 64MB # min 800kB
work_mem = 64MB # min 64kB
maintenance_work_mem = 1024MB # min 1MB
max_stack_depth = 64MB # min 100kB

This max_stack_depth is unsound: it should be at most the result of
`ulimit -s` minus 1MB. Unless you have a kernel built with a 64MB stack
limit, which would surprise me (common kernels have an 8MB or 16MB stack
limit).
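
That check can be scripted; for instance, the stack limit that `ulimit -s`
reports is available in Python through the standard resource module (Unix
only; the 1 MB safety margin follows the advice above, and the helper name
is my own):

```python
# Sketch: derive a safe max_stack_depth (in MB) from the kernel stack
# limit, leaving a 1 MB safety margin below `ulimit -s`.
import resource

def safe_max_stack_depth_mb():
    soft, _hard = resource.getrlimit(resource.RLIMIT_STACK)
    if soft == resource.RLIM_INFINITY:
        return None  # no limit reported; keep PostgreSQL's conservative default
    return max(soft // (1024 * 1024) - 1, 1)
```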


--
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support