How to find how much postgresql use the memory?

Started by Luki Rustianto · about 17 years ago · 5 messages · general
#1Luki Rustianto
lukirus@gmail.com

Hi all,

I wonder how to find out how much memory PostgreSQL actually uses (real, not
allocated)?

thx.

#2Laurenz Albe
laurenz.albe@cybertec.at
In reply to: Luki Rustianto (#1)
Re: How to find how much postgresql use the memory?

I wonder how to find out how much memory PostgreSQL actually uses (real, not
allocated)?

You could install the pg_buffercache contrib and

SELECT pg_size_pretty(
         CAST(current_setting('block_size') AS integer) * max(bufferid)
       )
FROM pg_buffercache
WHERE relfilenode IS NOT NULL;

to find out how much of the shared memory in the buffer pool is
actually in use.

There is some extra shared memory that is used for other purposes,
and there is private memory of the backends (the latter could be
a significant number).
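To get a rough view of that private backend memory from the shell, one sketch (on Linux with procps `ps`; note that shared memory pages are counted in every backend's RSS, so summing the RSS values overstates real use):

```shell
# Resident set size (RSS, in kB) of all postgres processes.
# Caveat: pages of the shared buffer pool that a backend has touched
# are counted in that backend's RSS, so the column double-counts them.
ps -o pid,rss,cmd -C postgres
```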

Why do you want that number?
If it is for tuning shared_buffers, that is probably a bad approach.
Even if shared_buffers is set much higher than you actually need,
sooner or later all of your shared_buffers will be used, unless
your database is tiny.

Yours,
Laurenz Albe

#3Luki Rustianto
lukirus@gmail.com
In reply to: Laurenz Albe (#2)
Re: How to find how much postgresql use the memory?

OK, I see. So what's the best way to find the optimum values for the various
memory-related settings of PostgreSQL?

On Tue, Jan 20, 2009 at 3:27 PM, Albe Laurenz <laurenz.albe@wien.gv.at> wrote:


#4Laurenz Albe
laurenz.albe@cybertec.at
In reply to: Luki Rustianto (#3)
Re: How to find how much postgresql use the memory?

Luki Rustianto wrote:

OK, I see. So what's the best way to find the optimum values for the
various memory-related settings of PostgreSQL?

How much memory is there in the machine?
Are there other programs running or is the machine dedicated to the database?
Are the queries you run complicated (order / hash large amounts of data) or simple?

A rough guideline might be:
set work_mem to the amount of memory a query will regularly need for sorting and such,
and set max_connections to the maximum number of connections you need.

Then figure out how much memory you want to dedicate to the database,
subtract work_mem * max_connections from that value, and set
shared_buffers to the result.
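As a worked example of that arithmetic (all numbers here are illustrative assumptions, not recommendations):

```shell
# Suppose 16 GB (16384 MB) is dedicated to the database, with
# work_mem = 32 MB and max_connections = 100 (example values only).
total_mb=16384
work_mem_mb=32
max_connections=100

# shared_buffers = memory for the database - work_mem * max_connections
shared_buffers_mb=$(( total_mb - work_mem_mb * max_connections ))
echo "shared_buffers = ${shared_buffers_mb}MB"   # prints: shared_buffers = 13184MB
```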

Make sure you adjust the kernel parameters accordingly, see
http://www.postgresql.org/docs/current/static/kernel-resources.html
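For instance, PostgreSQL versions of that era allocate one large System V shared memory segment at startup, so on Linux kernel.shmmax must be at least that size. A sketch with illustrative values (see the linked docs for how to size them properly):

```shell
# Raise the System V shared memory limits (run as root; values are examples).
# kernel.shmmax: max size in bytes of one segment -- must exceed
#                shared_buffers plus PostgreSQL's other shared structures.
# kernel.shmall: system-wide limit in pages (typically 4 kB each).
sysctl -w kernel.shmmax=17179869184   # 16 GB
sysctl -w kernel.shmall=4194304       # 4194304 pages * 4 kB = 16 GB
```

To make the settings persistent they would go in /etc/sysctl.conf instead.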

Tom Lane suggested in
http://archives.postgresql.org/pgsql-general/2004-02/msg00471.php
that it might be better to set shared_buffers "relatively
small" and let the filesystem cache do the buffering, so that's
another way you can go. His advice is usually good.

Yours,
Laurenz Albe

#5Scott Marlowe
scott.marlowe@gmail.com
In reply to: Laurenz Albe (#4)
Re: How to find how much postgresql use the memory?

On Wed, Jan 21, 2009 at 10:15 AM, Albe Laurenz <laurenz.albe@wien.gv.at> wrote:

Tom Lane suggested in
http://archives.postgresql.org/pgsql-general/2004-02/msg00471.php
that it might be better to set shared_buffers "relatively
small" and let the filesystem cache do the buffering, so that's
another way you can go. His advice is usually good.

Note that for transactional databases that are too large to fit the
dataset into memory (think a 16 GB machine running a 100 GB
transactional database) this is very true. The shared_buffers seldom
get reused, and you'll get faster throughput with lower shared_buffers.

In testing a 40Gig db on a 32Gig machine, I got the best pgbench
numbers with a shared_buffers setting in the hundreds of megs range.