PostgreSQL and memory usage

Started by Dann Corbit · about 23 years ago · 8 messages
#1Dann Corbit
DCorbit@connx.com

I have a machine with 4 CPU's and 2 gigabytes of physical ram.

I would like to get PostgreSQL to use as much memory as possible. I
can't seem to get PostgreSQL to use more than 100 megabytes or so.

How can I optimize the use of PostgreSQL to get the maximum throughput
in a configuration like that?

Are there any memory usage/tuning documents I can read?

#2Tom Lane
tgl@sss.pgh.pa.us
In reply to: Dann Corbit (#1)
Re: PostgreSQL and memory usage

"Dann Corbit" <DCorbit@connx.com> writes:

I have a machine with 4 CPU's and 2 gigabytes of physical ram.
I would like to get PostgreSQL to use as much memory as possible. I
can't seem to get PostgreSQL to use more than 100 megabytes or so.

You should not assume that more is necessarily better.

In many practical situations, it's better to leave the majority of RAM
free for kernel disk caching.

regards, tom lane

#3Dann Corbit
DCorbit@connx.com
In reply to: Tom Lane (#2)
Re: PostgreSQL and memory usage

-----Original Message-----
From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
Sent: Monday, January 06, 2003 7:30 PM
To: Dann Corbit
Cc: pgsql-hackers@postgresql.org; pgsql-general@postgresql.org
Subject: Re: [GENERAL] PostgreSQL and memory usage

"Dann Corbit" <DCorbit@connx.com> writes:

I have a machine with 4 CPU's and 2 gigabytes of physical ram. I would
like to get PostgreSQL to use as much memory as possible. I can't
seem to get PostgreSQL to use more than 100 megabytes or so.

You should not assume that more is necessarily better.

In many practical situations, it's better to leave the
majority of RAM free for kernel disk caching.

In any case, I would like to know what knobs and dials are available to
turn and what each of them means.
In at least one instance, the whole database should fit into memory. I
would think that would be faster than any sort of kernel disk caching.

#4scott.marlowe
scott.marlowe@ihs.com
In reply to: Dann Corbit (#3)
Re: PostgreSQL and memory usage

Hi Dann, I took hackers out of the list as this isn't really a hacking
issue, but I added in performance as this definitely applies there.

There are generally two areas of a database server you have to reconfigure
to use that extra memory. The first is the kernel's shared memory
settings.

On a linux box that has sysctl installed this is quite easy. If it isn't
installed, install it, as it's much easier to manipulate your kernel's
settings using sysctl than it is by editing rc.local.

First, get root. Then, use 'sysctl -a|grep shm' to get a list of all the
shared memory settings handled by sysctl.

On a default redhat install, we'll get something like this:

kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.shmmax = 33554432

On my bigger box, it's been set up to have this:

kernel.shmmni = 4096
kernel.shmall = 32000000
kernel.shmmax = 256000000

To make changes that stick around, edit the /etc/sysctl.conf file to have
lines that look kinda like those above. To make the changes to the
/etc/sysctl.conf file take effect, use 'sysctl -p'.
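Putting that together, the relevant chunk of /etc/sysctl.conf ends up
looking something like this (the numbers are just the ones from my bigger
box above, size them to taste):

# /etc/sysctl.conf -- SysV shared memory limits for postgresql
kernel.shmmni = 4096        # max number of shared memory segments
kernel.shmall = 32000000    # total shared memory allowed (counted in pages on linux)
kernel.shmmax = 256000000   # max size of a single segment, in bytes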

Next, as the postgres user, edit $PGDATA/postgresql.conf and increase the
number of shared buffers. On most postgresql installations this number is
multiplied by 8k to get the amount of ram being allocated, since
postgresql allocates shared buffers in blocks the same size as the ones it
uses on disk. To allocate 256 Megs of buffers (that's what I use, seems
like a nice large chunk, but doesn't starve my other processes or system
file cache) set it to 32768.
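To spell the arithmetic out (this assumes the stock 8k block size; if you
compiled with a different BLCKSZ, scale to match):

# in $PGDATA/postgresql.conf
# 32768 buffers * 8192 bytes/buffer = 268435456 bytes = 256 Megs
shared_buffers = 32768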

Be careful how big you make your sort memory. I haven't seen a great
increase in speed from anything over 8 or 16 megs, while memory usage can
skyrocket under heavy parallel load with lots of sorts, since the sort
memory setting is a PER SORT maximum.
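For reference, sort_mem is set in kilobytes, so a setting in the range I'm
describing looks like this:

# in $PGDATA/postgresql.conf
# kilobytes available to EACH sort in EACH backend -- 8 Megs here
sort_mem = 8192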

Then do the old pg_ctl reload and you should be cooking with gas.
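That is, as the postgres user (you can drop the -D if PGDATA is set in
your environment):

pg_ctl reload -D $PGDATA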

#5Tom Lane
tgl@sss.pgh.pa.us
In reply to: scott.marlowe (#4)
Re: PostgreSQL and memory usage

"scott.marlowe" <scott.marlowe@ihs.com> writes:

Then do the old pg_ctl reload and you should be cooking with gas.

One correction: altering the number of shared buffers requires an actual
postmaster restart.
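That is, something more like this than a plain reload (a fast-mode
shutdown is fine for the purpose; drop the -D if PGDATA is set):

pg_ctl restart -D $PGDATA -m fast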

regards, tom lane

#6Fred Moyer
fred@digicamp.com
In reply to: scott.marlowe (#4)
Re: [PERFORM] PostgreSQL and memory usage

To put this usage of shared buffers in perspective, would you mind kindly
letting us know your total amount of system ram? Without that it's hard to
tell what percentage of memory is being used as shared buffers (assuming
postgresql is the primary application being run here).

I have always taken the 'more is better' approach with shared buffers but
would like to know what in terms of percentages other people are using. I
have been using 50% of system ram (2 out of 4 gigs) for shared buffers
(and corresponding shmmax values) and it has been working great. I
haven't tweaked the kernel yet to get more than 2 gigs shmmax so I can't
speak for a setup using over 50%. I've been using between 256 and 512
megs of sort memory, which sounds like it may be a bit much from what I'm
hearing here.

Thanks

Fred

#7mlw
pgsql@mohawksoft.com
In reply to: Dann Corbit (#1)
Re: PostgreSQL and memory usage

Dann Corbit wrote:

I have a machine with 4 CPU's and 2 gigabytes of physical ram.

I would like to get PostgreSQL to use as much memory as possible. I
can't seem to get PostgreSQL to use more than 100 megabytes or so.

How can I optimize the use of PostgreSQL to get the maximum throughput
in a configuration like that?

Are there any memory usage/tuning documents I can read?

I'm not sure if there is a document, but there are some things you can do.

First, size the shared memory pool. I've been told by Tom that too much
is actually slower, but making it MUCH bigger than the default does help
a lot.

shared_buffers = 2048
(That's 2048 * 8k blocks = 16 Megs. Maybe larger, experiment.)

Sort memory is useful for large queries that do sorting; it is
allocated as needed on a per-process basis. If you run large queries
that sort, raising it can speed you up by keeping sorts in memory
instead of spilling to disk.

sort_mem = 16384
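Since it's a per-process setting, you can also raise it for just one
session that you know is about to do a big sort, rather than globally; for
example (the value here is only illustrative):

SET sort_mem = 65536;  -- 64 Megs of sort space, this session only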

OK, lastly, do not despair if PostgreSQL seems not to be using as much
memory as is in your system. Don't forget the OS disk cache is important
too.

#8scott.marlowe
scott.marlowe@ihs.com
In reply to: Fred Moyer (#6)
Re: [PERFORM] PostgreSQL and memory usage

Oh yeah, sorry. My box has 1.5 gig ram, but it is an application server
that runs things other than just postgresql. It also runs:

Apache
Real Server
OpenLDAP
Squid
Samba

with all those services fired up and running, as well as postgresql with
256 Megs of shared buffers, I have about 900 megs of cache and 100 megs of
free ram. Since a lot of data is flying off the hard drives at any given
time, favoring one service (the database) over the others makes little
sense for me, and I've found little or no performance gain from 256 Megs
of shared buffers over say 128 meg or 64 meg.

We run about 50 databases averaging about 25 megs each or so backed up
(on the machine's hard drives it's more like 50 to 75 Megs apiece), so
there's no way for ALL the data to fit into memory.
