RE: Performance tuning for linux, 1GB RAM, dual CPU?

Started by Christian Bucanac over 26 years ago · 8 messages · general
#1Christian Bucanac
christian.bucanac@mindark.com

Sure, here it comes.

/Buckis

-----Original Message-----
From: Adam Manock [mailto:abmanock@planetcable.net]
Sent: 11 July 2001 16:26
To: Christian Bucanac
Subject: RE: [GENERAL] Performance tuning for linux, 1GB RAM, dual CPU?

We should move this discussion back to the list... others may benefit.
If you're agreeable please forward your previous message to the list.

Adam

At 03:02 PM 7/11/01 +0200, you wrote:

Yes, that is right. We did
echo 805306368 >/proc/sys/kernel/shmmax

805306368 => 768MB

We could not allocate 98304 buffers as you suggest; 96498 was the maximum.
It seems the postmaster/postgres uses some of the shared memory for other things.

/Buckis
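The arithmetic above can be checked in the shell. This is a sketch only; the 8 kB figure is PostgreSQL's default block size, which is an assumption about this particular build:

```shell
# 96498 buffers of 8 kB each must fit under the 768 MB shmmax limit,
# with headroom left for the postmaster's other shared structures --
# which is why 98304 buffers would not fit.
SHMMAX=805306368                    # 768 MB, as set via /proc above
BUFFER_BYTES=$((96498 * 8192))      # what -B 96498 asks for
HEADROOM=$((SHMMAX - BUFFER_BYTES))
echo "buffers:  $BUFFER_BYTES bytes"
echo "headroom: $HEADROOM bytes"
```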

-----Original Message-----
From: Adam Manock [mailto:abmanock@planetcable.net]
Sent: 11 July 2001 14:10
To: Christian Bucanac
Subject: RE: [GENERAL] Performance tuning for linux, 1GB RAM, dual CPU?

I assume you had to bump up the shmmax and shmall
as mentioned in:

http://www.ca.postgresql.org/docs/admin/kernel-resources.html

Your parameters look good, close to what I am thinking.
I am going to try 768M (98304) for buffers and 6144 (6144 * 32 = 192M)
for sort mem. This way with the DB server serving a max of 32 application
servers the kernel and other processes should still have the last 64Mb RAM.
This will be my starting point for testing anyway.

Adam
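Adam's planned budget works out as follows. A sketch of the arithmetic only; it assumes -S is given in kilobytes:

```shell
# 768 MB of shared buffers plus 32 backends each sorting with 6144 kB
# is meant to leave 64 MB of the 1 GB machine for the kernel and
# other processes.
BUFFERS_MB=768
SORT_MB=$((6144 / 1024))            # -S 6144 kB = 6 MB per sort
BACKENDS=32
USED_MB=$((BUFFERS_MB + SORT_MB * BACKENDS))
echo "planned use: $USED_MB MB, leaving $((1024 - USED_MB)) MB"
```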

At 10:00 AM 7/11/01 +0200, you wrote:

Hi!

We have set up a computer with dual 900 MHz Pentium III processors and
RAIDed disks. It is running Slackware because it is the most reliable
distribution.

We have compiled and are using kernel 2.4.2 to better support dual
processors. The RAM is 1GB. The only performance tuning we have done is to
let the postmaster/postgres allocate as much RAM as possible. The shared
buffers (-B option) are set to 96498, which is about 800MB of RAM, and the
sort mem size (-S) is set to 5120. The sort mem size is set this low because
there can be many connections to the database.

These optimizations are enough for our environment and usage of the
database, so we have not bothered to optimize it further. The database is
practically running in RAM.

/Buckis

-----Original Message-----
From: Adam Manock [mailto:abmanock@planetcable.net]
Sent: 10 July 2001 13:45
To: pgsql-general@postgresql.org
Subject: [GENERAL] Performance tuning for linux, 1GB RAM, dual CPU?

Hi,

I am about to put a 7.1.2 server into production on RedHat 7.1
The server will be dedicated to PostgreSQL, running a bare minimum of
additional services.
If anyone has already tuned the configurable parameters on a dual PIII
with 1GB RAM, then I will have a great place to start for my performance
tuning! When I'm done I'll be posting my results here for the next
first-timer that comes along.

Thanks in advance,

Adam

---------------------------(end of broadcast)---------------------------
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl

#2Tom Lane
tgl@sss.pgh.pa.us
In reply to: Christian Bucanac (#1)
Re: Performance tuning for linux, 1GB RAM, dual CPU?

Christian Bucanac <christian.bucanac@mindark.com> writes:

I am going to try 768M (98304) for buffers and 6144 (6144 * 32 = 192M)
for sort mem. This way with the DB server serving a max of 32 application
servers the kernel and other processes should still have the last 64Mb RAM.

This is almost certainly a lousy idea. You do *not* want to chew up all
available memory for PG shared buffers; you should leave a good deal of
space for kernel-level disk buffers.

Other fallacies in the above: (1) you're assuming the SortMem parameter
applies once per backend, which is not the case (it's once per sort or
hash step in a query, which could be many times per backend); (2) you're
not allowing *anything* for any space usage other than shared disk
buffers and sort memory.

The rule of thumb I recommend is to use (at most) a quarter of real RAM
for shared disk buffers. I don't have hard measurements to back that
up, but I think it's a lot more reasonable as a starting point than
three-quarters of RAM.

regards, tom lane
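Tom's two points can be made concrete with the same arithmetic. A sketch only: "two sort steps per query" is an illustrative assumption, and 8 kB is the default block size:

```shell
# (1) SortMem applies per sort/hash step, not per backend: with just
#     two sort steps per query, the planned 192 MB of sort memory
#     becomes 384 MB, wiping out the 64 MB margin entirely.
echo "worst-case sort mem: $((32 * 2 * 6144 / 1024)) MB"
# (2) The quarter-of-real-RAM starting point for shared buffers on a
#     1 GB box, expressed as a -B value:
QUARTER_MB=$((1024 / 4))                          # 256 MB
echo "-B $((QUARTER_MB * 1024 * 1024 / 8192)) buffers (~$QUARTER_MB MB)"
```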

#3Steve Wolfe
steve@iboats.com
In reply to: Christian Bucanac (#1)
Re: Performance tuning for linux, 1GB RAM, dual CPU?

Christian Bucanac <christian.bucanac@mindark.com> writes:

I am going to try 768M (98304) for buffers and 6144 (6144 * 32 = 192M)
for sort mem. This way with the DB server serving a max of 32 application
servers the kernel and other processes should still have the last 64Mb RAM.

This is almost certainly a lousy idea. You do *not* want to chew up all
available memory for PG shared buffers; you should leave a good deal of
space for kernel-level disk buffers.

I'll second that. The way that I tuned our installation was:

1. Make sure you have enough RAM that the data files are *always* in
cache, and that all apps have enough RAM available for them.
2. Increase shared buffers until there was no performance increase, then
double it.
3. Increase sort memory until there was no performance increase, then
double it.
4. Turn off fsync().
5. Make sure that #1 still applies.

In our system (1.5 gigs), that ended up being 128 megs of shared
buffers, and 64 megs for sorting. Some day, I'll probably increase the
shared buffers more (just because I can), but currently, Linux doesn't
seem to let me set SHMMAX over 128 megs. Some day I'll look into it. : )

steve
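For what it's worth, on 2.2 and 2.4 kernels SHMMAX can be raised at runtime without a recompile. A sketch; the 256 MB value is just an example:

```shell
# Inspect the current SysV shared memory ceiling (in bytes):
cat /proc/sys/kernel/shmmax 2>/dev/null || true
# To raise it to e.g. 256 MB (as root), either of:
#   echo 268435456 > /proc/sys/kernel/shmmax
#   sysctl -w kernel.shmmax=268435456
echo $((256 * 1024 * 1024))   # 268435456 bytes = 256 MB
```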

#4Adam Manock
abmanock@planetcable.net
In reply to: Tom Lane (#2)
Re: Performance tuning for linux, 1GB RAM, dual CPU?

This is almost certainly a lousy idea. You do *not* want to chew up all
available memory for PG shared buffers; you should leave a good deal of
space for kernel-level disk buffers.

I decided to start high on buffers because of Bruce's:
http://www.ca.postgresql.org/docs/hw_performance/
From that I get the impression that operations using the kernel disk buffer
cache are considerably more expensive than if the data were in the shared
buffer cache, and that increasing PG's memory usage until the system
is almost using swap is The Right Thing To Do. Has anyone got real-world
test data to confirm or refute this?
If not, then I am going to need to find or create a benchmarking program
to load down PG against a fake multi-gigabyte "production" database.
Or I could wait a week to see what RedHat does to tune their
implementation of PG :-)

Adam
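pgbench, which ships in PostgreSQL's contrib/ directory, could stand in for a home-grown load generator here. A sketch; the database name "bench" and the scale factor are assumptions:

```shell
# Each unit of scale factor creates 100000 rows in the accounts table,
# so -s 100 yields 10 million rows -- enough to exceed a 1 GB cache.
#
#   createdb bench
#   pgbench -i -s 100 bench      # initialize test tables at scale 100
#   pgbench -c 32 -t 1000 bench  # 32 clients, 1000 transactions each
#
echo $((100 * 100000))           # rows in accounts at scale 100
```

The -c 32 run mirrors the 32 application servers discussed above, so the results map onto the memory budget being debated.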

#5Justin Clift
justin@postgresql.org
In reply to: Christian Bucanac (#1)
Re: Performance tuning for linux, 1GB RAM, dual CPU?

Hi Adam,

There are a few links to benchmark-type things you might find useful at:

http://techdocs.postgresql.org/oresources.php#benchmark

Hope they're useful.

:-)

Regards and best wishes,

Justin Clift

Adam Manock wrote:

This is almost certainly a lousy idea. You do *not* want to chew up all
available memory for PG shared buffers; you should leave a good deal of
space for kernel-level disk buffers.

I decided to start high on buffers because of Bruce's:
http://www.ca.postgresql.org/docs/hw_performance/
From that I get the impression that operations using kernel disk buffer
cache are considerably more expensive than if the data was in shared
buffer cache, and that increasing PG's memory usage until the system
is almost using swap is The Right Thing To Do. Has anyone got real
world test data to confirm or refute this??
If not, then I am going to need to find or create a benchmarking program
to load down PG against a fake multi-gigabyte "production" database.
Or I could wait a week to see what RedHat does to tune their
implementation of PG :-)

Adam

---------------------------(end of broadcast)---------------------------
TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org

#6Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Tom Lane (#2)
Re: Performance tuning for linux, 1GB RAM, dual CPU?

Christian Bucanac <christian.bucanac@mindark.com> writes:

I am going to try 768M (98304) for buffers and 6144 (6144 * 32 = 192M)
for sort mem. This way with the DB server serving a max of 32 application
servers the kernel and other processes should still have the last 64Mb RAM.

This is almost certainly a lousy idea. You do *not* want to chew up all
available memory for PG shared buffers; you should leave a good deal of
space for kernel-level disk buffers.

Other fallacies in the above: (1) you're assuming the SortMem parameter
applies once per backend, which is not the case (it's once per sort or
hash step in a query, which could be many times per backend); (2) you're
not allowing *anything* for any space usage other than shared disk
buffers and sort memory.

The rule of thumb I recommend is to use (at most) a quarter of real RAM
for shared disk buffers. I don't have hard measurements to back that
up, but I think it's a lot more reasonable as a starting point than
three-quarters of RAM.

In my testing with a *particular* environment (Linux kernel 2.2.x,
pgbench), it was indicated that too many shared buffers reduced the
performance even though there was lots of memory, say 1GB. I'm not
sure why, but I suspect there is a significant overhead to looking up
shared buffers.
--
Tatsuo Ishii

#7Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tatsuo Ishii (#6)
Re: Performance tuning for linux, 1GB RAM, dual CPU?

Tatsuo Ishii <t-ishii@sra.co.jp> writes:

In my testing with a *particular* environment (Linux kernel 2.2.x,
pgbench), it was indicated that too many shared buffers reduced the
performance even though there was lots of memory, say 1GB. I'm not
sure why, but I suspect there is a significant overhead to looking up
shared buffers.

Regular lookups use a hash table, and shouldn't suffer much from
degraded performance as NBuffers rises. However, there are some
operations that do a linear scan of all the buffers --- table deletion
comes to mind. Perhaps your test was exercising one of these.

pgbench doesn't do table deletions of course... hmm... the only
such loop in bufmgr.c that looks like it would be executed during
normal transactions is BufferPoolCheckLeak(). Maybe we should
make that routine be a no-op unless assert checking is turned on?
Have we reached the point where performance is more interesting than
error checking? It'd be interesting to retry your results with
BufferPoolCheckLeak() reduced to "return false".

Another factor, not under our control, is that if the shared memory
region gets too large the kernel may decide to swap out portions of
it that haven't been touched lately. This of course is completely
counterproductive, especially if what gets swapped is a dirty buffer,
which'll eventually have to be read back in and then written to where
it should have gone. This is the main factor behind my thought that you
don't want to skimp on kernel disk buffer space --- any memory pressure
in the system should be resolvable by dropping kernel disk buffers, not
by starting to swap shmem or user processes.

regards, tom lane

#8snpe
snpe@snpe.co.yu
In reply to: Tom Lane (#7)
Re: Performance tuning for linux, 1GB RAM, dual CPU?

Another factor, not under our control, is that if the shared memory
region gets too large the kernel may decide to swap out portions of
it that haven't been touched lately. This of course is completely
counterproductive, especially if what gets swapped is a dirty buffer,
which'll eventually have to be read back in and then written to where
it should have gone. This is the main factor behind my thought that you
don't want to skimp on kernel disk buffer space --- any memory pressure
in the system should be resolvable by dropping kernel disk buffers, not
by starting to swap shmem or user processes.

Could you lock the shared memory in RAM to avoid this problem?

regards
haris peco