Best tweak for fast results?
Need input on parameter values in the confs...
Our database is getting 1000 transactions/sec at peak periods.
Sitting on RH 7.3
2.4.7-10smp
RAM: 1028400 kB
SWAP: 2040244 kB
Queries are just simple SELECT statements based on timestamps and varchars,
less on joins... on about 300K rows.
TIA
On Tue, 2003-08-26 at 08:42, JM wrote:
need input on parameter values on confs...
our database is getting 1000 transactions/sec on peak periods..
sitting on RH 7.3
2.4.7-10smp
RAM: 1028400
SWAP: 2040244
queries are just simple select statements based on timestamps, varchars...
less on joins... on a 300K rows..
Could it be that 1000tps is as good as your h/w can do? You didn't
mention what kind and speed of CPU(s), SCSI-or-IDE controller(s) and
type/speed of disk(s) you have.
--
-----------------------------------------------------------------
Ron Johnson, Jr. ron.l.johnson@cox.net
Jefferson, LA USA
"Oh, great altar of passive entertainment, bestow upon me thy
discordant images at such speed as to render linear thought impossible"
Calvin, regarding TV
Actually, isn't 1000 tps pretty good?
Ron Johnson wrote:
On Tue, 2003-08-26 at 08:42, JM wrote:
need input on parameter values on confs...
our database is getting 1000 transactions/sec on peak periods..
sitting on RH 7.3
2.4.7-10smp
RAM: 1028400
SWAP: 2040244
queries are just simple select statements based on timestamps, varchars...
less on joins... on a 300K rows..
Could it be that 1000tps is as good as your h/w can do? You didn't
mention what kind and speed of CPU(s), SCSI-or-IDE controller(s) and
type/speed of disk(s) you have.
On Tuesday 26 August 2003 14:42, JM wrote:
need input on parameter values on confs...
our database is getting 1000 transactions/sec on peak periods..
sitting on RH 7.3
2.4.7-10smp
RAM: 1028400
SWAP: 2040244
queries are just simple select statements based on timestamps, varchars...
less on joins... on a 300K rows..
Assuming you're getting good query plans (check the output of EXPLAIN
ANALYSE)...
Start by checking the output of vmstat/iostat during busy periods - this will
tell you whether CPU/IO/RAM is the bottleneck.
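A sketch of that check; the guards just let the snippet run on a box where the tools are missing, and the table/column names in the commented psql line are placeholders, not from the original post:

```shell
# Watch the "wa" (I/O wait) and "id" (idle) CPU columns while the box is busy.
command -v vmstat >/dev/null && vmstat 2 3 || true
# Per-disk utilisation (iostat ships with the sysstat package).
command -v iostat >/dev/null && iostat -x 2 3 || true
# And verify the planner is using your timestamp index, e.g.:
# psql mydb -c "EXPLAIN ANALYZE SELECT * FROM log WHERE ts >= '2003-08-26';"
```

High "wa" with low "id" points at the disks; high user CPU with idle disks points at plans or sorts.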
There is a good starter for tuning PG at:
http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php
Assuming your rows aren't too wide, they're probably mostly cached by Linux,
so don't overdo shared_buffers/sort_mem, and make sure effective_cache_size
is accurate.
--
Richard Huxton
Archonet Ltd
On Tue, 26 Aug 2003, JM wrote:
need input on parameter values on confs...
our database is getting 1000 transactions/sec on peak periods..
sitting on RH 7.3
2.4.7-10smp
RAM: 1028400
SWAP: 2040244
1: Upgrade your kernel. 2.4.7 on RH 7.3 was updated to 2.4.18-24 in March,
and the 2.4.18 kernel is MUCH faster and has many bugs squashed.
2: Upgrade to the latest stable version of postgresql, 7.3.4
3: Make sure your kernel's file-nr settings and shm settings are big
enough to handle the load.
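A quick sanity check for item 3: the buffer pool suggested in item 4 alone needs roughly this much shared memory (8 kB pages assumed, the 7.3 default); compare the result against /proc/sys/kernel/shmmax:

```shell
shared_buffers=5000                  # the value suggested in item 4 below
bytes=$(( shared_buffers * 8192 ))   # one buffer per 8 kB page
echo "$bytes"                        # ~40 MB; kernel.shmmax must exceed this
```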
4: Edit the $PGDATA/postgresql.conf file to reflect all that extra cache
you've got etc....
shared_buffers = 5000
sort_mem = 16384
effective_cache_size = (size of cache/buffer memory in bytes, divided by 8192)
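That last formula, worked through; the 800 MB figure is an assumption about how much of the ~1 GB box ends up as kernel cache+buffers (read the real number off `free`), not a measurement:

```shell
# effective_cache_size is counted in 8 kB pages.
cached_kb=819200                      # illustrative: 800 MB, in kB as free reports it
pages=$(( cached_kb * 1024 / 8192 ))  # bytes / 8192
echo "$pages"                         # the value to put in postgresql.conf
```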
5: Look at moving WAL to its own spindle(s), as it is often the choke
point when doing lots of transactions.
6: Look at using more drives in a RAID 1+0 array for the data (as well as
a separate one for WAL if you can afford it).
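One common way to do item 5 is the symlink trick, sketched here under temp directories so it is safe to run as-is; on a real box the paths would be the real $PGDATA and a mount point on the dedicated spindle, and the postmaster must be stopped first (pg_xlog is the WAL directory in 7.3):

```shell
PGDATA=$(mktemp -d)/data       # stand-in for the real $PGDATA
WALDISK=$(mktemp -d)           # stand-in for the dedicated WAL disk
mkdir -p "$PGDATA/pg_xlog"
mv "$PGDATA/pg_xlog" "$WALDISK/pg_xlog"     # relocate the WAL directory
ln -s "$WALDISK/pg_xlog" "$PGDATA/pg_xlog"  # symlink it back into $PGDATA
ls -ld "$PGDATA/pg_xlog"                    # now a symlink onto the other spindle
```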
7: Make sure your drives are mounted noatime.
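For item 7, an illustrative /etc/fstab line (device and mount point are made up, not from the original post):

```
/dev/sdb1   /var/lib/pgsql   ext3   defaults,noatime   1 2
```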
8: If you don't mind living dangerously, or the data can be reproduced
from source files (i.e. catastrophic failure of your data set won't set
you back), look at both mounting the drives async (the default for Linux,
slightly dangerous) and turning fsync off (quite dangerous: in case of
crashed hardware / OS, you very well might lose data).
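Item 8's fsync switch lives in postgresql.conf (again: only if losing the data set is acceptable):

```
fsync = false
```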