Best tweak for fast results.. ?

Started by jerome · over 22 years ago · 5 messages · general
#1 jerome
jerome@gmanmi.tv

need input on parameter values on confs...

our database is getting 1000 transactions/sec on peak periods..

sitting on RH 7.3
2.4.7-10smp
RAM: 1028400
SWAP: 2040244

queries are just simple select statements based on timestamps, varchars...
few joins... on about 300K rows..

TIA

#2 Ron Johnson
ron.l.johnson@cox.net
In reply to: jerome (#1)
Re: [PERFORM] Best tweak for fast results.. ?

On Tue, 2003-08-26 at 08:42, JM wrote:

> need input on parameter values on confs...
>
> our database is getting 1000 transactions/sec on peak periods..
>
> sitting on RH 7.3
> 2.4.7-10smp
> RAM: 1028400
> SWAP: 2040244
>
> queries are just simple select statements based on timestamps, varchars...
> less on joins... on a 300K rows..

Could it be that 1000tps is as good as your h/w can do? You didn't
mention what kind and speed of CPU(s), SCSI-or-IDE controller(s) and
type/speed of disk(s) you have.

--
-----------------------------------------------------------------
Ron Johnson, Jr. ron.l.johnson@cox.net
Jefferson, LA USA

"Oh, great altar of passive entertainment, bestow upon me thy
discordant images at such speed as to render linear thought impossible"
Calvin, regarding TV

#3 Dennis Gearon
gearond@fireserve.net
In reply to: Ron Johnson (#2)
Re: [PERFORM] Best tweak for fast results.. ?

Actually, isn't 1000 tps pretty good?

Ron Johnson wrote:


> On Tue, 2003-08-26 at 08:42, JM wrote:
>
> > need input on parameter values on confs...
> >
> > our database is getting 1000 transactions/sec on peak periods..
> >
> > sitting on RH 7.3
> > 2.4.7-10smp
> > RAM: 1028400
> > SWAP: 2040244
> >
> > queries are just simple select statements based on timestamps, varchars...
> > less on joins... on a 300K rows..
>
> Could it be that 1000tps is as good as your h/w can do? You didn't
> mention what kind and speed of CPU(s), SCSI-or-IDE controller(s) and
> type/speed of disk(s) you have.

#4 Richard Huxton
dev@archonet.com
In reply to: jerome (#1)
Re: Best tweak for fast results.. ?

On Tuesday 26 August 2003 14:42, JM wrote:

> need input on parameter values on confs...
>
> our database is getting 1000 transactions/sec on peak periods..
>
> sitting on RH 7.3
> 2.4.7-10smp
> RAM: 1028400
> SWAP: 2040244
>
> queries are just simple select statements based on timestamps, varchars...
> less on joins... on a 300K rows..

Assuming you're getting good query plans (check the output of EXPLAIN
ANALYSE)...
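For instance, a plan check on the kind of timestamp lookup the original poster describes might look like this (table and column names are invented for illustration; the thing to look for is an index scan rather than a sequential scan):

```sql
-- Hypothetical table/column names, for illustration only.
EXPLAIN ANALYZE
SELECT * FROM events
WHERE created_at >= '2003-08-25' AND created_at < '2003-08-26';
```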

Start by checking the output of vmstat/iostat during busy periods - this will
tell you whether CPU/IO/RAM is the bottleneck.
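As a sketch of reading vmstat output, the snippet below pulls the CPU columns out of one sample line. The numbers are invented, and the column layout assumed here (last four fields being us/sy/id/wa) varies across procps versions, so treat it as illustrative:

```shell
# Sample vmstat line (invented figures, not from the poster's machine).
sample=" 1  0      0  51200  10240 716800    0    0    12    48  110  220  5  3 70 22"
# Assume the last four fields are the CPU columns: us sy id wa.
echo "$sample" | awk '{print "user="$(NF-3), "sys="$(NF-2), "idle="$(NF-1), "iowait="$NF}'
```

Roughly: consistently high iowait points at the disks, high user/sys points at the CPU, and heavy swap activity points at RAM.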

There is a good starter for tuning PG at:
http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php

Assuming your rows aren't too wide, they're probably mostly cached by Linux,
so you probably don't want to overdo shared_buffers/sort_mem; do make sure
effective_cache_size is accurate.
--
Richard Huxton
Archonet Ltd

#5 scott.marlowe
scott.marlowe@ihs.com
In reply to: jerome (#1)
Re: Best tweak for fast results.. ?

On Tue, 26 Aug 2003, JM wrote:

> need input on parameter values on confs...
>
> our database is getting 1000 transactions/sec on peak periods..
>
> sitting on RH 7.3
> 2.4.7-10smp
> RAM: 1028400
> SWAP: 2040244

1: Upgrade your kernel. 2.4.7 on RH 7.3 was updated to 2.4.18-24 in March,
and the 2.4.18 kernel is MUCH faster and has many bugs squashed.

2: Upgrade to the latest stable version of postgresql, 7.3.4

3: Make sure your kernel's file-nr and shm settings are big enough to handle
the load.
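A sysctl.conf fragment along these lines is one way to set them; the values below are made-up illustrations, not recommendations for the poster's box:

```
# /etc/sysctl.conf fragment (illustrative values)
fs.file-max = 65536            # system-wide open-file limit
kernel.shmmax = 134217728      # largest shared memory segment, bytes (128 MB)
kernel.shmall = 2097152        # total shared memory pages allowed
```

Apply with `sysctl -p`, and size shmmax above whatever shared_buffers works out to.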

4: Edit the $PGDATA/postgresql.conf file to reflect all that extra cache
you've got etc....

shared_buffers = 5000
sort_mem = 16384
effective_cache_size = (size of cache/buffer mem divided by 8192)
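The effective_cache_size arithmetic in the last line can be sketched as follows; the 700 MB figure is a made-up example of what free(1) might show in buffers+cache on a 1 GB box, not the poster's actual number:

```shell
# Hypothetical: free(1) reports ~700 MB in buffers + cache.
cache_kb=716800        # 700 MB expressed in kB
block_kb=8             # PostgreSQL page size is 8 kB
echo "effective_cache_size = $(( cache_kb / block_kb ))"
```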

5: Look at moving WAL to its own spindle(s), as it is often the choke
point when doing lots of transactions.
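In the 7.x era that move was done by relocating pg_xlog and leaving a symlink behind. The sketch below demonstrates the mechanics on throwaway directories; with a real cluster you would stop the server first and use the real $PGDATA and the WAL volume's mount point:

```shell
# Demo on throwaway directories -- paths are stand-ins for $PGDATA and
# the WAL volume's mount point.
PGDATA=$(mktemp -d)
WALVOL=$(mktemp -d)
mkdir "$PGDATA/pg_xlog"                  # stands in for the real WAL dir
mv "$PGDATA/pg_xlog" "$WALVOL/pg_xlog"   # move WAL onto its own spindle
ln -s "$WALVOL/pg_xlog" "$PGDATA/pg_xlog" # leave a symlink in its place
ls -ld "$PGDATA/pg_xlog"                 # now a symlink to the WAL volume
```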

6: Look at using more drives in a RAID 1+0 array for the data (as well as
a separate one for WAL if you can afford it).

7: Make sure your drives are mounted noatime.
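An fstab line along these lines does it; the device and mount point are illustrative stand-ins:

```
# /etc/fstab fragment (device and mount point are illustrative)
/dev/sda3   /var/lib/pgsql   ext3   defaults,noatime   1 2
```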

8: If you don't mind living dangerously, or the data can be reproduced
from source files (i.e. catastrophic failure of your data set won't set
you back), look at both mounting the drives async (the default for Linux,
slightly dangerous) and turning fsync off (quite dangerous; in case of
crashed hardware / OS, you very well might lose data).
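The fsync half of that trade-off is a one-line change in the 7.x-era config file:

```
# postgresql.conf: trade crash safety for speed -- only if the data can
# be rebuilt from source files after a crash.
fsync = false
```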