Upgrade to dual processor machine?
hi all,
will an upgrade to a dual-processor machine
noticeably increase the performance of a PostgreSQL server?
The load average is now often around 4.0 - 8.5, and I'll
have to do something sooner or later...
any help is appreciated...
--
Mit freundlichem Gruß
Henrik Steffen
Geschäftsführer
top concepts Internetmarketing GmbH
Am Steinkamp 7 - D-21684 Stade - Germany
--------------------------------------------------------
http://www.topconcepts.com Tel. +49 4141 991230
mail: steffen@topconcepts.com Fax. +49 4141 991233
--------------------------------------------------------
24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)
--------------------------------------------------------
Ihr SMS-Gateway: JETZT NEU unter: http://sms.city-map.de
System-Partner gesucht: http://www.franchise.city-map.de
--------------------------------------------------------
Handelsregister: AG Stade HRB 5811 - UstId: DE 213645563
--------------------------------------------------------
Hi Henrik,
It'd be helpful to know the other specifics of the server, and a bit
about the workload the server has.
i.e.
- Processor type and speed
- Memory
- Disk configuration
- OS
- Do you do other stuff on it, apart from PostgreSQL?
- How many clients simultaneously connecting to it?
- What do the clients connect with? JDBC/ODBC/libpq/etc?
- Have you configured the memory after installation of PostgreSQL, so
it's better optimised than the defaults?
:-)
Regards and best wishes,
Justin Clift
Henrik Steffen wrote:
hi all,
will an upgrade to a dual processor machine
noticeably increase performance of a postgresql server?load average now often is about 4.0 - 8.5 - and I'll
have got to do something sooner or later...any help is appreciated...
--
Mit freundlichem Gru�
Henrik Steffen
Gesch�ftsf�hrertop concepts Internetmarketing GmbH
Am Steinkamp 7 - D-21684 Stade - Germany
--------------------------------------------------------
http://www.topconcepts.com Tel. +49 4141 991230
mail: steffen@topconcepts.com Fax. +49 4141 991233
--------------------------------------------------------
24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)
--------------------------------------------------------
Ihr SMS-Gateway: JETZT NEU unter: http://sms.city-map.de
System-Partner gesucht: http://www.franchise.city-map.de
--------------------------------------------------------
Handelsregister: AG Stade HRB 5811 - UstId: DE 213645563
-----------------------------------------------------------------------------------(end of broadcast)---------------------------
TIP 4: Don't 'kill -9' the postmaster
--
"My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there."
- Indira Gandhi
"Henrik Steffen" <steffen@city-map.de> writes:
will an upgrade to a dual processor machine
noticeably increase performance of a postgresql server?
Assuming you have more than 1 concurrent client, it likely
will. Whether it will be a huge performance improvement depends on the
other characteristics of the workload (e.g. is it I/O bound or CPU
bound?).
Cheers,
Neil
--
Neil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC
Hi Justin,
here a little more information:
- Processor type and speed
Intel Pentium IV, 1.6 GHz
- Memory
1024 MB ECC-RAM
- Disk configuration
2 x 60 GB IDE (RAID 0)
- OS
Redhat Linux
- Do you do other stuff on it, apart from PostgreSQL?
No, it's a dedicated database server
- How many clients simultaneously connecting to it?
one webserver with a maximum of 50 instances, approximately 10,000 users a day,
about 150,000 page views daily. All pages are created on the fly using
mod_perl connecting to the db server.
- What do the clients connect with? JDBC/ODBC/libpq/etc?
I am using Pg.pm --- this is called libpq, isn't it?
- Have you configured the memory after installation of PostgreSQL, so
it's better optimised than the defaults?
no - what should I do? Looking at 'top' right now, I see the following:
Mem 1020808K av, 1015840K used, 4968K free, 1356K shrd, 32852K buff
So, what do you suggest to gain more performance?
Thanks in advance,
"Henrik Steffen" <steffen@city-map.de> writes:
- What do the clients connect with? JDBC/ODBC/libpq/etc?
I am using Pg.pm --- this is called libpq, isn't it?
Well, it's a thin Perl wrapper over libpq (which is the C client
API). You said you're using mod_perl: you may wish to consider using
DBI and DBD::Pg instead of Pg.pm, so you can make use of persistent
connections using Apache::DBI.
- Have you configured the memory after installation of PostgreSQL, so
it's better optimised than the defaults?
no - what should I do? Looking at 'top' right now, I see the following:
Mem 1020808K av, 1015840K used, 4968K free, 1356K shrd, 32852K buff
No, Justin is referring to the memory-related configuration options in
postgresql.conf, like shared_buffers, wal_buffers, sort_mem, and the
like.
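To make that concrete, here is a minimal sketch of the kind of postgresql.conf entries Neil means; the values are purely illustrative for a 1 GB machine of that era and need tuning against the actual workload:

```
# postgresql.conf -- illustrative values only, not recommendations
shared_buffers = 3072   # shared buffer cache, in 8 KB pages (~24 MB)
wal_buffers = 8         # buffers for the write-ahead log
sort_mem = 2048         # KB per sort; NOT shared, multiplied by concurrent sorts
```

Note that shared_buffers only takes effect after a postmaster restart, and the kernel's SHMMAX must be large enough to allow the allocation.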
So, what do you suggest to gain more performance?
IMHO, dual processors would likely be a good performance improvement.
Cheers,
Neil
Hi Henrik,
OK, your machine is doing a decent amount of work and will need
looking at carefully.
Going to get more specific about some stuff, as it'll definitely assist
with giving you proper guidance here.
- Have you run any system-performance tools apart from top, to figure
out how the various parts of your system are operating?
For example, by looking into and measuring the different parts of your
system, you may find you have several processes simultaneously waiting
to execute purely because the disk drives can't keep up with the
requests. The solution may turn out to be upgrading your disks instead
of your CPU's (example only). Without taking measurements to the point
of understanding what's going on, you'll only be guessing.
The most concerning aspect at the moment is this:
"> - Have you configured the memory after installation of PostgreSQL, so
it's better optimised than the defaults?
no - what should I do? Looking at 'top' right now, I see the following:
Mem 1020808K av, 1015840K used, 4968K free, 1356K shrd, 32852K buff"
This is telling me that the system is operating close to using all its
memory with running processes. *Bad* for this kind of thing. The
default memory configuration for PostgreSQL is very lean and causes high
CPU load and slow throughput. You don't seem to have enough spare
memory at the moment to really try adjusting this upwards. :(
Important question, how much memory can you get into that server? Could
you do 3GB or more?
Something that would be *really nice* is if you have a second server
with the same configuration hanging around that you can try stuff on.
For example, loading it with a copy of all your data, changing the
memory configuration, then testing it.
Further system specific details needed:
- Which version of the Linux kernel, and of RedHat? Different versions
of the Linux kernel do things differently. For example, version 2.4.3
handles virtual memory differently than, say, version 2.4.17.
- If you do a ps (ps -ef) during a busy time, how many instances of the
PostgreSQL process do you see in memory? This will tell you how many
clients have an open connection to the database at any time.
- How much data is in your database(s)? Just to get an idea of your
volume of data.
- If disk performance turns out to be the problem, would you consider
moving to higher-end hard drives? This will probably mean an Ultra160
or Ultra320 SCSI card, and drives to match. That's not going to be
totally cheap, but if you have a decent budget then it might be ok.
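A concrete sketch of the ps check Justin suggests (standard Linux tools; the pattern just matches the default process name):

```shell
# Count running PostgreSQL processes; each backend corresponds to one
# open client connection (plus one postmaster parent process).
# The [p] trick stops grep from matching its own command line.
count=$(ps -ef | grep -c '[p]ostgres')
echo "postgres-related processes: $count"
```

Running this at a busy moment and again at a quiet one gives a rough feel for the connection load.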
As you can see, this could take a bit of time and effort to get right.
Regards and best wishes,
Justin Clift
FWIW, this summer I did a little bit of testing on one of our
dual-CPU machines; among other things I ran OSDB (the Open Source
Database Benchmark) with 32 simulated clients against Postgres (7.2.1)/Linux
(2.4.18), once booted with maxcpus=1 and once with maxcpus=2; if I
remember correctly, I saw something like an 80-90% performance improvement
on the IR benchmark with the second CPU activated.
Note the run was completely CPU-bound; neither the hard disk nor memory was the
bottleneck, so you may see less of an improvement if other parts of your
system are the limit, but Postgres itself appears to make use of the
available CPUs quite nicely.
Regards
--
Helge Bahmann <bahmann@math.tu-freiberg.de> /| \__
The past: Smart users in front of dumb terminals /_|____\
_/\ | __)
$ ./configure \\ \|__/__|
checking whether build environment is sane... yes \\/___/ |
checking for AIX... no (we already did this) |
On Mon, 11 Nov 2002, Henrik Steffen wrote:
- How many clients simultaneously connecting to it?
one webserver with a maximum of 50 instances, approximately 10,000 users a day,
about 150,000 page views daily. All pages are created on the fly using
mod_perl connecting to the db server.
Aha. What kind of web-side data caching are you doing? That alone can
drop your load down to < 1. Even something like a 1-hour cache, or
something you can manually expire can work amazing wonders for database
usage. So far, the only thing we've found that doesn't really fit this
model are full text searches.
Here, the biggest difference to our DB server was caused by *not* having
all of our 9 webservers doing 50+ connections per second, which we
achieved mainly through caching. Adding another CPU will work as well,
but as far as a long-term, not just throwing hardware at the problem
kind of solution goes, see if you can get caching worked in there
somehow.
Since you know you're using Pg.pm (switch to DBD::Pg, trust me on this
one), you should have little problem caching either your result set or
even the whole resulting page with selected non-cacheable parts. Not only
will that reduce page-load time, but also the strain on your database.
--
Shaun M. Thomas INN Database Administrator
Phone: (309) 743-0812 Fax : (309) 743-0830
Email: sthomas@townnews.com Web : www.townnews.com
On Mon, 11 Nov 2002, Shaun Thomas wrote:
Agreed. I highly recommend squid as a caching proxy. Powerful, fast, and
Open source. It's included in most flavors of Linux. I'm sure it's
available as a port if not included in most BSDs as well.
On Mon, 11 Nov 2002, Henrik Steffen wrote:
- How many clients simultaneously connecting to it?
one webserver with a maximum of 50 instances, approximately 10,000 users a day,
about 150,000 page views daily. All pages are created on the fly using
mod_perl connecting to the db server.
If you've got 50 simultaneous connections, you could use more CPUs, whether you're I/O bound or
not.
- What do the clients connect with? JDBC/ODBC/libpq/etc?
I am using Pg.pm --- this is called libpq, isn't it?
- Have you configured the memory after installation of PostgreSQL, so
it's better optimised than the defaults?
no - what should I do? Looking at 'top' right now, I see the following:
Mem 1020808K av, 1015840K used, 4968K free, 1356K shrd, 32852K buff
Hey, what is the "cached" field saying there? Is the machine caching a
whole bunch or just a little? If it's caching a whole bunch, look at
increasing your shmmax and shmall settings and then the shared buffers in
postgresql.conf for better performance.
hi,
thanks for this information...
we are already using squid as a transparent www accelerator;
this works very well, and squid handles about 70% of all hits.
However, sometimes search engines start
indexing more than 25 DIFFERENT documents per second; this is when things
start getting more difficult... we have played around a little
with an IP-based bandwidth-regulation tool at the squid level, which
works quite well, though you'll have to add new search engines
on demand.
But anyway, we still have to look at the facts: we have had a 200%
increase in visitors and page views during the last 6 months.
Upgrading to DBD::Pg is something I have already been thinking about,
but as far as I know, I am already using persistent connections with
mod_perl and Pg.pm, am I not?
--
Mit freundlichem Gruß
Henrik Steffen
----- Original Message -----
From: "Shaun Thomas" <sthomas@townnews.com>
To: "Henrik Steffen" <steffen@city-map.de>
Cc: "Justin Clift" <justin@postgresql.org>; <pgsql-general@postgresql.org>
Sent: Monday, November 11, 2002 7:08 PM
Subject: Re: [GENERAL] Upgrade to dual processor machine?
The cached field is saying 873548K at the moment.
Is this a "whole bunch of cache" in your opinion? Is it too much?
So, where do I find and change the shmmax and shmall settings?
What should I put there?
What is a recommended value for shared_buffers in postgresql.conf?
FYI:
ps ax | grep -c postgres ==> shows 23 at the moment
however, w shows: load average 3.09, 2.01, 1.76
(this is low at the moment)
thanks again,
--
Mit freundlichem Gruß
Henrik Steffen
----- Original Message -----
From: "scott.marlowe" <scott.marlowe@ihs.com>
To: "Henrik Steffen" <steffen@city-map.de>
Cc: "Justin Clift" <justin@postgresql.org>; <pgsql-general@postgresql.org>
Sent: Tuesday, November 12, 2002 6:22 PM
Subject: Re: [GENERAL] Upgrade to dual processor machine?
Hi Justin,
thanks for your answer, I will now try to deliver some more information
to you... but I am primarily a programmer, not a hacker ;-)) so please
excuse me if I lack some knowledge of system things and such...
- Have you run any system-performance tools apart from top, to figure
out how the various parts of your system are operating?
nope, I don't know any... which would you recommend for measuring I/O
usage etc.?
The solution may turn out to be upgrading your disks instead
of your CPU's (example only).
I will at least consider this... IDE disks are not that reliable either...
Important question, how much memory can you get into that server? Could
you do 3GB or more?
no, sorry - 1 GB is already the upper limit... I am considering migrating everything
to new hardware, a (dual?) Intel Xeon, perhaps even with a RAID-5 storage system, with
a new upper limit of 12 GB RAM, which will give me some upgrade possibilities... ;-))
Something that would be *really nice* is if you have a second server
with the same configuration hanging around that you can try stuff on.
For example, loading it with a copy of all your data, changing the
memory configuration, then testing it.
I actually DO have an identical second server, and the db is already on it.
However, the system has a few problems concerning hard disk failures and memory
problems (don't ever use it for running systems!! we had this server on the list
before... I almost gave up on this one, when suddenly all problems and crashes
were solved by moving to a different machine, as suggested by Tom Lane...)
... but for some testing purposes it should be sufficient ;-))
- Which version of the Linux kernel, and of RedHat?
redhat - linux kernel 2.4.7-10
- If you do a ps (ps -ef) during a busy time, how many instances of the
PostgreSQL process do you see in memory? This will tell you how many
clients have an open connection to the database at any time.
up to 40 clients are running... right now it's 21 processes and w shows
a load average of 1.92, 1.58, 1.59
- How much data is in your database(s)? Just to get an idea of your
volume of data.
It's 3.6 GB at the moment in one database in 98 user tables.
- If disk performance turns out to be the problem, would you consider
moving to higher-end hard drives
allready considering ....
--
Mit freundlichem Gruß
Henrik Steffen
----- Original Message -----
From: "Justin Clift" <justin@postgresql.org>
To: "Henrik Steffen" <steffen@city-map.de>
Cc: <pgsql-general@postgresql.org>
Sent: Monday, November 11, 2002 8:44 AM
Subject: Re: [GENERAL] Upgrade to dual processor machine?
hello,
Am I not already using persistent connections with Pg.pm?
It at least looks like it... I only need a new connection
from the webserver to the db server when a new webserver child is born.
Well, anyway, I am considering updating to DBD::Pg of course...
it's only a matter of changing about 100,000 lines of Perl code...
No, Justin is referring to the memory-related configuration options in
postgresql.conf, like shared_buffers, wal_buffers, sort_mem, and the
like.
So, how am I supposed to tune these settings?
thanks again,
--
Mit freundlichem Gruß
Henrik Steffen
----- Original Message -----
From: "Neil Conway" <neilc@samurai.com>
To: "Henrik Steffen" <steffen@city-map.de>
Cc: "Justin Clift" <justin@postgresql.org>; <pgsql-general@postgresql.org>
Sent: Monday, November 11, 2002 8:32 AM
Subject: Re: [GENERAL] Upgrade to dual processor machine?
Henrik,
"So, where do i find and change shmmax shmall settings ??
What should I put there?
What is a recommended value for shared buffers in postgresql.conf ?"
There is no "recommended value." You have to calculate this relatively:
1) Figure out how much RAM your server has available for PostgreSQL. For
example, I have one server on which I allocate 256 mb for Apache, 128 mb for
linux, and thus have 512mb available for Postgres.
2) Calculate the memory settings to use 70% of that amount of RAM in
regular usage. Please beware that sort_mem is *not* shared, meaning that it
will be multiplied by the number of concurrent requests requiring sorting.
Thus, your calculation (in K) should be:
250K +
8.2K * shared_buffers +
14.2K * max_connections +
sort_mem * average number of requests per minute
=====================================
memory available to postgresql in K * 0.7
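As a worked example of the formula above (every number here is a hypothetical assumption, not a recommendation):

```shell
# Hypothetical sizing check using Josh's formula; all values illustrative.
SHARED_BUFFERS=3072     # shared_buffers, in 8 KB pages
MAX_CONNECTIONS=64      # max_connections
SORT_MEM=1024           # sort_mem, in KB
AVG_SORTS=32            # estimated concurrent requests doing sorts
AVAIL_KB=524288         # 512 MB available to PostgreSQL, in KB

# Estimated regular-usage memory, per the formula, in KB:
USE=$(awk "BEGIN { print 250 + 8.2*$SHARED_BUFFERS + 14.2*$MAX_CONNECTIONS + $SORT_MEM*$AVG_SORTS }")
# 70% budget of the available memory:
BUDGET=$(awk "BEGIN { print $AVAIL_KB * 0.7 }")
echo "estimated use: ${USE}K   budget: ${BUDGET}K"
```

With these example numbers the estimate comes in well under the 70% budget, so sort_mem or shared_buffers could be raised further.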
You will also have to set SHMMAX and SHMALL to accept this memory allocation.
Since shmmax is set in bytes, I generally feel safe making it:
1024 * 0.5 * memory available to postgresql in K
Setting them is done simply:
$ echo 134217728 >/proc/sys/kernel/shmall
$ echo 134217728 >/proc/sys/kernel/shmmax
This is all taken from the postgresql documentation, with some experience:
http://www.us.postgresql.org/users-lounge/docs/7.2/postgres/runtime.html
--
-Josh Berkus
Aglio Database Solutions
San Francisco
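The arithmetic in the formula above can be sketched as a quick back-of-the-envelope script. The per-unit constants (250K base, 8.2K per buffer, 14.2K per connection) come straight from the formula; all the input values below are hypothetical examples, not recommendations:

```python
# Back-of-the-envelope estimate of PostgreSQL memory use, following the
# formula in the message above. All figures are in kilobytes.
def pg_memory_estimate_kb(shared_buffers, max_connections,
                          sort_mem_kb, concurrent_sorts):
    fixed_overhead = 250.0                  # base overhead
    buffers = 8.2 * shared_buffers          # 8.2K per shared buffer
    connections = 14.2 * max_connections    # 14.2K per connection slot
    sorts = sort_mem_kb * concurrent_sorts  # sort_mem is NOT shared
    return fixed_overhead + buffers + connections + sorts

# Hypothetical example: 512 MB set aside for Postgres, aiming at 70%.
available_kb = 512 * 1024
budget_kb = available_kb * 0.7

estimate = pg_memory_estimate_kb(shared_buffers=128,
                                 max_connections=64,
                                 sort_mem_kb=1024,
                                 concurrent_sorts=10)
print(f"estimated: {estimate:.1f}K, budget: {budget_kb:.1f}K")
```

Note that the sort term dominates quickly: the sketch multiplies sort_mem by an assumed count of *concurrent* sorts, which is the quantity the prose describes.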
On Tue, 12 Nov 2002, Josh Berkus wrote:
Note that on RedHat boxes, you can also use the /etc/sysctl.conf file to
do this. It is considered the preferred method, and a little less obtuse
for beginners.
As root, run 'sysctl -a' to get a list of all possible system kernel
settings. 'sysctl -a | grep shm' will show you all the shared memory
settings as they are now. Edit the /etc/sysctl.conf file with the new
settings and use 'sysctl -p' to process the new settings. This way you
don't have to edit the /etc/rc.d/rc.local file to get the settings you
want.
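As a sketch, the persistent equivalent of the earlier /proc writes would be two lines in /etc/sysctl.conf (the 134217728 value is simply the example figure from earlier in the thread, not a recommendation):

```
# /etc/sysctl.conf -- example values only
kernel.shmmax = 134217728
kernel.shmall = 134217728
```

After editing the file, running 'sysctl -p' as root loads the new values, and they survive a reboot.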
On the subject of sort_mem, I've found that if your result sets are all
large (say 100+megs each) that as long as your sort mem isn't big enough
to hold the whole result set, the performance difference is negligible.
I.e. going from 4 meg to 16 meg of sort_mem for a 100 Meg result set
doesn't seem to help much at all. In fact, in some circumstances, it
seems that the smaller number is faster, especially under heavy parallel
load, since larger settings may result in undesired swapping out of other
processes to allocate memory for sorts.
In other words, it's faster to sort 20 results in 4 megs each if you
aren't causing swapping out, than it is to sort 20 results in 32 megs
each if that does cause things to swap out.
"Henrik Steffen" <steffen@city-map.de> writes:
No, Justin is referring to the memory-related configuration options in
postgresql.conf, like shared_buffers, wal_buffers, sort_mem, and the
like.

So, how am I supposed to tune these settings in postgresql.conf?
See the documentation:
http://developer.postgresql.org/docs/postgres/runtime-config.html
Cheers,
Neil
--
Neil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC
Hello Josh!
This is what I figured out now:
1) RAM available: 1024 MB, there's nothing else but postgres on this
machine, so if I calculate 128 MB for Linux, there are 896 MB left
for Postgres.
2) 70 % of 896 MB is 627 MB
Now, if I follow your instructions:
250K +
8.2K * 128 (shared_buffers) = 1049.6K +
14.2K * 64 (max_connections) = 908.8K +
1024K * 5000 (average number of requests per minute) = 5120000K
===============================================================
5122208.4K ==> 5002.16 MB
this is a little bit more than I have available, isn't it? :(((
sure that this has got to be the "average number of requests per minute"
and not "per second" ? seems so much, doesn't it?
what am I supposed to do now?
thanks again,
--
Henrik Steffen
----- Original Message -----
From: "Josh Berkus" <josh@agliodbs.com>
To: <pgsql-general@postgresql.org>
Cc: <steffen@city-map.de>
Sent: Tuesday, November 12, 2002 9:05 PM
Subject: Re: Upgrade to dual processor machine?
On 13 Nov 2002 at 8:29, Henrik Steffen wrote:
Obviously tuning depends upon the application, and you have to set the
threshold by trial and error.
I would suggest the following, from some recent discussions on such topics.
1) Set shared buffers somewhere between 500-600MB. That's going to be the
optimal range for a gig of RAM.
2) How big is your database? How much of it do you need in memory at any given
time? You need these figures when setting shared buffers. But still 500-600MB
seems good, because it does not include the file system cache and buffers.
3) Sort mem is a tricky affair. AFAIU, it is used only when you create an index
or sort the results of a query. If you do these things seldom, you can set this
very low or leave the default. For an individual session that creates indexes,
you can set the sort memory accordingly. Certainly in your case the number of
requests per minute is high, but if you are not creating an index or sorting in
each query, you can leave the default as it is.
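For example, a session that is about to build indexes could raise the limit for itself only. This is a sketch; the 65536 figure, table, and column names are hypothetical:

```sql
-- Applies only to the current session; other backends keep the
-- server-wide default from postgresql.conf.
SET sort_mem = 65536;  -- 64 MB, hypothetical value for index builds
CREATE INDEX idx_orders_customer ON orders (customer_id);  -- hypothetical
```

This way the server-wide sort_mem can stay small for the many ordinary queries.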
HTH
Bye
Shridhar
--
Another dream that failed. There's nothing sadder. -- Kirk, "This side of
Paradise", stardate 3417.3
Hello Shridhar,
thanks for your answer...
1) in the docs it says: shared_buffers should be 2*max_connections, min 16.
now, you suggest to put it to 500-600 MB, which means I will have to
increase shared_buffers to 68683 -- is this really correct? I mean,
RAM is already now almost totally consumed.
2) the database has a size of 3.6 GB at the moment... about 100 user tables.
3) ok, I understand: I am not usually creating any indexes. Only once at night
are all user indexes dropped and recreated; I could imagine increasing the
sort_mem for this script... so is sort_mem at 1024K ok, or should it be
lowered to, say, 512K ?
--
Henrik Steffen
----- Original Message -----
From: "Shridhar Daithankar" <shridhar_daithankar@persistent.co.in>
To: <pgsql-general@postgresql.org>
Sent: Wednesday, November 13, 2002 8:53 AM
Subject: Re: [GENERAL] Upgrade to dual processor machine?