PHP + Postgres: More than 1000 postmasters produce 70.000 context switches

Started by Gellert, Andre · about 22 years ago · 9 messages · general
#1Gellert, Andre
AGellert@ElectronicPartner.de

Hello,
we installed a new Postgres 7.4.0 on a SuSE 9 system.
This is used as part of an extranet, based on Apache+PHP, and has besides
an LDAP server no services running. The system has dual Xeon 2GHz and 2GB
RAM. When migrating all applications from 2 other Postgres 7.2 servers to
the new one, we had heavy load problems.
At the beginning there were problems with too much allocated shared memory,
as the system was swapping 5-10 MB/sec. So we now reconfigured
shared_buffers to 2048, which should mean 2MB (Linux = one buffer each 1kB)
per process.
We corrected higher values of sort_mem and vacuum_mem back to sort_mem=512
and vacuum_mem=8192, too, to reduce memory usage, although we have
kernel.shmall = 1342177280 and kernel.shmmax = 1342177280.

Currently I have limited max_connections to 800, because every larger value
results in a system load of 60+ and at least 20,000 context switches.

My problem is that our Apache produces many more than 800 open connections,
because we are using > 15 different databases and Apache seems to keep open
a connection to every database the same httpd process has connected to
before.
For now I solved it in a very dirty way: I limited the number and the
lifetime of each httpd process with these values:

MaxKeepAliveRequests 10
KeepAliveTimeout 2
MaxClients 100
MaxRequestsPerChild 300
We use PHP 4.3.4 and PHP 4.2.3 on the webservers. php.ini says:
[PostgresSQL]
; Allow or prevent persistent links.
pgsql.allow_persistent = On
; Maximum number of persistent links. -1 means no limit.
pgsql.max_persistent = -1
; Maximum number of links (persistent+non persistent). -1 means no limit.
pgsql.max_links = -1

We are now running for days with an extremely unstable database backend...
Are 1,000 processes the natural limit of a Linux-based PostgreSQL?
Can we realize more efficient connection pooling/reuse?

Thanks a lot for the help; every idea is welcome,
Andre

BTW: Does anyone know of commercial administration training courses in
Germany, near Duesseldorf?

#2Richard Huxton
dev@archonet.com
In reply to: Gellert, Andre (#1)
Re: PHP + Postgres: More than 1000 postmasters produce 70.000 context switches

On Friday 20 February 2004 15:32, Gellert, Andre wrote:

> Hello,
> we installed a new Postgres 7.4.0 on a SuSE 9 system.
> This is used as part of an extranet, based on Apache+PHP, and has besides
> an LDAP server no services running. The system has dual Xeon 2GHz and 2GB
> RAM. When migrating all applications from 2 other Postgres 7.2 servers to
> the new one, we had heavy load problems.
> At the beginning there were problems with too much allocated shared
> memory, as the system was swapping 5-10 MB/sec. So we now reconfigured
> shared_buffers to 2048, which should mean 2MB (Linux = one buffer each
> 1kB) per process.

Actually it's probably 8kB each = 16MB, but that's shared between *all* the
backends. You probably want something a fair bit larger than this. Go to
http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php
and read the sections on performance tuning and on the annotated
postgresql.conf.
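For scale, a hedged illustration of what such tuning might look like on a
7.4-era dedicated 2GB server (values are examples, not a recommendation for
this specific box; in 7.4, shared_buffers and effective_cache_size are
counted in 8kB pages):

```
# postgresql.conf -- illustrative values only
shared_buffers = 8192          # 8192 x 8kB = 64MB, shared by ALL backends
sort_mem = 8192                # per sort operation, so keep this modest
effective_cache_size = 100000  # ~800MB; a hint about the OS disk cache
```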

> We corrected higher values of sort_mem and vacuum_mem back to
> sort_mem=512 and vacuum_mem=8192, too, to reduce memory usage, although
> we have kernel.shmall = 1342177280 and kernel.shmmax = 1342177280.

You can probably put vacuum_mem back up.

> Currently I have limited max_connections to 800, because every larger
> value results in a system load of 60+ and at least 20,000 context
> switches.

Might be your shared_buffers being too low, but we'll let someone else
comment.

> My problem is that our Apache produces many more than 800 open
> connections, because we are using > 15 different databases and Apache
> seems to keep open a connection to every database the same httpd process
> has connected to before.
> For now I solved it in a very dirty way: I limited the number and the
> lifetime of each httpd process with these values:
>
> MaxKeepAliveRequests 10
> KeepAliveTimeout 2
> MaxClients 100
> MaxRequestsPerChild 300

You do want to limit MaxRequestsPerChild if you're using persistent
connections. The problem seems to be with your PHP, though.

> We use PHP 4.3.4 and PHP 4.2.3 on the webservers. php.ini says:
>
> [PostgresSQL]
> ; Allow or prevent persistent links.
> pgsql.allow_persistent = On
> ; Maximum number of persistent links. -1 means no limit.
> pgsql.max_persistent = -1
> ; Maximum number of links (persistent+non persistent). -1 means no limit.
> pgsql.max_links = -1

So - you let PHP open persistent connections to PG and have no limit to the
number of different connections open at any one time?
Turn the persistent connections off - you'll probably find your problems go
away.
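A minimal sketch of that change on the PHP side (pg_connect/pg_query/pg_close
are the standard pgsql-extension functions; the connection string and
database name are placeholders, not Andre's actual settings):

```php
<?php
// Before: a persistent link that each Apache child keeps open forever
// $db = pg_pconnect("host=dbhost dbname=app1 user=web");

// After: a plain link, opened per request and released at script end
$db = pg_connect("host=dbhost dbname=app1 user=web");
if (!$db) {
    die("connection failed");
}
$res = pg_query($db, "SELECT 1");
// ... use $res ...
pg_close($db); // optional; PHP closes non-persistent links at shutdown
?>
```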

> We are now running for days with an extremely unstable database
> backend... Are 1,000 processes the natural limit of a Linux-based
> PostgreSQL? Can we realize more efficient connection pooling/reuse?

You probably can pool your connections better, but it's difficult to say
without knowing what your PHP is doing.

--
Richard Huxton
Archonet Ltd

#3Csaba Nagy
nagy@ecircle-ag.com
In reply to: Gellert, Andre (#1)
Re: PHP + Postgres: More than 1000 postmasters produce

Well, it seems it would be better for your application to limit PHP's
persistent connection pool as a quick measure.
Try setting these values to something sensible for you:

; Maximum number of persistent links. -1 means no limit.
pgsql.max_persistent = 20
; Maximum number of links (persistent+non persistent). -1 means no limit.
pgsql.max_links = 30

Or just disable persistent connections altogether, and see whether that
results in better performance:

; Allow or prevent persistent links.
pgsql.allow_persistent = Off

In the long term look for a better connection pooling mechanism; I'm sure
you'll find something for PHP too (I'm not using PHP, maybe somebody else
on the list can help?).
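Short of a real pooler, one script-level stopgap is to reuse at most one
link per database within a single request. A sketch (get_db_link is a
hypothetical helper, not part of the pgsql extension; host and user are
placeholders):

```php
<?php
// Hypothetical helper: hand out at most one link per database for the
// lifetime of this script run, instead of opening a fresh link per call.
function get_db_link($dbname)
{
    static $links = array();
    if (!isset($links[$dbname])) {
        $links[$dbname] = pg_connect("host=dbhost dbname=$dbname user=web");
    }
    return $links[$dbname];
}

$db  = get_db_link("app1");          // opens the link
$db2 = get_db_link("app1");          // reuses the same link
$res = pg_query($db, "SELECT now()");
?>
```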

Cheers,
Csaba.


---------------------------(end of broadcast)---------------------------
TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org

#4scott.marlowe
scott.marlowe@ihs.com
In reply to: Gellert, Andre (#1)
Re: PHP + Postgres: More than 1000 postmasters produce

Have you tested it with regular pg_connect instead of pg_pconnect? While
many people expect pconnects to be faster, when they result in the database
having lots of open idle connections, they often actually make the system
slower than just using plain connects.

You might want to look into some of the connection pooling options out
there that work with PHP, as persistent connections work well only for a
smaller number of hard-working threads, and not so well for a large number
of connections of which only a few are actually hitting the db at the same
time. This becomes especially bad in your situation, where it sounds like
you have multiple databases to connect to, so PHP is keeping multiple
backends alive for each front-end thread.

#5scott.marlowe
scott.marlowe@ihs.com
In reply to: Csaba Nagy (#3)
Re: PHP + Postgres: More than 1000 postmasters produce

On 20 Feb 2004, Csaba Nagy wrote:

> Well, it seems it would be better for your application to limit PHP's
> persistent connection pool as a quick measure.
> Try setting these values to something sensible for you:
>
> ; Maximum number of persistent links. -1 means no limit.
> pgsql.max_persistent = 20

Please note that pgsql.max_persistent is PER Apache/PHP backend process.

http://www.php.net/manual/en/ref.pgsql.php
QUOTE:
pgsql.max_persistent integer

The maximum number of persistent Postgres connections per process.
UNQUOTE:

> ; Maximum number of links (persistent+non persistent). -1 means no limit.
> pgsql.max_links = 30

This one, too, is per process.

> Or just disable persistent connections altogether, and see whether that
> results in better performance:

My recommendation.

#6Gellert, Andre
AGellert@ElectronicPartner.de
In reply to: scott.marlowe (#5)
Re: PHP + Postgres: More than 1000 postmasters produce

Hi Richard,

> Actually it's probably 8kB each = 16MB, but that's shared between *all*
> the backends. You probably want something a fair bit larger than this.
> Go to
> http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php
> and read the sections on performance tuning and on the annotated
> postgresql.conf.

I know this page and I corrected the values according to these hints, but
because of the immense problems I set them back to the same values we use
on the old servers.

> You can probably put vacuum_mem back up.

Because it is not rush hour now and I am leaving the office for 3 days, I
will do this on Tuesday, so that I have a chance to roll back if errors
occur.

> So - you let PHP open persistent connections to PG and have no limit to
> the number of different connections open at any one time?
> Turn the persistent connections off - you'll probably find your problems
> go away.

I have done this before, but if I remember right, the only effect was that
every second dozens of postmaster processes started and closed, because the
connection is thrown away. This helps for now, as I can see on "low
traffic", but when I first tried it, we had heavy load just from starting
this large number of processes. There are 5-10 PHP scripts running per
second, at peaks maybe twice that or more.
Is this a problem for the system, when a postmaster must be started for
every process, or am I wrong?

bye
Andre

#7scott.marlowe
scott.marlowe@ihs.com
In reply to: Gellert, Andre (#6)
Re: PHP + Postgres: More than 1000 postmasters produce

On Fri, 20 Feb 2004, Gellert, Andre wrote:

> I have done this before, but if I remember right, the only effect was
> that every second dozens of postmaster processes started and closed,
> because the connection is thrown away. This helps for now, as I can see
> on "low traffic", but when I first tried it, we had heavy load just from
> starting this large number of processes. There are 5-10 PHP scripts
> running per second, at peaks maybe twice that or more.
> Is this a problem for the system, when a postmaster must be started for
> every process, or am I wrong?

Actually, most of my scripts spend a LOT more time running PHP and queries
than they do initiating connections. If you get a chance, profile your
code (microtime() is useful for this) to see where the time is being
spent. If you see the pg_connect time climbing non-linearly with load,
then you may need to use pg_pconnect. If the time is climbing linearly
with load and is only a tiny fraction of the amount of time it takes to
run the script, then don't worry about it.
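A sketch of that profiling approach (connection string, table name, and the
now_float helper are illustrative; PHP 4's microtime() returns the
"msec sec" string form, hence the explode):

```php
<?php
// Rough timing of connection setup vs. query time, using microtime().
function now_float()
{
    list($usec, $sec) = explode(" ", microtime());
    return ((float)$usec + (float)$sec);
}

$t0 = now_float();
$db = pg_connect("host=dbhost dbname=app1 user=web");
$t1 = now_float();
pg_query($db, "SELECT count(*) FROM some_table");
$t2 = now_float();

printf("connect: %.4fs  query: %.4fs\n", $t1 - $t0, $t2 - $t1);
?>
```

If the connect time stays a tiny, linearly-growing fraction of the total,
plain pg_connect is fine, as described above.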

#8Gavin M. Roy
gmr@ehpg.net
In reply to: Gellert, Andre (#6)
Re: PHP + Postgres: More than 1000 postmasters produce

Your pgsql backend must be able to accommodate the max persistent
connections * the max number of Apache daemons.

The PHP pgsql.max_persistent setting is per Apache daemon.
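Plugging in the numbers from this thread (MaxClients 100, more than 15
databases, pgsql.max_persistent = -1), a rough worst case works out to:

```
100 Apache children (MaxClients) x 15 databases = up to 1500 persistent links
1500 potential links > max_connections = 800    -> failures under load
```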

HTH,

Gavin



#9Gellert, Andre
AGellert@ElectronicPartner.de
In reply to: Gavin M. Roy (#8)
Re: PHP + Postgres: More than 1000 postmasters produce

Hi to all those who helped or enjoyed my English. I waited over the day to
give a summary.

> Your pgsql backend must be able to accommodate the max persistent
> connections * the max number of Apache daemons.
>
> The PHP pgsql.max_persistent setting is per Apache daemon.

All of your answers were correct. I switched from "keep all persistent" to
"throw all away", and the new hardware now behaves very well. ;-)
Before this, on older hardware with Apache on the same machine, it was a
bad idea not to use persistent links, but with the dedicated DB-only server
it runs very well.

I reconfigured my Apache machines back, and now over the whole day I had a
load < 2, mostly < 1.2, even though I was running several maintenance jobs
on the machine.

I guess that I had reached the limit for parallel postgres processes
(1,000) on this hardware, and that the concept of so many databases (> 10)
with a lot of Apache instances (> 100) does not combine well with
connection sharing.

Now I use the following main options:
~~~~
vacuum_mem = 8192
sort_mem = 8192 # 8 MB
max_connections = 800 # even 20 will be enough
shared_buffers = 8192 # 64mb , for 1 GB RAM, total sum for all processes
~~~~
kernel.shmall = 1342177280
kernel.shmmax = 1342177280
~~~~

Thanx for hints,
Andre