Optimal configuration to eliminate "out of file descriptors" error
I'm trying to figure out what the optimal Postgres configuration would
be for my server (with 200 connecting clients, even though I'd really
like to get it up to 500).
I've got a 700 MHz eMac running Mac OS X 10.3.2 (Panther) with 512 MB of
RAM. I've messed around with some settings but I'm still getting an
occasional "out of file descriptors" error, especially when performing a
VACUUM. Like so...
2004-04-13 23:30:05 LOG: out of file descriptors: Too many open files;
release and retry
CONTEXT: writing block 1 of relation 67553/16604
I'm going to do my best to provide my current system settings that
relate to Postgres. It would be great if someone could tell me where
I'm way off, and get me on the right track. I'm under the impression
that my machine should be able to handle 200 to 500 client connections.
If that's not the case, I'm fine with getting new hardware; I just
don't want to take that step willy-nilly. Thanks!
1. Snipped from postgresql.conf (the only three settings I've changed)
max_connections = 200
...
shared_buffers = 2000
...
max_files_per_process = 100
2. Snipped from /etc/profile
ulimit -u 512
3. Snipped from /etc/rc
sysctl -w kern.sysv.shmmax=167772160
sysctl -w kern.sysv.shmmin=1
sysctl -w kern.sysv.shmmni=32
sysctl -w kern.sysv.shmseg=8
sysctl -w kern.sysv.shmall=65536
4. Snipped from /etc/sysctl.conf
# Turn up maxproc
kern.maxproc=2048
# Turn up the maxproc per user
kern.maxprocperuid=512
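(For what it's worth, none of the settings above touches the kernel's open-file limits, which is what the error message is actually complaining about. On Mac OS X the relevant sysctl names are, as far as I know, kern.maxfiles for the system-wide file table and kern.maxfilesperproc for a single process. The values below are purely illustrative assumptions, not recommendations:)

```
# /etc/sysctl.conf -- illustrative values only; the kern.maxfiles*
# names are the Mac OS X ones, check your system before using them
kern.maxfiles=65536
kern.maxfilesperproc=8192
```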
I have not received a response yet on this. Should I try another
postgres list or do I need to provide more information/clarity? Thanks.
On Apr 14, 2004, at 12:48 PM, Joe Lester wrote:
[original message quoted in full; snipped]
On Thu, Apr 15, 2004 at 13:27:27 -0500,
Joe Lester <joe_lester@sweetwater.com> wrote:
I have not received a response yet on this. Should I try another
postgres list or do I need to provide more information/clarity? Thanks.
The performance list would be the natural place to get information
on optimal configurations. It sounds like what is really happening is that
you are hitting an OS limit on the number of open files. You should be
able to increase that limit. There have also been discussions within the
last several months about Postgres doing a better job of telling when it
has opened too many files. I don't remember much about the details of the
change or which version it was applied to.
Bruno Wolff III <bruno@wolff.to> writes:
It sounds like what is really happening is that
you are hitting an OS limit on the number of open files. You should be
able to increase that limit. There have also been some discussions about
postgres doing a better job of telling when it has opened too many files
within the last several months. I don't remember much about the details
of the change or which version they were applied to.
If I recall that change correctly, it was prompted by the discovery that
on OS X we were drastically underestimating the number of open file
descriptors sucked up per backend. (OS X treats each semaphore as an
open file, so there are about max_connections open files per process
that we weren't accounting for.) I think it is just in CVS tip and not
yet in any released version.
For the moment the answer is to size your kernel file table on the
assumption that you need about max_connections * (max_files_per_process
+ max_connections) filetable slots just for Postgres, plus whatever you
want available for the rest of the system.
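Plugging this thread's numbers into that rule (max_connections = 200, max_files_per_process = 100) gives a sense of the scale involved. The arithmetic below is just the formula worked out, not a tuning recommendation:

```shell
# Tom's sizing rule: slots ~= max_connections * (max_files_per_process + max_connections)
max_connections=200
max_files_per_process=100
slots=$(( max_connections * (max_files_per_process + max_connections) ))
echo "$slots"    # 200 * (100 + 200) = 60000 file-table slots for Postgres alone
```

At 500 connections the same formula would demand 500 * (100 + 500) = 300,000 slots, which makes it clear why the kernel file table needs to be sized deliberately on this platform.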
regards, tom lane
Yeah. It was my shell that was the bottleneck. What did the trick was
adding this line in /etc/profile:
ulimit -n 8000
Thanks!
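For anyone hitting the same wall, a quick sanity check from a fresh login shell (the sysctl name is the Mac OS X one from this thread, so that line simply prints nothing elsewhere):

```shell
# Per-process open-file limit the shell will pass on to child processes
ulimit -n
# Kernel-wide file table size (Mac OS X name; harmless no-op on other systems)
sysctl kern.maxfiles 2>/dev/null || true
```

If `ulimit -n` in a new shell still shows the old value, the line in /etc/profile isn't being sourced by your login shell.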