large number of files open...

Started by Thomas F. O'Connell · about 24 years ago · 6 messages · general
#1Thomas F. O'Connell
tfo@monsterlabs.com

i'm running postgres 7.1.3 in a production environment. the database
itself contains on the order of 100 tables, including some complex
triggers, functions, and views. a few tables (on the order of 10) that
are frequently accessed have on the order of 100,000 rows.

every now and then, traffic on the server, which is accessed publicly
via mod_perl (Apache::DBI) causes the machine itself to hit the kernel
hard limit of number of files open: 8191.

this, unfortunately, crashes the machine. in a production environment of
this magnitude, is that a reasonable number of files to expect postgres
to need at any given time? is there any documentation anywhere on what
the number of open files depends on?

-tfo

#2Tom Lane
tgl@sss.pgh.pa.us
In reply to: Thomas F. O'Connell (#1)
Re: large number of files open...

"Thomas F. O'Connell" <tfo@monsterlabs.com> writes:

> i'm running postgres 7.1.3 in a production environment. [snip]
> every now and then, traffic on the server, which is accessed publicly
> via mod_perl (Apache::DBI) causes the machine itself to hit the kernel
> hard limit of number of files open: 8191.

What OS is this?

You can reconfigure the kernel filetable larger in all Unixen that I
know of, but it's more painful in some than others. Unfortunately,
some systems' sysconf() reports a larger _SC_OPEN_MAX value than the
kernel can realistically support over a large number of processes.
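[Editor's aside, not from the original thread: you can see what sysconf(_SC_OPEN_MAX) reports on a given box from the shell; getconf is the standard POSIX front end for sysconf values. A quick sketch:]

```shell
# What sysconf(_SC_OPEN_MAX) reports on this system; Postgres derives
# its default per-backend open-file limit from this value.
getconf OPEN_MAX

# The shell's own per-process soft limit, for comparison.
ulimit -n
```

If getconf prints a number far larger than the kernel file table can actually sustain across all backends, that is exactly the overly-optimistic case Tom describes.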

> this, unfortunately, crashes the machine. in a production environment of
> this magnitude, is that a reasonable number of files to expect postgres
> to need at any given time? is there any documentation anywhere on what
> the number of open files depends on?

If left alone, Postgres could conceivably open every file in your
database in each backend process. There is a per-backend limit on
number of open files, but it's taken from the aforesaid sysconf()
result; if your kernel reports an overly large sysconf(_SC_OPEN_MAX)
then you *will* have trouble.

In 7.2 there is a config parameter max_files_per_process that can be
set to limit the per-backend file usage to something less than what
sysconf claims. This does not exist in 7.1, but you could hack up
pg_nofile() in src/backend/storage/file/fd.c to enforce a suitable
limit.
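[Editor's aside: in 7.2 and later, the setting lives in postgresql.conf; the value here is illustrative only, chosen per the sizing advice Tom gives next, not a recommendation.]

```
# postgresql.conf (7.2 and later) -- illustrative value
max_files_per_process = 50
```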

In any case you probably don't want to set the per-backend limit much
less than maybe 40-50 files. If that times the allowed number of
backends is more than, or even real close to, your kernel filetable
size, you'd best increase the filetable size.
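[Editor's aside: a worked version of that sizing check, with hypothetical numbers rather than anything from the thread:]

```shell
# Hypothetical values -- adjust to your own configuration.
max_backends=64          # e.g. postmaster started with -N 64
files_per_backend=48     # within the suggested 40-50 range
kernel_filetable=8191    # the hard limit hit in the original report

worst_case=$((max_backends * files_per_backend))
echo "worst case: $worst_case open files (kernel table: $kernel_filetable)"
```

Here 64 × 48 = 3072 leaves comfortable headroom under 8191; at around 170 backends the product would approach the table size, and by Tom's rule the file table should then be enlarged.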

regards, tom lane

#3Neil Conway
neilc@samurai.com
In reply to: Tom Lane (#2)
Re: large number of files open...

On Wed, 2002-01-16 at 16:24, Tom Lane wrote:

> In any case you probably don't want to set the per-backend limit much
> less than maybe 40-50 files. If that times the allowed number of
> backends is more than, or even real close to, your kernel filetable
> size, you'd best increase the filetable size.

What are the implications of raising this limit (on a typical UNIX
variant, such as Linux 2.4 or FreeBSD)?

Just curious...

TIA

--
Neil Conway <neilconway@rogers.com>
PGP Key ID: DB3C29FC

#4Steve Wolfe
steve@iboats.com
In reply to: Thomas F. O'Connell (#1)
Re: large number of files open...

> i'm running postgres 7.1.3 in a production environment. the database
> itself contains on the order of 100 tables, including some complex
> triggers, functions, and views. a few tables (on the order of 10) that
> are frequently accessed have on the order of 100,000 rows.
>
> every now and then, traffic on the server, which is accessed publicly
> via mod_perl (Apache::DBI) causes the machine itself to hit the kernel
> hard limit of number of files open: 8191.
>
> this, unfortunately, crashes the machine. in a production environment of
> this magnitude, is that a reasonable number of files to expect postgres
> to need at any given time? is there any documentation anywhere on what
> the number of open files depends on?

My first recommendation would be to run Postgres on a separate machine
if it's being hit that hard, but hey, maybe you just don't feel like it.
; )

Our web servers handle a very large number of virtual domains, so they
open up a *lot* of log files, and have (at times) hit the same problem
you're running into. It used to be necessary to recompile the kernel to
raise the limits, but that ain't so any more, luckily. With 2.4 kernels,
you can do something like this:

echo '16384' > /proc/sys/fs/file-max
echo '65536' > /proc/sys/fs/inode-max

or, in /etc/sysctl.conf,

fs.file-max = 16384
fs.inode-max = 65536

then, /sbin/sysctl -p

Remember that inode-max needs to be at least twice file-max, and if I
recall, at least three times higher is recommended.
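[Editor's aside: to see how close the box actually is to the limit before and after raising it, 2.4 kernels also expose current usage read-only; on 2.4, file-nr reports allocated handles, free handles, and the maximum:]

```shell
# Current kernel file-table state: allocated, free, maximum.
cat /proc/sys/fs/file-nr

# Handles actually in use = allocated - free.
awk '{ printf "file handles in use: %d of %d\n", $1 - $2, $3 }' \
    /proc/sys/fs/file-nr
```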

steve

#5Joseph Shraibman
jks@selectacast.net
In reply to: Thomas F. O'Connell (#1)
Re: large number of files open...

If this is a linux system
echo 16384 > /proc/sys/fs/file-max

Thomas F. O'Connell wrote:

> i'm running postgres 7.1.3 in a production environment. the database
> itself contains on the order of 100 tables, including some complex
> triggers, functions, and views. a few tables (on the order of 10) that
> are frequently accessed have on the order of 100,000 rows.
>
> every now and then, traffic on the server, which is accessed publicly
> via mod_perl (Apache::DBI) causes the machine itself to hit the kernel
> hard limit of number of files open: 8191.
>
> this, unfortunately, crashes the machine. in a production environment
> of this magnitude, is that a reasonable number of files to expect
> postgres to need at any given time? is there any documentation
> anywhere on what the number of open files depends on?

-tfo

--
Joseph Shraibman
jks@selectacast.net
Increase signal to noise ratio. http://xis.xtenit.com

#6Justin Clift
justin@postgresql.org
In reply to: Steve Wolfe (#4)
Re: large number of files open...

Hi Steve,

Steve Wolfe wrote:
<snip>

> Our web servers handle a very large number of virtual domains, so they
> open up a *lot* of log files, and have (at times) hit the same problem
> you're running into. It used to be necessary to recompile the kernel to
> raise the limits, but that ain't so any more, luckily. With 2.4 kernels,
> you can do something like this:

It might be worthwhile taking a look at the program which Matthew
Hagerty wrote called "pgLOGd".

It's designed to get around the need to open heaps of log files, instead
it pipes all the log entries to a single daemon which logs the entries
into a database for later processing.

Not sure how it would work in a virtual domain environment though. :)

http://www.digitalstratum.com/pglogd/

Regards and best wishes,

Justin Clift

