table overflow question

Started by Williams, Travis L, NPONS · over 23 years ago · 4 messages · general

All,
I have a script that forks off x number of processes. Each one opens a connection to the db, polls a different server, and polls 200+ MIBs. I am doing an update after every poll (so 1 poll out of 200, then an update, then another poll and update, and so on until 200). So I have 5 db connections, each doing an update to the same table but to different rows. This works OK, and it still works OK if I go up to 10 connections at once, but with any more than that I get the error "Can't Connect to DB: FATAL 1: cannot open pg_class: File table overflow". Is this to be expected, or is there some performance tweak I can apply? BTW, I am running on HP-UX 11.11, dual 550, 512M.

Thanks,

Travis

#2 Martijn van Oosterhout
kleptog@svana.org
In reply to: Williams, Travis L, NPONS (#1)
Re: table overflow question

On Fri, Oct 18, 2002 at 10:41:39PM -0400, Williams, Travis L, NPONS wrote:

All,

I have a script that will fork off x number of process ... each one
of these opens a connection to the db polls a different server and
polls 200+ mibs. I am doing a update after every poll (so 1 poll
(out of 200) then a update.. then another poll and update and so on
till 200). So I have 5 db connections each doing a update to the
same table but to different rows. Now this works ok. and it work
ok if I go up to 10 connections at once.. but any more than that and
I get the error "Can't Connect to DB: FATAL 1: cannot open pg_class:
File table overflow" now is this to be expected.. or is there some
performace tweak I can add.. btw I am running on a hpux 11.11 dual
550 512M.

Looks like you've run into an open file limit. If you're using Linux you
should look in /proc/sys/fs to make sure you can actually open the number of
files you need. You should estimate at least 40 files per server.

I think file-max is the one you want.
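[Editor's note: a quick sketch of checking the limits Martijn mentions. The figure of ~40 files per backend is his rule of thumb, not a PostgreSQL constant, and the `backends` count here is made up for illustration. `/proc/sys/fs/file-max` is Linux-only; HP-UX has no such file.]

```python
# Compare the per-process open-file limit (and, on Linux, the
# system-wide file-max) against an estimate of ~40 files per backend.
import resource

backends = 15                    # planned number of DB connections
estimated_need = backends * 40   # Martijn's ~40 files per backend

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("per-process limit:", soft, "estimated need:", estimated_need)

try:
    with open("/proc/sys/fs/file-max") as f:   # Linux only
        print("system-wide file-max:", f.read().strip())
except OSError:
    pass   # e.g. HP-UX exposes this via kernel parameters instead
```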

--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/


There are 10 kinds of people in the world, those that can do binary
arithmetic and those that can't.

#3 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Martijn van Oosterhout (#2)
Re: table overflow question

Martijn van Oosterhout <kleptog@svana.org> writes:

Looks like you've run into a open file limit. If you're using linux you
should look in /proc/sys/fs to make sure you can actually open the number of
files you need. You should estimate at least 40 files per server.
I think file-max is the one you want.

He said he was using HPUX. On HPUX 10.20, the kernel parameters NFILE
and NINODE would be the things to bump up; I suspect 11 is the same.

The other direction to attack it from is to reduce PG's parameter
MAX_FILES_PER_PROCESS, but if you have to set that lower than 100
or so then you'd be better advised to fix the kernel.
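[Editor's note: the rough sizing arithmetic behind Tom's advice, as a worked example. Each backend can hold up to MAX_FILES_PER_PROCESS descriptors, so the kernel-wide file table (NFILE on HP-UX) must cover all backends plus everything else on the box. The backend count and headroom figure here are illustrative assumptions, not values from the thread.]

```python
# Sizing sketch: kernel file-table entries needed for PostgreSQL
# backends, using Tom's suggested floor for MAX_FILES_PER_PROCESS.
max_files_per_process = 100   # don't go lower than ~100, per Tom
backends = 15                 # assumed number of concurrent connections

pg_descriptors = backends * max_files_per_process
other_processes = 1000        # assumed headroom for the rest of the system
nfile_needed = pg_descriptors + other_processes
print("NFILE should be at least:", nfile_needed)
```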

regards, tom lane

#4 Williams, Travis L, NPONS
In reply to: Tom Lane (#3)
Re: table overflow question

Thanks all,
I went reading about nfile and it does look like it is the
problem. I could rework the script to just dump to a flat file (which I
did in the past) and have another script read it in after the fact with
only one db connection, but that's kind of inefficient.

Travis
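[Editor's note: a sketch of the flat-file fallback Travis describes, with assumed names and row layout. Pollers append results to a spool file, and a single loader process later reads it back and applies the updates over one DB connection, e.g. via COPY into a staging table; the DB side is stubbed here.]

```python
# Poller side writes tab-separated rows; loader side reads them back
# in one pass. File path, column layout, and the value 42 are made up.
import csv
import os
import tempfile

spool = os.path.join(tempfile.mkdtemp(), "poll_results.tsv")

# Poller side: append one row per (server, mib, value) poll result.
with open(spool, "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    for server in ("srv1", "srv2"):
        for mib in range(3):
            writer.writerow([server, mib, 42])

# Loader side: one pass over the file, one DB connection (stubbed).
with open(spool, newline="") as f:
    rows = list(csv.reader(f, delimiter="\t"))
print(len(rows), "rows ready for a single COPY/UPDATE pass")
```

The trade-off Travis notes is real: this serializes the database work, but it needs only one connection's worth of file descriptors.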

-----Original Message-----
From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
Sent: Friday, October 18, 2002 10:42 PM
To: Martijn van Oosterhout
Cc: Williams, Travis L, NPONS; pgsql-general@postgresql.org
Subject: Re: [GENERAL] table overflow question

Martijn van Oosterhout <kleptog@svana.org> writes:

Looks like you've run into a open file limit. If you're using linux you
should look in /proc/sys/fs to make sure you can actually open the
number of files you need. You should estimate at least 40 files per server.
I think file-max is the one you want.

He said he was using HPUX. On HPUX 10.20, the kernel parameters NFILE
and NINODE would be the things to bump up; I suspect 11 is the same.

The other direction to attack it from is to reduce PG's parameter
MAX_FILES_PER_PROCESS, but if you have to set that lower than 100
or so then you'd be better advised to fix the kernel.

regards, tom lane