idle connection timeout ...

Started by The Hermit Hacker · over 23 years ago · 38 messages · hackers
#1The Hermit Hacker
scrappy@hub.org

g'day ...

just went through the new config files for v7.3, to make sure, but
it doesn't look like we have such ... has anyone looked at adding an 'idle
timeout' for a postgres process? Or am I missing something in the docs?

#2Tom Lane
tgl@sss.pgh.pa.us
In reply to: The Hermit Hacker (#1)
Re: idle connection timeout ...

"Marc G. Fournier" <scrappy@hub.org> writes:

just went through the new config files for v7.3, to make sure, but
it doesn't look like we have such ... has anyone looked at adding an 'idle
timeout' for a postgres process? Or am I missing something in the docs?

Are you looking for the backend to arbitrarily disconnect from a client
that hasn't done anything in X amount of time? Seems to me that has
been proposed and rejected, more than once.

We already have logic that checks for loss of connectivity (see TCP
keepalive option). If the client is *still there*, but has just not
chosen to issue any commands lately, I have a very hard time buying
any argument that it is the backend's province to abort the connection.
That's a recipe for degrading reliability, not improving it.

regards, tom lane

#3The Hermit Hacker
scrappy@hub.org
In reply to: Tom Lane (#2)
Re: idle connection timeout ...

On Thu, 24 Oct 2002, Tom Lane wrote:

"Marc G. Fournier" <scrappy@hub.org> writes:

just went through the new config files for v7.3, to make sure, but
it doesn't look like we have such ... has anyone looked at adding an 'idle
timeout' for a postgres process? Or am I missing something in the docs?

Are you looking for the backend to arbitrarily disconnect from a client
that hasn't done anything in X amount of time? Seems to me that has
been proposed and rejected, more than once.

We already have logic that checks for loss of connectivity (see TCP
keepalive option). If the client is *still there*, but has just not
chosen to issue any commands lately, I have a very hard time buying
any argument that it is the backend's province to abort the connection.
That's a recipe for degrading reliability, not improving it.

Ya, I've thought that one through ... I think what I'm more looking at is
some way of 'limiting' persistent connections, where a server opens n
connections during a spike, which then sit idle indefinitely since it was
one of those 'slashdot effect' kinda spikes ...

Is there any way of the 'master process' *safely/accurately* knowing,
through the shared memory link, the # of connections currently open to a
particular database? So that a limit could be set on a per db basis, say
as an additional arg to pg_hba.conf?

#4Greg Copeland
greg@CopelandConsulting.Net
In reply to: The Hermit Hacker (#3)
Re: idle connection timeout ...

On Fri, 2002-10-25 at 00:52, Marc G. Fournier wrote:

Ya, I've thought that one through ... I think what I'm more looking at is
some way of 'limiting' persistent connections, where a server opens n
connections during a spike, which then sit idle indefinitely since it was
one of those 'slashdot effect' kinda spikes ...

Is there any way of the 'master process' *safely/accurately* knowing,
through the shared memory link, the # of connections currently open to a
particular database? So that a limit could be set on a per db basis, say
as an additional arg to pg_hba.conf?

Well, if your application is smart enough to know it needs to
dynamically add connections, it should also be smart enough to tear them
down after some idle period. I agree with Tom. I think that sounds
like application domain.

Greg
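
[Editorial note: Greg's point — that idle teardown belongs to the application — can be sketched as a tiny client-side pool that drops connections idle past a deadline. This is a hypothetical illustration, not PostgreSQL code; the names `IdlePool`, `connect`, and `max_idle` are invented for the example.]

```python
import time

class IdlePool:
    """Toy client-side pool: tears down connections idle longer than max_idle.

    The connection factory is injected so the sketch stays self-contained;
    a real application would pass its database driver's connect function.
    """
    def __init__(self, connect, max_idle=60.0):
        self.connect = connect      # factory producing new connections
        self.max_idle = max_idle    # seconds before an idle conn is dropped
        self.idle = []              # list of (conn, last_used) pairs

    def acquire(self):
        self.reap()                 # drop stale connections first
        if self.idle:
            conn, _ = self.idle.pop()
            return conn
        return self.connect()       # nothing reusable: open a fresh one

    def release(self, conn):
        self.idle.append((conn, time.time()))

    def reap(self, now=None):
        """Close every idle connection older than max_idle."""
        now = time.time() if now is None else now
        keep = []
        for conn, last_used in self.idle:
            if now - last_used > self.max_idle:
                conn.close()        # tear down the stale connection
            else:
                keep.append((conn, last_used))
        self.idle = keep
```

With something like this on the client side, connections opened during a spike disappear on their own once the spike passes, which is exactly the behavior Marc was asking the server to enforce.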

#5Tom Lane
tgl@sss.pgh.pa.us
In reply to: The Hermit Hacker (#3)
Re: idle connection timeout ...

"Marc G. Fournier" <scrappy@hub.org> writes:

Is there any way of the 'master process' *safely/accurately* knowing,
through the shared memory link, the # of connections currently open to a
particular database? So that a limit could be set on a per db basis, say
as an additional arg to pg_hba.conf?

It would be better/easier to apply the check later on, when a backend is
adding itself to the PGPROC array. It'd be easy enough to count the
number of other backends showing the same DB OID in their PGPROC
entries, and reject if too many.

regards, tom lane
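
[Editorial note: Tom's mechanism — a backend adding itself to the PGPROC array counts the other entries carrying the same database OID and bails out if the count is at the limit — can be modelled in a few lines. The real check would be C inside the server; this Python toy and its names (`proc_array`, `register_backend`) are purely illustrative.]

```python
class TooManyConnections(Exception):
    pass

# Toy model of the PGPROC array: one entry per live backend,
# each recording the OID of the database it is attached to.
proc_array = []

def register_backend(db_oid, per_db_limit):
    """Reject a new backend if its database is already at per_db_limit."""
    same_db = sum(1 for entry_oid in proc_array if entry_oid == db_oid)
    if same_db >= per_db_limit:
        raise TooManyConnections(f"database {db_oid} is at its limit")
    proc_array.append(db_oid)   # success: the backend is now visible to peers
```

Because the count happens while the backend registers itself, the postmaster never needs to track per-database totals; each backend checks its own database's headcount at the one moment it matters.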

#6Tom Lane
tgl@sss.pgh.pa.us
In reply to: Greg Copeland (#4)
Re: idle connection timeout ...

Greg Copeland <greg@CopelandConsulting.Net> writes:

On Fri, 2002-10-25 at 00:52, Marc G. Fournier wrote:

Ya, I've thought that one through ... I think what I'm more looking at is
some way of 'limiting' persistent connections, where a server opens n
connections during a spike, which then sit idle indefinitely since it was
one of those 'slashdot effect' kinda spikes ...

Is there any way of the 'master process' *safely/accurately* knowing,
through the shared memory link, the # of connections currently open to a
particular database? So that a limit could be set on a per db basis, say
as an additional arg to pg_hba.conf?

Well, if your application is smart enough to know it needs to
dynamically add connections, it should also be smart enough to tear them
down after some idle period. I agree with Tom. I think that sounds
like application domain.

Well, there are two different things here. I agree that if an app
is going to use persistent connections, it should be the app's
responsibility to manage them. But a per-database, as opposed to
installation-wide, limit on number of connections seems like a
reasonable idea. Note that the limit would result in new connections
being rejected, not old ones being summarily cut.

regards, tom lane

#7Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#6)
Re: idle connection timeout ...

Tom Lane wrote:

Greg Copeland <greg@CopelandConsulting.Net> writes:

On Fri, 2002-10-25 at 00:52, Marc G. Fournier wrote:

Ya, I've thought that one through ... I think what I'm more looking at is
some way of 'limiting' persistent connections, where a server opens n
connections during a spike, which then sit idle indefinitely since it was
one of those 'slashdot effect' kinda spikes ...

Is there any way of the 'master process' *safely/accurately* knowing,
through the shared memory link, the # of connections currently open to a
particular database? So that a limit could be set on a per db basis, say
as an additional arg to pg_hba.conf?

Well, if your application is smart enough to know it needs to
dynamically add connections, it should also be smart enough to tear them
down after some idle period. I agree with Tom. I think that sounds
like application domain.

Well, there are two different things here. I agree that if an app
is going to use persistent connections, it should be the app's
responsibility to manage them. But a per-database, as opposed to
installation-wide, limit on number of connections seems like a
reasonable idea. Note that the limit would result in new connections
being rejected, not old ones being summarily cut.

But then the app is going to keep trying to connect over and over unless
it knows something about why it can't connect.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
#8Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#7)
Re: idle connection timeout ...

Bruce Momjian <pgman@candle.pha.pa.us> writes:

Well, there are two different things here. I agree that if an app
is going to use persistent connections, it should be the app's
responsibility to manage them. But a per-database, as opposed to
installation-wide, limit on number of connections seems like a
reasonable idea. Note that the limit would result in new connections
being rejected, not old ones being summarily cut.

But then the app is going to keep trying to connect over and over unless
it knows something about why it can't connect.

So? If it hits the installation-wide limit, you'll have the same
problem; and at that point the (presumably runaway) app would have
sucked up all the connections, denying service to other apps using other
databases. I think Marc's point here is to limit his exposure to
misbehavior of any one client app, in a database server that is serving
multiple clients using multiple databases.

It occurs to me that a per-user connection limit is going to be the next
thing he asks for ;-). We could implement that too, if we wanted.
(Not sure whether PGPROC stores the user id, but it easily could.)

regards, tom lane

#9Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#8)
Re: idle connection timeout ...

Tom Lane wrote:

Bruce Momjian <pgman@candle.pha.pa.us> writes:

Well, there are two different things here. I agree that if an app
is going to use persistent connections, it should be the app's
responsibility to manage them. But a per-database, as opposed to
installation-wide, limit on number of connections seems like a
reasonable idea. Note that the limit would result in new connections
being rejected, not old ones being summarily cut.

But then the app is going to keep trying to connect over and over unless
it knows something about why it can't connect.

So? If it hits the installation-wide limit, you'll have the same
problem; and at that point the (presumably runaway) app would have
sucked up all the connections, denying service to other apps using other
databases. I think Marc's point here is to limit his exposure to
misbehavior of any one client app, in a database server that is serving
multiple clients using multiple databases.

What I am saying is that using the backend to throttle per-db
connections may not work too well because they will just keep retrying.
I realize that the total limit can be hit too, but I assume that limit
is set so it will not be hit (it's a resource tradeoff), while the
per-db limit is there to try to throttle back the persistent
connections.

Basically, total connections is to be set larger than you think you will
ever need, while you expect per-db to be hit, and if something keeps
trying to connect and failing, we may get very bad connection
performance for other backends. This is where doing the limiting on the
persistent connection end would be a better solution.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
#10Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#9)
Re: idle connection timeout ...

Bruce Momjian <pgman@candle.pha.pa.us> writes:

Basically, total connections is to be set larger than you think you will
ever need, while you expect per-db to be hit, and if something keeps
trying to connect and failing, we may get very bad connection
performance for other backends.

Hmm, I see your point. A per-db limit *could* be useful even if it's
set high enough that you don't expect it to be hit ... but most likely
people would try to use it in a way that it wouldn't be very efficient
compared to a client-side solution.

regards, tom lane

#11Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#10)
Re: idle connection timeout ...

Tom Lane wrote:

Bruce Momjian <pgman@candle.pha.pa.us> writes:

Basically, total connections is to be set larger than you think you will
ever need, while you expect per-db to be hit, and if something keeps
trying to connect and failing, we may get very bad connection
performance for other backends.

Hmm, I see your point. A per-db limit *could* be useful even if it's
set high enough that you don't expect it to be hit ... but most likely
people would try to use it in a way that it wouldn't be very efficient
compared to a client-side solution.

The only way to do it would be, after a few hits of the limit, to start
delaying the connection rejections so you don't get hammered. It could
be done, but even then, I am not sure if it would be optimal.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
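
[Editorial note: Bruce's idea — after a few hits of the limit, start delaying further rejections so a retry loop can't hammer the server — might look like the sketch below. The thresholds and names (`rejection_delay`, `free_hits`) are invented for illustration.]

```python
# Track consecutive limit hits per database; once a database has been
# rejected a few times in a row, each further rejection is delayed,
# doubling up to a cap, so a tight client retry loop backs off.
rejection_streak = {}

def rejection_delay(db, free_hits=3, step=0.5, cap=8.0):
    """Return how long (seconds) to sleep before rejecting this attempt."""
    streak = rejection_streak.get(db, 0) + 1
    rejection_streak[db] = streak
    if streak <= free_hits:
        return 0.0                  # first few rejections are immediate
    return min(step * 2 ** (streak - free_hits - 1), cap)

def connection_accepted(db):
    rejection_streak[db] = 0        # a successful connect resets the streak
```

Even so, as Bruce notes, this only softens the hammering; the retrying client is still better fixed on its own side.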
#12Michael Paesold
mpaesold@gmx.at
In reply to: Bruce Momjian (#9)
Re: idle connection timeout ...

Tom Lane <tgl@sss.pgh.pa.us> wrote:

Bruce Momjian <pgman@candle.pha.pa.us> writes:

Basically, total connections is to be set larger than you think you will
ever need, while you expect per-db to be hit, and if something keeps
trying to connect and failing, we may get very bad connection
performance for other backends.

Hmm, I see your point. A per-db limit *could* be useful even if it's
set high enough that you don't expect it to be hit ... but most likely
people would try to use it in a way that it wouldn't be very efficient
compared to a client-side solution.

What about a shared database server, where you want to have resource
limits for each database/user? It could be useful in such a case; even
if it is not very efficient, it would be the only way. As a DBA you
don't necessarily have control over the client apps.

Just a thought.

Regards,
Michael

#13The Hermit Hacker
scrappy@hub.org
In reply to: Tom Lane (#8)
Re: idle connection timeout ...

On Fri, 25 Oct 2002, Tom Lane wrote:

Bruce Momjian <pgman@candle.pha.pa.us> writes:

Well, there are two different things here. I agree that if an app
is going to use persistent connections, it should be the app's
responsibility to manage them. But a per-database, as opposed to
installation-wide, limit on number of connections seems like a
reasonable idea. Note that the limit would result in new connections
being rejected, not old ones being summarily cut.

But then the app is going to keep trying to connect over and over unless
it knows something about why it can't connect.

So? If it hits the installation-wide limit, you'll have the same
problem; and at that point the (presumably runaway) app would have
sucked up all the connections, denying service to other apps using other
databases. I think Marc's point here is to limit his exposure to
misbehavior of any one client app, in a database server that is serving
multiple clients using multiple databases.

It occurs to me that a per-user connection limit is going to be the next
thing he asks for ;-)

Actually, that sounds like a good idea, but I have been trying to leave
(and move) multiple-client auth to be within the database/application
itself ...

#14The Hermit Hacker
scrappy@hub.org
In reply to: Bruce Momjian (#9)
Re: idle connection timeout ...

On Fri, 25 Oct 2002, Bruce Momjian wrote:

Tom Lane wrote:

Bruce Momjian <pgman@candle.pha.pa.us> writes:

Well, there are two different things here. I agree that if an app
is going to use persistent connections, it should be the app's
responsibility to manage them. But a per-database, as opposed to
installation-wide, limit on number of connections seems like a
reasonable idea. Note that the limit would result in new connections
being rejected, not old ones being summarily cut.

But then the app is going to keep trying to connect over and over unless
it knows something about why it can't connect.

So? If it hits the installation-wide limit, you'll have the same
problem; and at that point the (presumably runaway) app would have
sucked up all the connections, denying service to other apps using other
databases. I think Marc's point here is to limit his exposure to
misbehavior of any one client app, in a database server that is serving
multiple clients using multiple databases.

What I am saying is that using the backend to throttle per-db
connections may not work too well because they will just keep retrying.

Okay, but also bear in mind that a lot of the time, when I'm bringing up
stuff like this, I'm coming from the "ISP" perspective ... if I have one
client that is using up all 512 connections on the server, none of my
other clients are getting any connections ...

Yes, the client should have tested his code better, but I want to be able
to put 'limits' to make it so that everyone isn't affected by one's mistake
...

I realize that the total limit can be hit too, but I assume that limit
is set so it will not be hit (it's a resource tradeoff), while the
per-db limit is there to try to throttle back the persistent
connections.

Nope, the per-db limit is there to try and eliminate the impact of one
client/application from essentially creating a DoS for all other
database/clients ...

Basically, total connections is to be set larger than you think you will
ever need, while you expect per-db to be hit, and if something keeps
trying to connect and failing, we may get very bad connection
performance for other backends. This is where doing the limiting on the
persistent connection end would be a better solution.

Agreed, but unless you have control over both the client and server sides,
it's not possible ...

#15Bruce Momjian
bruce@momjian.us
In reply to: The Hermit Hacker (#14)
Re: idle connection timeout ...

Yes, my comments related to using db/user limits to control the number
of persistent connections. From an ISP perspective, I can see value in
user/db limits.

---------------------------------------------------------------------------

Marc G. Fournier wrote:

On Fri, 25 Oct 2002, Bruce Momjian wrote:

Tom Lane wrote:

Bruce Momjian <pgman@candle.pha.pa.us> writes:

Well, there are two different things here. I agree that if an app
is going to use persistent connections, it should be the app's
responsibility to manage them. But a per-database, as opposed to
installation-wide, limit on number of connections seems like a
reasonable idea. Note that the limit would result in new connections
being rejected, not old ones being summarily cut.

But then the app is going to keep trying to connect over and over unless
it knows something about why it can't connect.

So? If it hits the installation-wide limit, you'll have the same
problem; and at that point the (presumably runaway) app would have
sucked up all the connections, denying service to other apps using other
databases. I think Marc's point here is to limit his exposure to
misbehavior of any one client app, in a database server that is serving
multiple clients using multiple databases.

What I am saying is that using the backend to throttle per-db
connections may not work too well because they will just keep retrying.

Okay, but also bear in mind that a lot of the time, when I'm bringing up
stuff like this, I'm coming from the "ISP" perspective ... if I have one
client that is using up all 512 connections on the server, none of my
other clients are getting any connections ...

Yes, the client should have tested his code better, but I want to be able
to put 'limits' to make it so that everyone isn't affected by one's mistake
...

I realize that the total limit can be hit too, but I assume that limit
is set so it will not be hit (it's a resource tradeoff), while the
per-db limit is there to try to throttle back the persistent
connections.

Nope, the per-db limit is there to try and eliminate the impact of one
client/application from essentially creating a DoS for all other
database/clients ...

Basically, total connections is to be set larger than you think you will
ever need, while you expect per-db to be hit, and if something keeps
trying to connect and failing, we may get very bad connection
performance for other backends. This is where doing the limiting on the
persistent connection end would be a better solution.

Agreed, but unless you have control over both the client and server sides,
it's not possible ...


-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
#16The Hermit Hacker
scrappy@hub.org
In reply to: Tom Lane (#10)
Re: idle connection timeout ...

On Fri, 25 Oct 2002, Tom Lane wrote:

Bruce Momjian <pgman@candle.pha.pa.us> writes:

Basically, total connections is to be set larger than you think you will
ever need, while you expect per-db to be hit, and if something keeps
trying to connect and failing, we may get very bad connection
performance for other backends.

Hmm, I see your point. A per-db limit *could* be useful even if it's
set high enough that you don't expect it to be hit ... but most likely
people would try to use it in a way that it wouldn't be very efficient
compared to a client-side solution.

As mentioned in my response to Bruce ... in an ISP situation, a DoS attack
against the database by a single client can be very easy to accomplish in
our current situation ... all I need to do is set up a Perl script that
opens all the connections I can to the database I have access to until all
are used up, and nobody else has access to *their* databases ...

#17The Hermit Hacker
scrappy@hub.org
In reply to: Bruce Momjian (#11)
Re: idle connection timeout ...

On Fri, 25 Oct 2002, Bruce Momjian wrote:

Tom Lane wrote:

Bruce Momjian <pgman@candle.pha.pa.us> writes:

Basically, total connections is to be set larger than you think you will
ever need, while you expect per-db to be hit, and if something keeps
trying to connect and failing, we may get very bad connection
performance for other backends.

Hmm, I see your point. A per-db limit *could* be useful even if it's
set high enough that you don't expect it to be hit ... but most likely
people would try to use it in a way that it wouldn't be very efficient
compared to a client-side solution.

The only way to do it would be, after a few hits of the limit, to start
delaying the connection rejections so you don't get hammered. It could
be done, but even then, I am not sure if it would be optimal.

Note that I don't believe there is an "optimal solution" for this ... but
in an environment where there are several clients connecting to several
different databases, the ability for one client to starve out the others
is actually very real ...

#18Andrew Sullivan
andrew@libertyrms.info
In reply to: Tom Lane (#8)
Re: idle connection timeout ...

On Fri, Oct 25, 2002 at 11:02:48AM -0400, Tom Lane wrote:

So? If it hits the installation-wide limit, you'll have the same
problem; and at that point the (presumably runaway) app would have
sucked up all the connections, denying service to other apps using other
databases. I think Marc's point here is to limit his exposure to
misbehavior of any one client app, in a database server that is serving
multiple clients using multiple databases.

That would indeed be a useful item. The only way to avoid such
exposure right now is to run another back end.

A

-- 
----
Andrew Sullivan                         204-4141 Yonge Street
Liberty RMS                           Toronto, Ontario Canada
<andrew@libertyrms.info>                              M2P 2A8
                                         +1 416 646 3304 x110
#19Bruce Momjian
bruce@momjian.us
In reply to: Andrew Sullivan (#18)
Re: idle connection timeout ...

Andrew Sullivan wrote:

On Fri, Oct 25, 2002 at 11:02:48AM -0400, Tom Lane wrote:

So? If it hits the installation-wide limit, you'll have the same
problem; and at that point the (presumably runaway) app would have
sucked up all the connections, denying service to other apps using other
databases. I think Marc's point here is to limit his exposure to
misbehavior of any one client app, in a database server that is serving
multiple clients using multiple databases.

That would indeed be a useful item. The only way to avoid such
exposure right now is to run another back end.

Added to TODO:

* Allow limits on per-db/user connections

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
#20Mike Benoit
mikeb@netnation.com
In reply to: Bruce Momjian (#15)
Re: idle connection timeout ...

On Fri, 2002-10-25 at 10:31, Bruce Momjian wrote:

Yes, my comments related to using db/user limits to control the number
of persistent connections. From an ISP perspective, I can see value in
user/db limits.

Yes, this would be amazingly useful. I work for a web hosting provider,
and all too often a single customer creates a flawed script which
consumes all the DB connections, denying access to the rest of our
customers.

Being able to set this per-DB connection limit in Postgres itself,
without having to restart the backend, would make this feature very
nice.

--
Best Regards,

Mike Benoit
NetNation Communication Inc.
Systems Engineer
Tel: 604-684-6892 or 888-983-6600
---------------------------------------

Disclaimer: Opinions expressed here are my own and not
necessarily those of my employer

#21Mike Mascari
mascarm@mascari.com
In reply to: Bruce Momjian (#19)
#22Bruce Momjian
bruce@momjian.us
In reply to: Mike Mascari (#21)
#23Tom Lane
tgl@sss.pgh.pa.us
In reply to: Mike Mascari (#21)
#24Robert Treat
xzilla@users.sourceforge.net
In reply to: Tom Lane (#23)
#25Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Treat (#24)
#26Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#25)
#27Robert Treat
xzilla@users.sourceforge.net
In reply to: Bruce Momjian (#26)
#28Bruce Momjian
bruce@momjian.us
In reply to: Robert Treat (#27)
#29Bruno Wolff III
bruno@wolff.to
In reply to: Bruce Momjian (#28)
#30Robert Treat
xzilla@users.sourceforge.net
In reply to: Bruce Momjian (#28)
#31Bruce Momjian
bruce@momjian.us
In reply to: Bruno Wolff III (#29)
#32The Hermit Hacker
scrappy@hub.org
In reply to: Bruce Momjian (#22)
#33Bruce Momjian
bruce@momjian.us
In reply to: The Hermit Hacker (#32)
#34Alvaro Herrera
alvherre@dcc.uchile.cl
In reply to: Bruce Momjian (#33)
#35Bruce Momjian
bruce@momjian.us
In reply to: Alvaro Herrera (#34)
#36Alvaro Herrera
alvherre@dcc.uchile.cl
In reply to: Bruce Momjian (#35)
#37Karel Zak
zakkr@zf.jcu.cz
In reply to: Mike Mascari (#21)
#38Mike Mascari
mascarm@mascari.com
In reply to: Bruce Momjian (#19)