idle connection timeout ...
g'day ...
just went through the new config files for v7.3, to make sure, but it
doesn't look like we have such ... has anyone looked at adding an 'idle
timeout' for a postgres process? Or am I missing something in the docs?
"Marc G. Fournier" <scrappy@hub.org> writes:
> just went through the new config files for v7.3, to make sure, but it
> doesn't look like we have such ... has anyone looked at adding an 'idle
> timeout' for a postgres process? Or am I missing something in the docs?
Are you looking for the backend to arbitrarily disconnect from a client
that hasn't done anything in X amount of time? Seems to me that has
been proposed and rejected, more than once.
We already have logic that checks for loss of connectivity (see TCP
keepalive option). If the client is *still there*, but has just not
chosen to issue any commands lately, I have a very hard time buying
any argument that it is the backend's province to abort the connection.
That's a recipe for degrading reliability, not improving it.
regards, tom lane
On Thu, 24 Oct 2002, Tom Lane wrote:
"Marc G. Fournier" <scrappy@hub.org> writes:
just went through the new config files for v7.3, to make sure, but
it doens't look like we have such ... has anyone looked at adding a 'idle
timeout' for a postgres process? Or am I missing something in the docs?Are you looking for the backend to arbitrarily disconnect from a client
that hasn't done anything in X amount of time? Seems to me that has
been proposed and rejected, more than once.We already have logic that checks for loss of connectivity (see TCP
keepalive option). If the client is *still there*, but has just not
chosen to issue any commands lately, I have a very hard time buying
any argument that it is the backend's province to abort the connection.
That's a recipe for degrading reliability, not improving it.
Ya, I've thought that one through ... I think what I'm more looking at is
some way of 'limiting' persistent connections, where a server opens n
connections during a spike, which then sit idle indefinitely since it was
one of those 'slashdot effect' kinda spikes ...
Is there any way of the 'master process' *safely/accurately* knowing,
through the shared memory link, the # of connections currently open to a
particular database? So that a limit could be set on a per db basis, say
as an additional arg to pg_hba.conf?
On Fri, 2002-10-25 at 00:52, Marc G. Fournier wrote:
> Ya, I've thought that one through ... I think what I'm more looking at is
> some way of 'limiting' persistent connections, where a server opens n
> connections during a spike, which then sit idle indefinitely since it was
> one of those 'slashdot effect' kinda spikes ...
>
> Is there any way of the 'master process' *safely/accurately* knowing,
> through the shared memory link, the # of connections currently open to a
> particular database? So that a limit could be set on a per db basis, say
> as an additional arg to pg_hba.conf?
Well, if your application is smart enough to know it needs to
dynamically add connections, it should also be smart enough to tear them
down after some idle period. I agree with Tom. I think that sounds
like application domain.
Greg
"Marc G. Fournier" <scrappy@hub.org> writes:
> Is there any way of the 'master process' *safely/accurately* knowing,
> through the shared memory link, the # of connections currently open to a
> particular database? So that a limit could be set on a per db basis, say
> as an additional arg to pg_hba.conf?
It would be better/easier to apply the check later on, when a backend is
adding itself to the PGPROC array. It'd be easy enough to count the
number of other backends showing the same DB OID in their PGPROC
entries, and reject if too many.
regards, tom lane
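For reference, the per-database count described here is essentially what
the statistics collector already exposes at the SQL level (assuming the
collector is enabled, since it is what populates this view). A rough
sketch:

    -- count open backends per database: the same tally proposed
    -- above over PGPROC entries with a matching DB OID
    SELECT datname, count(*) AS open_connections
    FROM pg_stat_activity
    GROUP BY datname;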
Greg Copeland <greg@CopelandConsulting.Net> writes:
> On Fri, 2002-10-25 at 00:52, Marc G. Fournier wrote:
> > Is there any way of the 'master process' *safely/accurately* knowing,
> > through the shared memory link, the # of connections currently open to a
> > particular database? So that a limit could be set on a per db basis, say
> > as an additional arg to pg_hba.conf?
>
> Well, if your application is smart enough to know it needs to
> dynamically add connections, it should also be smart enough to tear them
> down after some idle period. I agree with Tom. I think that sounds
> like application domain.
Well, there are two different things here. I agree that if an app
is going to use persistent connections, it should be the app's
responsibility to manage them. But a per-database, as opposed to
installation-wide, limit on number of connections seems like a
reasonable idea. Note that the limit would result in new connections
being rejected, not old ones being summarily cut.
regards, tom lane
Tom Lane wrote:
> Greg Copeland <greg@CopelandConsulting.Net> writes:
> > Well, if your application is smart enough to know it needs to
> > dynamically add connections, it should also be smart enough to tear them
> > down after some idle period. I agree with Tom. I think that sounds
> > like application domain.
>
> Well, there are two different things here. I agree that if an app
> is going to use persistent connections, it should be the app's
> responsibility to manage them. But a per-database, as opposed to
> installation-wide, limit on number of connections seems like a
> reasonable idea. Note that the limit would result in new connections
> being rejected, not old ones being summarily cut.
But then the app is going to keep trying to connect over and over unless
it knows something about why it can't connect.
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> But then the app is going to keep trying to connect over and over unless
> it knows something about why it can't connect.
So? If it hits the installation-wide limit, you'll have the same
problem; and at that point the (presumably runaway) app would have
sucked up all the connections, denying service to other apps using other
databases. I think Marc's point here is to limit his exposure to
misbehavior of any one client app, in a database server that is serving
multiple clients using multiple databases.
It occurs to me that a per-user connection limit is going to be the next
thing he asks for ;-). We could implement that too, if we wanted.
(Not sure whether PGPROC stores the user id, but it easily could.)
regards, tom lane
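The same sketch extends to a per-user count; the quota value below is
purely illustrative:

    -- hypothetical per-user quota check against the same view
    SELECT usename, count(*) AS open_connections
    FROM pg_stat_activity
    GROUP BY usename
    HAVING count(*) > 10;   -- 10 is an arbitrary example limit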
Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > But then the app is going to keep trying to connect over and over unless
> > it knows something about why it can't connect.
>
> So? If it hits the installation-wide limit, you'll have the same
> problem; and at that point the (presumably runaway) app would have
> sucked up all the connections, denying service to other apps using other
> databases. I think Marc's point here is to limit his exposure to
> misbehavior of any one client app, in a database server that is serving
> multiple clients using multiple databases.
What I am saying is that using the backend to throttle per-db
connections may not work too well because they will just keep retrying.
I realize that the total limit can be hit too, but I assume that limit
is set so it will not be hit (it's a resource tradeoff), while the
per-db limit is there to try to throttle back the persistent
connections.
Basically, total connections is to be set larger than you think you will
ever need, while you expect per-db to be hit, and if something keeps
trying to connect and failing, we may get very bad connection
performance for other backends. This is where doing the limiting on the
persistent connection end would be a better solution.
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> Basically, total connections is to be set larger than you think you will
> ever need, while you expect per-db to be hit, and if something keeps
> trying to connect and failing, we may get very bad connection
> performance for other backends.
Hmm, I see your point. A per-db limit *could* be useful even if it's
set high enough that you don't expect it to be hit ... but most likely
people would try to use it in a way that it wouldn't be very efficient
compared to a client-side solution.
regards, tom lane
Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > Basically, total connections is to be set larger than you think you will
> > ever need, while you expect per-db to be hit, and if something keeps
> > trying to connect and failing, we may get very bad connection
> > performance for other backends.
>
> Hmm, I see your point. A per-db limit *could* be useful even if it's
> set high enough that you don't expect it to be hit ... but most likely
> people would try to use it in a way that it wouldn't be very efficient
> compared to a client-side solution.
The only way to do it would be, after a few hits of the limit, to start
delaying the connection rejections so you don't get hammered. It could
be done, but even then, I am not sure if it would be optimal.
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > Basically, total connections is to be set larger than you think you will
> > ever need, while you expect per-db to be hit, and if something keeps
> > trying to connect and failing, we may get very bad connection
> > performance for other backends.
>
> Hmm, I see your point. A per-db limit *could* be useful even if it's
> set high enough that you don't expect it to be hit ... but most likely
> people would try to use it in a way that it wouldn't be very efficient
> compared to a client-side solution.
What about a shared database server, where you want to have resource
limits for each database/user?

It could be useful in such a case; even if it is not very efficient, it
would be the only way. As a DBA, you may not have control over the
client apps.
Just a thought.
Regards,
Michael
On Fri, 25 Oct 2002, Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > But then the app is going to keep trying to connect over and over unless
> > it knows something about why it can't connect.
>
> So? If it hits the installation-wide limit, you'll have the same
> problem; and at that point the (presumably runaway) app would have
> sucked up all the connections, denying service to other apps using other
> databases. I think Marc's point here is to limit his exposure to
> misbehavior of any one client app, in a database server that is serving
> multiple clients using multiple databases.
>
> It occurs to me that a per-user connection limit is going to be the next
> thing he asks for ;-)
Actually, that sounds like a good idea, but I have been trying to leave
(and move) multiple-client auth to be within the database/application
itself ...
On Fri, 25 Oct 2002, Bruce Momjian wrote:
> Tom Lane wrote:
> > So? If it hits the installation-wide limit, you'll have the same
> > problem; and at that point the (presumably runaway) app would have
> > sucked up all the connections, denying service to other apps using other
> > databases. I think Marc's point here is to limit his exposure to
> > misbehavior of any one client app, in a database server that is serving
> > multiple clients using multiple databases.
>
> What I am saying is that using the backend to throttle per-db
> connections may not work too well because they will just keep retrying.
Okay, but also bear in mind that a lot of the time, when I'm bringing up
stuff like this, I'm coming from the "ISP" perspective ... if I have one
client that is using up all 512 connections on the server, none of my
other clients are getting any connections ...

Yes, the client should have tested his code better, but I want to be able
to put 'limits' in place so that everyone isn't affected by one's mistake
...
> I realize that the total limit can be hit too, but I assume that limit
> is set so it will not be hit (it's a resource tradeoff), while the
> per-db limit is there to try to throttle back the persistent
> connections.
Nope, the per-db limit is there to try and eliminate the impact of one
client/application from essentially creating a DoS for all other
database/clients ...
> Basically, total connections is to be set larger than you think you will
> ever need, while you expect per-db to be hit, and if something keeps
> trying to connect and failing, we may get very bad connection
> performance for other backends. This is where doing the limiting on the
> persistent connection end would be a better solution.
Agreed, but unless you have control over both the client and server sides,
it's not possible ...
Yes, my comments related to using db/user limits to control the number
of persistent connections. From an ISP perspective, I can see value in
user/db limits.
---------------------------------------------------------------------------

Marc G. Fournier wrote:
> Agreed, but unless you have control over both the client and server sides,
> it's not possible ...
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
On Fri, 25 Oct 2002, Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > Basically, total connections is to be set larger than you think you will
> > ever need, while you expect per-db to be hit, and if something keeps
> > trying to connect and failing, we may get very bad connection
> > performance for other backends.
>
> Hmm, I see your point. A per-db limit *could* be useful even if it's
> set high enough that you don't expect it to be hit ... but most likely
> people would try to use it in a way that it wouldn't be very efficient
> compared to a client-side solution.
As mentioned in my response to Bruce ... in an ISP situation, a DoS attack
against the database by a single client can be very easy to accomplish in
our current situation ... all I need to do is set up a perl script that
opens all the connections I can to the database I have access to until all
are used up, and nobody else has access to *their* databases ...
On Fri, 25 Oct 2002, Bruce Momjian wrote:
> Tom Lane wrote:
> > Hmm, I see your point. A per-db limit *could* be useful even if it's
> > set high enough that you don't expect it to be hit ... but most likely
> > people would try to use it in a way that it wouldn't be very efficient
> > compared to a client-side solution.
>
> The only way to do it would be, after a few hits of the limit, to start
> delaying the connection rejections so you don't get hammered. It could
> be done, but even then, I am not sure if it would be optimal.
Note that I don't believe there is an "optimal solution" for this ... but
in an environment where there are several clients connecting to several
different databases, the ability for one client to starve out the others
is actually very real ...
On Fri, Oct 25, 2002 at 11:02:48AM -0400, Tom Lane wrote:
> So? If it hits the installation-wide limit, you'll have the same
> problem; and at that point the (presumably runaway) app would have
> sucked up all the connections, denying service to other apps using other
> databases. I think Marc's point here is to limit his exposure to
> misbehavior of any one client app, in a database server that is serving
> multiple clients using multiple databases.
That would indeed be a useful item. The only way to avoid such
exposure right now is to run another back end.
A
--
----
Andrew Sullivan 204-4141 Yonge Street
Liberty RMS Toronto, Ontario Canada
<andrew@libertyrms.info> M2P 2A8
+1 416 646 3304 x110
Andrew Sullivan wrote:
> On Fri, Oct 25, 2002 at 11:02:48AM -0400, Tom Lane wrote:
> > So? If it hits the installation-wide limit, you'll have the same
> > problem; and at that point the (presumably runaway) app would have
> > sucked up all the connections, denying service to other apps using other
> > databases. I think Marc's point here is to limit his exposure to
> > misbehavior of any one client app, in a database server that is serving
> > multiple clients using multiple databases.
>
> That would indeed be a useful item. The only way to avoid such
> exposure right now is to run another back end.
Added to TODO:
* Allow limits on per-db/user connections
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
On Fri, 2002-10-25 at 10:31, Bruce Momjian wrote:
> Yes, my comments related to using db/user limits to control the number
> of persistent connections. From an ISP perspective, I can see value in
> user/db limits.
Yes, this would be amazingly useful. I work for a web hosting provider,
and it happens all too often that a single customer creates a flawed
script which consumes all the DB connections, obviously denying access
to the rest of our customers.

Being able to set this per-DB connection limit in Postgres itself,
without having to restart the backend, would also make this feature very
nice.
--
Best Regards,
Mike Benoit
NetNation Communication Inc.
Systems Engineer
Tel: 604-684-6892 or 888-983-6600
---------------------------------------
Disclaimer: Opinions expressed here are my own and not
necessarily those of my employer
Bruce Momjian wrote:
> Andrew Sullivan wrote:
> > That would indeed be a useful item. The only way to avoid such
> > exposure right now is to run another back end.
>
> Added to TODO:
>
> * Allow limits on per-db/user connections
Could I suggest that such a feature falls under the category of
resource limits, and that the TODO should read something like:
Implement the equivalent of Oracle PROFILEs.
I think this would be a good project for 7.4. I'm not yet
volunteering, but if I can wrap up my current project, I might
be able to do it, depending upon the 7.4 target date. It would be:
1. A new system table:
pg_profile
2. The attributes of the profiles would be:
profname
session_per_user
cpu_per_session
cpu_per_call
connect_time
idle_time
logical_reads_per_session
logical_reads_per_call
3. A new field would be added to pg_user/pg_shadow:
profileid
4. A 'default' profile, with no resource limits, would be created
when a new database is created. CREATE/ALTER USER would be
modified to allow for the specification of the profile. If no
profile is provided, 'default' is assumed.
5. A new CREATE PROFILE/ALTER PROFILE/DROP PROFILE command set
would be implemented to add/update/remove the tuples in
pg_profile, with corresponding modifications of pg_dump for
dump/reload and of psql for an appropriate \ command.
Example:

    CREATE PROFILE clerk
        IDLE_TIME 30;

    ALTER USER john PROFILE clerk;
    ALTER USER bob PROFILE clerk;

or, for an ISP maybe:

    ALTER PROFILE default
        IDLE_TIME 30;
It seems like a nice project, particularly since it wouldn't
affect anyone that doesn't want to use it. And whenever a new
resource limitation issue arises, such as PL/SQL recursion
depth, a new attribute would be added to pg_profile to handle
the limitation...
Mike Mascari
mascarm@mascari.com
I need others wanting this before I can add something of this
sophistication to TODO.
---------------------------------------------------------------------------

Mike Mascari wrote:
> Could I suggest that such a feature falls under the category of
> resource limits, and that the TODO should read something like:
>
> Implement the equivalent of Oracle PROFILEs.
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
Mike Mascari <mascarm@mascari.com> writes:
> [ extensive proposal for PROFILEs ]
> It seems like a nice project, particularly since it wouldn't
> affect anyone that doesn't want to use it.
... except in the added overhead to do the resource accounting and check
to see if there is a restriction ...
> And whenever a new
> resource limitation issue arises, such as PL/SQL recursion
> depth, a new attribute would be added to pg_profile to handle
> the limitation...
I prefer GUC variables to table entries for setting stuff like recursion
limits; they're much lighter-weight to create and access, and you don't
need an initdb to add or remove a parameter.
regards, tom lane
On Fri, 2002-10-25 at 16:17, Tom Lane wrote:
> Mike Mascari <mascarm@mascari.com> writes:
> > [ extensive proposal for PROFILEs ]
> > It seems like a nice project, particularly since it wouldn't
> > affect anyone that doesn't want to use it.
>
> ... except in the added overhead to do the resource accounting and check
> to see if there is a restriction ...
Perhaps you could make a GUC variable "use_resource_profiles" that turns
the whole thing on/off.
> > And whenever a new
> > resource limitation issue arises, such as PL/SQL recursion
> > depth, a new attribute would be added to pg_profile to handle
> > the limitation...
>
> I prefer GUC variables to table entries for setting stuff like recursion
> limits; they're much lighter-weight to create and access, and you don't
> need an initdb to add or remove a parameter.
I don't see an adequate way to store the individual settings as GUC
variables per user...
Robert Treat
Robert Treat <xzilla@users.sourceforge.net> writes:
> On Fri, 2002-10-25 at 16:17, Tom Lane wrote:
> > I prefer GUC variables to table entries for setting stuff like recursion
> > limits; they're much lighter-weight to create and access, and you don't
> > need an initdb to add or remove a parameter.
>
> I don't see an adequate way to store the individual settings as GUC
> variables per user...
Have you looked at the per-database and per-user GUC facilities in 7.3?
regards, tom lane
Tom Lane wrote:
> Robert Treat <xzilla@users.sourceforge.net> writes:
> > I don't see an adequate way to store the individual settings as GUC
> > variables per user...
>
> Have you looked at the per-database and per-user GUC facilities in 7.3?
Nice idea. You can now have per-user/db settings that are SET when the
connection is made. You can set any GUC variable that way. We just
need a variable that can look at all sessions and determine if that user
has exceeded their connection quota.
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
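For readers unfamiliar with them, the 7.3 facilities being referred to
look roughly like this; the user, database, and values are only examples:

    -- per-user setting, applied automatically at connection time
    ALTER USER john SET sort_mem = 2048;

    -- per-database setting
    ALTER DATABASE webdb SET sort_mem = 8192;

    -- revert to the installation default
    ALTER USER john RESET sort_mem;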
On Friday 25 October 2002 07:03 pm, Bruce Momjian wrote:
> Tom Lane wrote:
> > Robert Treat <xzilla@users.sourceforge.net> writes:
> > > I don't see an adequate way to store the individual settings as GUC
> > > variables per user...
> >
> > Have you looked at the per-database and per-user GUC facilities in 7.3?
Maybe I haven't looked at them enough ;-)
> Nice idea. You can now have per-user/db settings that are SET when the
> connection is made. You can set any GUC variable that way. We just
> need a variable that can look at all sessions and determine if that user
> has exceeded their connection quota.
I understand how you are able to set those variables per db, but I don't see
how you can get those settings to persist between database shutdowns.
Perhaps someone can point me to the relevant documentation?
Robert Treat
Robert Treat wrote:
> > Nice idea. You can now have per-user/db settings that are SET when the
> > connection is made. You can set any GUC variable that way. We just
> > need a variable that can look at all sessions and determine if that user
> > has exceeded their connection quota.
>
> I understand how you are able to set those variables per db, but I don't see
> how you can get those settings to persist between database shutdowns.
> Perhaps someone can point me to the relevant documentation?
The per db/user stuff is stored in the pg_database/pg_shadow tables per
row, so they exist in permanent storage.
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
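A quick way to see this, as a sketch (the array columns are named
datconfig and useconfig in 7.3, if memory serves):

    SELECT datname, datconfig FROM pg_database;
    SELECT usename, useconfig FROM pg_shadow;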
On Sat, Oct 26, 2002 at 01:04:55 -0400,
Bruce Momjian <pgman@candle.pha.pa.us> wrote:
> The per db/user stuff is stored in the pg_database/pg_shadow tables per
> row, so they exist in permanent storage.
I have a question about this. This stuff is per user OR per db right?
When I see per db/user I get the impression that users can have different
settings depending on which db they connect to. But looking at alter
database and alter user it looks like settings are per database or
per user, but there isn't a way to (in general) set something that
applies to a particular user when connecting to a particular database.
If there is a way to do that, I would be interested in a hint where to
look in the documentation.
On Saturday 26 October 2002 01:04 am, Bruce Momjian wrote:
> Robert Treat wrote:
> > I understand how you are able to set those variables per db, but I don't
> > see how you can get those settings to persist between database shutdowns.
> > Perhaps someone can point me to the relevant documentation?
>
> The per db/user stuff is stored in the pg_database/pg_shadow tables per
> row, so they exist in permanent storage.
Ah ... that was the missing link. I don't think the docs mention anything
about the information being stored in pg_database or pg_shadow; they
always refer to the "session default", which made me wonder what happened
"between sessions". Perhaps someone (me?) should work up a patch to help
clarify this....
Robert Treat
Bruno Wolff III wrote:
> On Sat, Oct 26, 2002 at 01:04:55 -0400,
> Bruce Momjian <pgman@candle.pha.pa.us> wrote:
> > The per db/user stuff is stored in the pg_database/pg_shadow tables per
> > row, so they exist in permanent storage.
>
> I have a question about this. This stuff is per user OR per db right?
> When I see per db/user I get the impression that users can have different
> settings depending on which db they connect to. But looking at alter
> database and alter user it looks like settings are per database or
> per user, but there isn't a way to (in general) set something that
> applies to a particular user when connecting to a particular database.
> If there is a way to do that, I would be interested in a hint where to
> look in the documentation.
You are right, there isn't a per-db/user combination setting. I think
one is done before the other, so you could try to set things that way,
maybe in a plpgsql procedure. I think you could do SELECT
db_user_set(); and have that function do the sets for you.
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
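A minimal sketch of that idea; the function name, user, database, and
setting are all hypothetical, and it assumes plpgsql is installed:

    CREATE FUNCTION db_user_set() RETURNS integer AS '
    BEGIN
        -- example only: tighten sort memory for user john in db webdb
        IF current_user = ''john'' AND current_database() = ''webdb'' THEN
            SET sort_mem = 1024;
        END IF;
        RETURN 1;
    END;
    ' LANGUAGE 'plpgsql';

    -- each client would run this right after connecting:
    SELECT db_user_set();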
On Fri, 25 Oct 2002, Bruce Momjian wrote:
> I need others wanting this before I can add something of this
> sophistication to TODO.
Sounds cool to me, and would satisfy what I'm looking for, since it sounds
like the same user could have different limits depending on the database
they are/were connecting to ...
Tom's idea of doing this as part of the already added per-user/per-db
settings seems like a good direction to take.
---------------------------------------------------------------------------

Marc G. Fournier wrote:
> Sounds cool to me, and would satisfy what I'm looking for, since it sounds
> like the same user could have different limits depending on the database
> they are/were connecting to ...
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
On Sat, Oct 26, 2002 at 11:12:01PM -0400, Bruce Momjian wrote:
> Tom's idea of doing this as part of the already added per-user/per-db
> settings seems like a good direction to take.
Yes, but how? The current implementation has a setting column in
pg_shadow, and another in pg_database. Is the idea to create a separate
pg_settings table or something like that?
--
Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)
"In Europe they call me Niklaus Wirth; in the US they call me Nickel's worth.
That's because in Europe they call me by name, and in the US by value!"
Alvaro Herrera wrote:
> On Sat, Oct 26, 2002 at 11:12:01PM -0400, Bruce Momjian wrote:
> > Tom's idea of doing this as part of the already added per-user/per-db
> > settings seems like a good direction to take.
>
> Yes, but how? The current implementation has a setting column in
> pg_shadow, and another in pg_database. Is the idea to create a separate
> pg_settings table or something like that?
Are you asking how to do per-db-user combination settings, like User A
can have only two connections to database B? Seems we should get
per-user and per-db settings working first and see if anyone needs
combination settings.
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
On Sat, Oct 26, 2002 at 11:58:35PM -0400, Bruce Momjian wrote:
> Alvaro Herrera wrote:
> > Yes, but how? The current implementation has a setting column in
> > pg_shadow, and another in pg_database. Is the idea to create a separate
> > pg_settings table or something like that?
>
> Are you asking how to do per-db-user combination settings, like User A
> can have only two connections to database B? Seems we should get
> per-user and per-db settings working first and see if anyone needs
> combination settings.
Well, now that I think about it... the "user local to database" (those
with the @dbname appended and all that) idea may very well cover the
ground for this. But I'm not actually asking, because I don't need it.
--
Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)
"Aprender sin pensar es inutil; pensar sin aprender, peligroso" (Confucio)
On Fri, Oct 25, 2002 at 03:31:22PM -0400, Mike Mascari wrote:
> Bruce Momjian wrote:
> > Added to TODO:
> >
> > * Allow limits on per-db/user connections
>
> Could I suggest that such a feature falls under the category of
> resource limits, and that the TODO should read something like:
>
> Implement the equivalent of Oracle PROFILEs.
Yes! Please ... it's better than all the discussions about some ugly
variables. PROFILEs are more extensible, they are user specific, and in
a system with ROLEs it is really cool and simple to set a user's system
options.

I have talked about it several times, but it is still ignored :-) I don't
want to maintain my databases by SET commands.
> profname
> session_per_user
> cpu_per_session
> cpu_per_call
> connect_time
> idle_time
> logical_reads_per_session
> logical_reads_per_call
... and a lot of other things in the future.
Karel
--
Karel Zak <zakkr@zf.jcu.cz>
http://home.zf.jcu.cz/~zakkr/
C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz
Karel Zak wrote:
> On Fri, Oct 25, 2002 at 03:31:22PM -0400, Mike Mascari wrote:
> > Could I suggest that such a feature falls under the category of
> > resource limits, and that the TODO should read something like:
> >
> > Implement the equivalent of Oracle PROFILEs.
>
> Yes! Please ... it's better than all the discussions about some ugly
> variables. PROFILEs are more extensible, they are user specific, and in
> a system with ROLEs it is really cool and simple to set a user's system
> options.
>
> I have talked about it several times, but it is still ignored :-) I don't
> want to maintain my databases by SET commands.
It seems we are in the minority. :-(
Mike Mascari
mascarm@mascari.com