CONNECTION LIMIT and Parallel Query don't play well together

Started by David Rowley about 9 years ago · 17 messages
#1 David Rowley
david.rowley@2ndquadrant.com

It has come to my attention that when a user has a CONNECTION LIMIT
set and they make use of parallel query, their queries can fail
due to the connection limit being exceeded.

Simple test case:

postgres=# CREATE USER user1 LOGIN CONNECTION LIMIT 2;
CREATE ROLE
postgres=# \c postgres user1
You are now connected to database "postgres" as user "user1".
postgres=> CREATE TABLE t1 AS (SELECT i FROM GENERATE_SERIES(1,6000000) s(i));
SELECT 6000000
postgres=> SET max_parallel_workers_per_gather = 2;
SET
postgres=> SELECT COUNT(*) FROM t1;
ERROR: too many connections for role "user1"
CONTEXT: parallel worker

Now, as I understand it, during the design of parallel query, it was
designed in such a way that nodeGather could perform all of the work
in the main process in the event that no workers were available, and
that the only user-visible evidence of this would be that the query
would be slower than it otherwise would be.
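
For reference, the fallback path in ExecGather() looks roughly like
this -- a simplified sketch from memory rather than the exact source,
though the names are from nodeGather.c:

/* Try to launch the planned number of workers. */
LaunchParallelWorkers(pcxt);
node->nworkers_launched = pcxt->nworkers_launched;

if (pcxt->nworkers_launched > 0)
{
	/* set up a shared tuple queue reader for each launched worker */
}
else
{
	/* No workers?  Then never mind. */
	ExecShutdownGatherWorkers(node);
}

/* with no readers at all, the leader executes the plan itself */
node->need_to_scan_locally = (node->reader == NULL);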

After a little bit of looking around I see that CountUserBackends()
does not ignore the parallel workers, and counts these as
"connections". It's probably debatable whether these are
connections or not, but I do see that max_connections is separate from
max_worker_processes, per:

/* the extra unit accounts for the autovacuum launcher */
MaxBackends = MaxConnections + autovacuum_max_workers + 1 +
max_worker_processes;

so the two don't stomp on each other's feet, which makes me think that
a parallel worker should not consume a user connection, since it's not
eating into max_connections. Also, a convenient fix for this would be
to have CountUserBackends() ignore parallel workers completely.
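
For reference, here is roughly what that could look like -- a
simplified sketch of the CountUserBackends() loop in procarray.c. The
isBackgroundWorker flag is hypothetical at this point; PGPROC would
need to grow it (and skipping all background workers rather than just
parallel ones is one possible variant):

int
CountUserBackends(Oid roleid)
{
	ProcArrayStruct *arrayP = procArray;
	int			count = 0;
	int			index;

	LWLockAcquire(ProcArrayLock, LW_SHARED);

	for (index = 0; index < arrayP->numProcs; index++)
	{
		int			pgprocno = arrayP->pgprocnos[index];
		volatile PGPROC *proc = &allProcs[pgprocno];

		if (proc->pid == 0)
			continue;			/* do not count prepared xacts */
		if (proc->isBackgroundWorker)
			continue;			/* proposed: do not count workers */
		if (proc->roleId == roleid)
			count++;
	}

	LWLockRelease(ProcArrayLock);

	return count;
}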

The alternative I've thought of would be to make some additional
checks in RegisterDynamicBackgroundWorker() to make sure we don't get
more workers than the user would be allowed, but that would add more
code while the lock is held and increase contention, and we'd also
somehow need to find a way to reserve the connections until the
parallel workers started, so they were not taken by another concurrent
connection in the meantime. This all sounds pretty horrid.

Perhaps we can provide greater control of parallel workers per user in
a future release, for admins who are concerned about users hogging
all of the parallel workers. Yet that's likely premature, as we don't
have a per-query knob for that yet.

Thoughts?

--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#2 Amit Kapila
amit.kapila16@gmail.com
In reply to: David Rowley (#1)
Re: CONNECTION LIMIT and Parallel Query don't play well together

On Wed, Jan 11, 2017 at 2:44 AM, David Rowley
<david.rowley@2ndquadrant.com> wrote:

It has come to my attention that when a user has a CONNECTION LIMIT
set and they make use of parallel query, their queries can fail
due to the connection limit being exceeded.

Simple test case:

postgres=# CREATE USER user1 LOGIN CONNECTION LIMIT 2;
CREATE ROLE
postgres=# \c postgres user1
You are now connected to database "postgres" as user "user1".
postgres=> CREATE TABLE t1 AS (SELECT i FROM GENERATE_SERIES(1,6000000) s(i));
SELECT 6000000
postgres=> SET max_parallel_workers_per_gather = 2;
SET
postgres=> SELECT COUNT(*) FROM t1;
ERROR: too many connections for role "user1"
CONTEXT: parallel worker

Now, as I understand it, during the design of parallel query, it was
designed in such a way that nodeGather could perform all of the work
in the main process in the event that no workers were available, and
that the only user-visible evidence of this would be that the query
would be slower than it otherwise would be.

This has been reported previously [1] and I have explained the reason
why such a behaviour is possible and why this can't be handled in the
Gather node.

After a little bit of looking around I see that CountUserBackends()
does not ignore the parallel workers, and counts these as
"connections". It's probably debatable whether these are
connections or not,

I think this is not only for parallel workers; rather, any background
worker that uses a database connection
(BGWORKER_BACKEND_DATABASE_CONNECTION) will be counted in a similar
way. I am not sure if it is worth inventing something to treat such
background worker connections differently from backend connections.
However, I think we should document it, either in the parallel query
or background worker documentation, or in the CREATE USER ...
CONNECTION LIMIT section.
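
For illustration, here is a minimal sketch of such a worker. The
extension, function names, and target database are all hypothetical,
and error handling is omitted:

#include "postgres.h"
#include "postmaster/bgworker.h"
#include "storage/ipc.h"

PGDLLEXPORT void my_worker_main(Datum main_arg);

/* called from _PG_init() of a hypothetical extension */
void
register_my_worker(void)
{
	BackgroundWorker worker;

	memset(&worker, 0, sizeof(worker));
	worker.bgw_flags = BGWORKER_SHMEM_ACCESS |
		BGWORKER_BACKEND_DATABASE_CONNECTION;
	worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
	worker.bgw_restart_time = BGW_NEVER_RESTART;
	snprintf(worker.bgw_name, BGW_MAXLEN, "my worker");
	snprintf(worker.bgw_library_name, BGW_MAXLEN, "my_extension");
	snprintf(worker.bgw_function_name, BGW_MAXLEN, "my_worker_main");
	worker.bgw_main_arg = (Datum) 0;
	worker.bgw_notify_pid = 0;

	RegisterBackgroundWorker(&worker);
}

void
my_worker_main(Datum main_arg)
{
	BackgroundWorkerUnblockSignals();

	/* takes a database connection, so it is counted like a backend */
	BackgroundWorkerInitializeConnection("postgres", NULL);

	/* ... do work ... */

	proc_exit(0);
}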

[1]: /messages/by-id/20161222111345.25620.8603@wrigleys.postgresql.org

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

#3 Albe Laurenz
laurenz.albe@wien.gv.at
In reply to: Amit Kapila (#2)
Re: CONNECTION LIMIT and Parallel Query don't play well together

Amit Kapila wrote:

On Wed, Jan 11, 2017 at 2:44 AM, David Rowley
<david.rowley@2ndquadrant.com> wrote:

It has come to my attention that when a user has a CONNECTION LIMIT
set and they make use of parallel query, their queries can fail
due to the connection limit being exceeded.

Simple test case:

postgres=# CREATE USER user1 LOGIN CONNECTION LIMIT 2;
[...]
postgres=> SELECT COUNT(*) FROM t1;
ERROR: too many connections for role "user1"
CONTEXT: parallel worker

Now, as I understand it, during the design of parallel query, it was
designed in such a way that nodeGather could perform all of the work
in the main process in the event that no workers were available, and
that the only user-visible evidence of this would be that the query
would be slower than it otherwise would be.

After a little bit of looking around I see that CountUserBackends()
does not ignore the parallel workers, and counts these as
"connections". It's probably debatable whether these are
connections or not,

I think this is not only for parallel workers; rather, any background
worker that uses a database connection
(BGWORKER_BACKEND_DATABASE_CONNECTION) will be counted in a similar
way. I am not sure if it is worth inventing something to treat such
background worker connections differently from backend connections.
However, I think we should document it, either in the parallel query
or background worker documentation, or in the CREATE USER ...
CONNECTION LIMIT section.

I think that this should be fixed rather than documented.
Users will not take it well if their queries error out
in this fashion.

Background processes should not be counted as active connections.
Their limit should be determined by max_worker_processes,
and neither max_connections nor the connection limit per user
or database should take them into account.

Yours,
Laurenz Albe

#4 Robert Haas
robertmhaas@gmail.com
In reply to: David Rowley (#1)
Re: CONNECTION LIMIT and Parallel Query don't play well together

On Tue, Jan 10, 2017 at 4:14 PM, David Rowley
<david.rowley@2ndquadrant.com> wrote:

It has come to my attention that when a user has a CONNECTION LIMIT
set and they make use of parallel query, their queries can fail
due to the connection limit being exceeded.

That's bad.

Now, as I understand it, during the design of parallel query, it was
designed in such a way that nodeGather could perform all of the work
in the main process in the event that no workers were available, and
that the only user-visible evidence of this would be that the query
would be slower than it otherwise would be.

That was the intent.

After a little bit of looking around I see that CountUserBackends()
does not ignore the parallel workers, and counts these as
"connections". It's probably debatable whether these are
connections or not, ...

Yeah. I think that I looked at the connection limit stuff in the 9.4
time frame and said, well, we shouldn't let people use background
workers as a way of evading the connection limit, so let's continue to
count them against that limit. Then, later on, I did the work to try
to make it transparent when sufficient parallel workers cannot be
obtained, but forgot about this case or somehow convinced myself that
it didn't matter.

One option is certainly to decide categorically that background
workers are not connections, and therefore CountUserBackends() should
ignore them and InitializeSessionUserId() shouldn't call it when the
session being started is a background worker. That means that
background workers don't count against the user connection limit, full
stop. Another option, probably slightly less easy to implement, is to
decide that background workers in general count against the limit but
parallel workers do not. The third option is to count both background
workers and parallel workers against the limit but somehow recover
gracefully when this error trips. But I have no idea how we could
implement that third option in a reasonable way.
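
To make the first option concrete, the check in
InitializeSessionUserId() (miscinit.c) could simply be skipped for
background workers. A sketch, abbreviated and from memory rather than
the exact source:

/* don't apply the per-role connection limit to background workers */
if (!IsBackgroundWorker &&
	rform->rolconnlimit >= 0 &&
	!AuthenticatedUserIsSuperuser &&
	CountUserBackends(roleid) > rform->rolconnlimit)
	ereport(FATAL,
			(errcode(ERRCODE_TOO_MANY_CONNECTIONS),
			 errmsg("too many connections for role \"%s\"",
					rolename)));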

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#5 David Rowley
david.rowley@2ndquadrant.com
In reply to: Robert Haas (#4)
Re: CONNECTION LIMIT and Parallel Query don't play well together

On 12 January 2017 at 09:36, Robert Haas <robertmhaas@gmail.com> wrote:

One option is certainly to decide categorically that background
workers are not connections, and therefore CountUserBackends() should
ignore them and InitializeSessionUserId() shouldn't call it when the
session being started is a background worker. That means that
background workers don't count against the user connection limit, full
stop. Another option, probably slightly less easy to implement, is to
decide that background workers in general count against the limit but
parallel workers do not.

I think the root question we need to ask ourselves here is: what is
"CONNECTION LIMIT" for? I've come around to assuming it's meant to
protect the server and give everyone a fairer chance of getting a
connection to the database. Now, since background workers don't
consume anything from max_connections, I don't really feel that a
background worker should count towards "CONNECTION LIMIT". I'd assume
any CONNECTION LIMITs that are set for a user would be calculated
based on what max_connections is set to. If we want to limit
background workers in the same manner, then perhaps we'd want to
invent something like "WORKER LIMIT N" in CREATE USER.

The third option is to count both background
workers and parallel workers against the limit but somehow recover
gracefully when this error trips. But I have no idea how we could
implement that third option in a reasonable way.

I agree with your view on the third option. I looked at this too, and
it seems pretty horrible to try to do anything in that direction. Even
if we suppressed the ERROR message and had the worker exit, we'd still
briefly consume a background worker slot, which would reduce the
chances of some entitled user connection obtaining one; in fact, this
is the case as it stands today, even if that moment is brief.

--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#6 David Rowley
david.rowley@2ndquadrant.com
In reply to: David Rowley (#5)
1 attachment(s)
Re: CONNECTION LIMIT and Parallel Query don't play well together

On 12 January 2017 at 09:36, Robert Haas <robertmhaas@gmail.com> wrote:

One option is certainly to decide categorically that background
workers are not connections, and therefore CountUserBackends() should
ignore them and InitializeSessionUserId() shouldn't call it when the
session being started is a background worker. That means that
background workers don't count against the user connection limit, full
stop.

I've attached a patch which is intended to assist discussion on this topic.

The patch adds some notes to the docs to mention that background
workers and prepared xacts are not counted in CONNECTION LIMIT; it
then goes on to make CountUserBackends() ignore bgworkers. It was
already ignoring prepared xacts. There's a bit of plumbing work to
make the proc array aware of the background worker status. Hopefully
this is suitable; I'm not all that close to that particular area of
the code.

--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

connection_limit_ignore_bgworkers_v1.patch (application/octet-stream)
diff --git a/doc/src/sgml/ref/create_role.sgml b/doc/src/sgml/ref/create_role.sgml
index a3b8ed9..1d5743c 100644
--- a/doc/src/sgml/ref/create_role.sgml
+++ b/doc/src/sgml/ref/create_role.sgml
@@ -198,7 +198,10 @@ CREATE ROLE <replaceable class="PARAMETER">name</replaceable> [ [ WITH ] <replac
       <listitem>
        <para>
         If role can log in, this specifies how many concurrent connections
-        the role can make.  -1 (the default) means no limit.
+        the role can make.  -1 (the default) means no limit. Note that only
+        normal connections are counted towards this limit. Neither prepared
+        transactions nor background worker connections are counted towards
+        this limit.
        </para>
       </listitem>
      </varlistentry>
diff --git a/src/backend/access/transam/twophase.c b/src/backend/access/transam/twophase.c
index 5b72c1d..6fde2bd 100644
--- a/src/backend/access/transam/twophase.c
+++ b/src/backend/access/transam/twophase.c
@@ -420,6 +420,7 @@ MarkAsPreparing(TransactionId xid, const char *gid,
 	proc->backendId = InvalidBackendId;
 	proc->databaseId = databaseid;
 	proc->roleId = owner;
+	proc->isBackgroundWorker = false;
 	proc->lwWaiting = false;
 	proc->lwWaitMode = 0;
 	proc->waitLock = NULL;
diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c
index 3f47b98..f2ee1a9 100644
--- a/src/backend/storage/ipc/procarray.c
+++ b/src/backend/storage/ipc/procarray.c
@@ -2803,6 +2803,8 @@ CountUserBackends(Oid roleid)
 
 		if (proc->pid == 0)
 			continue;			/* do not count prepared xacts */
+		if (proc->isBackgroundWorker)
+			continue;			/* do not count background workers */
 		if (proc->roleId == roleid)
 			count++;
 	}
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 1b836f7..8f467be 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -370,6 +370,7 @@ InitProcess(void)
 	MyProc->backendId = InvalidBackendId;
 	MyProc->databaseId = InvalidOid;
 	MyProc->roleId = InvalidOid;
+	MyProc->isBackgroundWorker = IsBackgroundWorker;
 	MyPgXact->delayChkpt = false;
 	MyPgXact->vacuumFlags = 0;
 	/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */
@@ -542,6 +543,7 @@ InitAuxiliaryProcess(void)
 	MyProc->backendId = InvalidBackendId;
 	MyProc->databaseId = InvalidOid;
 	MyProc->roleId = InvalidOid;
+	MyProc->isBackgroundWorker = IsBackgroundWorker;
 	MyPgXact->delayChkpt = false;
 	MyPgXact->vacuumFlags = 0;
 	MyProc->lwWaiting = false;
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index 398fa8a..5f38fa6 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -103,6 +103,8 @@ struct PGPROC
 	Oid			databaseId;		/* OID of database this backend is using */
 	Oid			roleId;			/* OID of role using this backend */
 
+	bool		isBackgroundWorker; /* true if background worker. */
+
 	/*
 	 * While in hot standby mode, shows that a conflict signal has been sent
 	 * for the current transaction. Set/cleared while holding ProcArrayLock,
#7 David Rowley
david.rowley@2ndquadrant.com
In reply to: David Rowley (#6)
Re: CONNECTION LIMIT and Parallel Query don't play well together

On 12 January 2017 at 15:24, David Rowley <david.rowley@2ndquadrant.com> wrote:

I've attached a patch which is intended to assist discussion on this topic.

The patch adds some notes to the docs to mention that background
workers and prepared xacts are not counted in CONNECTION LIMIT; it
then goes on to make CountUserBackends() ignore bgworkers. It was
already ignoring prepared xacts. There's a bit of plumbing work to
make the proc array aware of the background worker status. Hopefully
this is suitable; I'm not all that close to that particular area of
the code.

Hi Robert,

Wondering if you've had any time to glance over this?

If you think the patch needs more work, or goes about things the wrong
way, let me know, and I'll make the changes.

Thanks

--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#8 Robert Haas
robertmhaas@gmail.com
In reply to: David Rowley (#7)
Re: CONNECTION LIMIT and Parallel Query don't play well together

On Thu, Jan 26, 2017 at 7:59 AM, David Rowley
<david.rowley@2ndquadrant.com> wrote:

On 12 January 2017 at 15:24, David Rowley <david.rowley@2ndquadrant.com> wrote:

I've attached a patch which is intended to assist discussion on this topic.

The patch adds some notes to the docs to mention that background
workers and prepared xacts are not counted in CONNECTION LIMIT; it
then goes on to make CountUserBackends() ignore bgworkers. It was
already ignoring prepared xacts. There's a bit of plumbing work to
make the proc array aware of the background worker status. Hopefully
this is suitable; I'm not all that close to that particular area of
the code.

Wondering if you've had any time to glance over this?

If you think the patch needs more work, or goes about things the wrong
way, let me know, and I'll make the changes.

Sorry, this had slipped through the cracks -- I'm having a very hard
time keeping up with the flow of patches and emails. But it looks
good to me, except that it seems like CountDBBackends() needs the same
fix (and probably a corresponding documentation update).

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#9 David Rowley
david.rowley@2ndquadrant.com
In reply to: Robert Haas (#8)
1 attachment(s)
Re: CONNECTION LIMIT and Parallel Query don't play well together

On 27 January 2017 at 03:53, Robert Haas <robertmhaas@gmail.com> wrote:

Sorry, this had slipped through the cracks -- I'm having a very hard
time keeping up with the flow of patches and emails. But it looks
good to me, except that it seems like CountDBBackends() needs the same
fix (and probably a corresponding documentation update).

Thanks for looking at this.

Looks like there are a few other usages of CountDBBackends() which
require background workers to be counted, so I ended up creating
CountDBConnections(), as I didn't really think adding a bool flag to
CountDBBackends() was very nice.

I thought about renaming CountUserBackends() to
CountUserConnections(), but I've not done so. Although, perhaps it's
better to break any third-party code that uses it, so that authors can
review which behaviour they need, rather than have their extension
silently break?

David

--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

connection_limit_ignore_bgworkers_v2.patch (application/octet-stream)
diff --git a/doc/src/sgml/ref/create_database.sgml b/doc/src/sgml/ref/create_database.sgml
index cf33746..cf0d53b 100644
--- a/doc/src/sgml/ref/create_database.sgml
+++ b/doc/src/sgml/ref/create_database.sgml
@@ -258,7 +258,8 @@ CREATE DATABASE <replaceable class="PARAMETER">name</replaceable>
    The <literal>CONNECTION LIMIT</> option is only enforced approximately;
    if two new sessions start at about the same time when just one
    connection <quote>slot</> remains for the database, it is possible that
-   both will fail.  Also, the limit is not enforced against superusers.
+   both will fail.  Also, the limit is not enforced against superusers or
+   background worker processes.
   </para>
  </refsect1>
 
diff --git a/doc/src/sgml/ref/create_role.sgml b/doc/src/sgml/ref/create_role.sgml
index a3b8ed9..2ae576e 100644
--- a/doc/src/sgml/ref/create_role.sgml
+++ b/doc/src/sgml/ref/create_role.sgml
@@ -198,7 +198,10 @@ CREATE ROLE <replaceable class="PARAMETER">name</replaceable> [ [ WITH ] <replac
       <listitem>
        <para>
         If role can log in, this specifies how many concurrent connections
-        the role can make.  -1 (the default) means no limit.
+        the role can make.  -1 (the default) means no limit. Note that only
+        normal connections are counted towards this limit. Neither prepared
+        transactions nor background worker connections are counted towards
+        this limit.
        </para>
       </listitem>
      </varlistentry>
diff --git a/src/backend/access/transam/twophase.c b/src/backend/access/transam/twophase.c
index 5b72c1d..6fde2bd 100644
--- a/src/backend/access/transam/twophase.c
+++ b/src/backend/access/transam/twophase.c
@@ -420,6 +420,7 @@ MarkAsPreparing(TransactionId xid, const char *gid,
 	proc->backendId = InvalidBackendId;
 	proc->databaseId = databaseid;
 	proc->roleId = owner;
+	proc->isBackgroundWorker = false;
 	proc->lwWaiting = false;
 	proc->lwWaitMode = 0;
 	proc->waitLock = NULL;
diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c
index 3f47b98..cd14667 100644
--- a/src/backend/storage/ipc/procarray.c
+++ b/src/backend/storage/ipc/procarray.c
@@ -2745,6 +2745,38 @@ CountDBBackends(Oid databaseid)
 }
 
 /*
+ * CountDBConnections --- counts database backends ignoring any background
+ *		worker processes
+ */
+int
+CountDBConnections(Oid databaseid)
+{
+	ProcArrayStruct *arrayP = procArray;
+	int			count = 0;
+	int			index;
+
+	LWLockAcquire(ProcArrayLock, LW_SHARED);
+
+	for (index = 0; index < arrayP->numProcs; index++)
+	{
+		int			pgprocno = arrayP->pgprocnos[index];
+		volatile PGPROC *proc = &allProcs[pgprocno];
+
+		if (proc->pid == 0)
+			continue;			/* do not count prepared xacts */
+		if (proc->isBackgroundWorker)
+			continue;			/* do not count background workers */
+		if (!OidIsValid(databaseid) ||
+			proc->databaseId == databaseid)
+			count++;
+	}
+
+	LWLockRelease(ProcArrayLock);
+
+	return count;
+}
+
+/*
  * CancelDBBackends --- cancel backends that are using specified database
  */
 void
@@ -2803,6 +2835,8 @@ CountUserBackends(Oid roleid)
 
 		if (proc->pid == 0)
 			continue;			/* do not count prepared xacts */
+		if (proc->isBackgroundWorker)
+			continue;			/* do not count background workers */
 		if (proc->roleId == roleid)
 			count++;
 	}
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 1b836f7..8f467be 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -370,6 +370,7 @@ InitProcess(void)
 	MyProc->backendId = InvalidBackendId;
 	MyProc->databaseId = InvalidOid;
 	MyProc->roleId = InvalidOid;
+	MyProc->isBackgroundWorker = IsBackgroundWorker;
 	MyPgXact->delayChkpt = false;
 	MyPgXact->vacuumFlags = 0;
 	/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */
@@ -542,6 +543,7 @@ InitAuxiliaryProcess(void)
 	MyProc->backendId = InvalidBackendId;
 	MyProc->databaseId = InvalidOid;
 	MyProc->roleId = InvalidOid;
+	MyProc->isBackgroundWorker = IsBackgroundWorker;
 	MyPgXact->delayChkpt = false;
 	MyPgXact->vacuumFlags = 0;
 	MyProc->lwWaiting = false;
diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c
index 21fdc6d..4d0a2a7 100644
--- a/src/backend/utils/init/postinit.c
+++ b/src/backend/utils/init/postinit.c
@@ -350,7 +350,7 @@ CheckMyDatabase(const char *name, bool am_superuser)
 		 */
 		if (dbform->datconnlimit >= 0 &&
 			!am_superuser &&
-			CountDBBackends(MyDatabaseId) > dbform->datconnlimit)
+			CountDBConnections(MyDatabaseId) > dbform->datconnlimit)
 			ereport(FATAL,
 					(errcode(ERRCODE_TOO_MANY_CONNECTIONS),
 					 errmsg("too many connections for database \"%s\"",
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index 398fa8a..5f38fa6 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -103,6 +103,8 @@ struct PGPROC
 	Oid			databaseId;		/* OID of database this backend is using */
 	Oid			roleId;			/* OID of role using this backend */
 
+	bool		isBackgroundWorker; /* true if background worker. */
+
 	/*
 	 * While in hot standby mode, shows that a conflict signal has been sent
 	 * for the current transaction. Set/cleared while holding ProcArrayLock,
diff --git a/src/include/storage/procarray.h b/src/include/storage/procarray.h
index 0d5027f..9d5a13e 100644
--- a/src/include/storage/procarray.h
+++ b/src/include/storage/procarray.h
@@ -73,6 +73,7 @@ extern pid_t CancelVirtualTransaction(VirtualTransactionId vxid, ProcSignalReaso
 
 extern bool MinimumActiveBackends(int min);
 extern int	CountDBBackends(Oid databaseid);
+extern int	CountDBConnections(Oid databaseid);
 extern void CancelDBBackends(Oid databaseid, ProcSignalReason sigmode, bool conflictPending);
 extern int	CountUserBackends(Oid roleid);
 extern bool CountOtherDBBackends(Oid databaseId,
#10 Andrew Dunstan
andrew.dunstan@2ndquadrant.com
In reply to: David Rowley (#9)
Re: CONNECTION LIMIT and Parallel Query don't play well together

On 01/29/2017 04:07 PM, David Rowley wrote:

On 27 January 2017 at 03:53, Robert Haas <robertmhaas@gmail.com> wrote:

Sorry, this had slipped through the cracks -- I'm having a very hard
time keeping up with the flow of patches and emails. But it looks
good to me, except that it seems like CountDBBackends() needs the same
fix (and probably a corresponding documentation update).

Thanks for looking at this.

Looks like there are a few other usages of CountDBBackends() which
require background workers to be counted, so I ended up creating
CountDBConnections(), as I didn't really think adding a bool flag to
CountDBBackends() was very nice.

I thought about renaming CountUserBackends() to
CountUserConnections(), but I've not done so. Although, perhaps it's
better to break any third-party code that uses it, so that authors can
review which behaviour they need, rather than have their extension
silently break?

I'm inclined to keep this as is - I don't think we should change the
names, at least in the stable releases. I'm not sure how far back it
should be patched. The real effect is going to be felt from 9.6, I
think, but arguably for consistency we should back-patch it to 9.3 or
9.4. Thoughts?

Other things being equal I intend to commit this later today.

cheers

andrew

--
Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#11 Albe Laurenz
laurenz.albe@wien.gv.at
In reply to: Andrew Dunstan (#10)
Re: CONNECTION LIMIT and Parallel Query don't play well together

Andrew Dunstan wrote:

On 01/29/2017 04:07 PM, David Rowley wrote:

Looks like there are a few other usages of CountDBBackends() which
require background workers to be counted, so I ended up creating
CountDBConnections(), as I didn't really think adding a bool flag to
CountDBBackends() was very nice.

I thought about renaming CountUserBackends() to
CountUserConnections(), but I've not done so. Although, perhaps it's
better to break any third-party code that uses it, so that authors can
review which behaviour they need, rather than have their extension
silently break?

I'm inclined to keep this as is - I don't think we should change the
names, at least in the stable releases. I'm not sure how far back it
should be patched. The real effect is going to be felt from 9.6, I
think, but arguably for consistency we should back-patch it to 9.3 or
9.4. Thoughts?

Other things being equal I intend to commit this later today.

+1

Maybe it is better not to backpatch farther than 9.6 - I think it is
good to be conservative about backpatching, and, as you say, the effect
won't be noticeable much before 9.6.

Yours,
Laurenz Albe

#12 Peter Eisentraut
peter.eisentraut@2ndquadrant.com
In reply to: David Rowley (#5)
1 attachment(s)
Re: CONNECTION LIMIT and Parallel Query don't play well together

On 1/11/17 5:51 PM, David Rowley wrote:

Now, since background workers
don't consume anything from max_connections, I don't really feel
that a background worker should count towards "CONNECTION LIMIT". I'd
assume any CONNECTION LIMITs that are set for a user would be
calculated based on what max_connections is set to. If we want to
limit background workers in the same manner, then perhaps we'd want to
invent something like "WORKER LIMIT N" in CREATE USER.

This explanation makes sense, but it kind of upset my background
sessions patch, which would previously have been limited by per-user
connection settings.

So I would like to have a background worker limit per user, as you
allude to. Attached is a patch that implements a GUC setting
max_worker_processes_per_user.

Besides the uses for background sessions, it can also be useful for
parallel workers, logical replication apply workers, or things like
third-party partitioning extensions.

Thoughts?

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachments:

0001-Add-max_worker_processes_per_user-setting.patch (text/x-patch)
From eca26d5858fa0750427b15f4439d8936daff4d8b Mon Sep 17 00:00:00 2001
From: Peter Eisentraut <peter_e@gmx.net>
Date: Tue, 14 Feb 2017 22:20:02 -0500
Subject: [PATCH] Add max_worker_processes_per_user setting

---
 doc/src/sgml/config.sgml          | 22 ++++++++++++++++++++++
 doc/src/sgml/ref/create_role.sgml |  3 ++-
 src/backend/postmaster/bgworker.c | 28 ++++++++++++++++++++++++++++
 src/backend/utils/init/globals.c  |  1 +
 src/backend/utils/misc/guc.c      | 12 ++++++++++++
 src/include/miscadmin.h           |  1 +
 6 files changed, 66 insertions(+), 1 deletion(-)

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index dc63d7d5e4..ba74556444 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -2013,6 +2013,28 @@ <title>Asynchronous Behavior</title>
        </listitem>
       </varlistentry>
 
+      <varlistentry id="guc-max-worker-processes-per-user" xreflabel="max_worker_processes_per_user">
+       <term><varname>max_worker_processes_per_user</varname> (<type>integer</type>)
+       <indexterm>
+        <primary><varname>max_worker_processes_per_user</> configuration parameter</primary>
+       </indexterm>
+       </term>
+       <listitem>
+        <para>
+         Sets the maximum number of background processes allowed per user.  If
+         the setting is -1, then there is no limit.  That is also the default.
+        </para>
+
+        <para>
+         Only superusers can change this setting.
+         Unlike <varname>max_worker_processes</varname>, which controls the
+         overall instance limit, this setting can also be changed at run time
+         and can be set differently for different users by
+         using <literal>ALTER ROLE ... SET</literal>.
+        </para>
+       </listitem>
+      </varlistentry>
+
       <varlistentry id="guc-max-parallel-workers-per-gather" xreflabel="max_parallel_workers_per_gather">
        <term><varname>max_parallel_workers_per_gather</varname> (<type>integer</type>)
        <indexterm>
diff --git a/doc/src/sgml/ref/create_role.sgml b/doc/src/sgml/ref/create_role.sgml
index 2ae576ede6..4d0b8127f9 100644
--- a/doc/src/sgml/ref/create_role.sgml
+++ b/doc/src/sgml/ref/create_role.sgml
@@ -201,7 +201,8 @@ <title>Parameters</title>
         the role can make.  -1 (the default) means no limit. Note that only
         normal connections are counted towards this limit. Neither prepared
         transactions nor background worker connections are counted towards
-        this limit.
+        this limit (see <xref linkend="guc-max-worker-processes-per-user">
+        for the latter).
        </para>
       </listitem>
      </varlistentry>
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index cd99b0b392..f1045ddcd2 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -77,6 +77,7 @@ typedef struct BackgroundWorkerSlot
 	bool		in_use;
 	bool		terminate;
 	pid_t		pid;			/* InvalidPid = not started yet; 0 = dead */
+	Oid			roleid;			/* user responsible for it */
 	uint64		generation;		/* incremented when slot is recycled */
 	BackgroundWorker worker;
 } BackgroundWorkerSlot;
@@ -905,6 +906,32 @@ RegisterDynamicBackgroundWorker(BackgroundWorker *worker,
 	}
 
 	/*
+	 * Check number of used slots for user
+	 */
+	if (max_worker_processes_per_user >= 0)
+	{
+		int		count = 0;
+
+		for (slotno = 0; slotno < BackgroundWorkerData->total_slots; ++slotno)
+		{
+			BackgroundWorkerSlot *slot = &BackgroundWorkerData->slot[slotno];
+
+			if (slot->in_use && slot->roleid == GetUserId())
+				count++;
+		}
+
+		if (count > max_worker_processes_per_user)
+		{
+			ereport(LOG,
+					(errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED),
+					 errmsg("too many worker processes for role \"%s\"",
+							GetUserNameFromId(GetUserId(), false))));
+			LWLockRelease(BackgroundWorkerLock);
+			return false;
+		}
+	}
+
+	/*
 	 * Look for an unused slot.  If we find one, grab it.
 	 */
 	for (slotno = 0; slotno < BackgroundWorkerData->total_slots; ++slotno)
@@ -915,6 +942,7 @@ RegisterDynamicBackgroundWorker(BackgroundWorker *worker,
 		{
 			memcpy(&slot->worker, worker, sizeof(BackgroundWorker));
 			slot->pid = InvalidPid;		/* indicates not started yet */
+			slot->roleid = GetUserId();
 			slot->generation++;
 			slot->terminate = false;
 			generation = slot->generation;
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 08b6030a64..f92c8c196f 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -122,6 +122,7 @@ int			replacement_sort_tuples = 150000;
 int			NBuffers = 1000;
 int			MaxConnections = 90;
 int			max_worker_processes = 8;
+int			max_worker_processes_per_user = -1;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
 
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 0249721204..bbae5dc9f8 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -2480,6 +2480,18 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"max_worker_processes_per_user",
+			PGC_SUSET,
+			RESOURCES_ASYNCHRONOUS,
+			gettext_noop("Maximum number of concurrent worker processes per user."),
+			NULL,
+		},
+		&max_worker_processes_per_user,
+		-1, -1, MAX_BACKENDS,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"max_logical_replication_workers",
 			PGC_POSTMASTER,
 			RESOURCES_ASYNCHRONOUS,
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 4c607b299c..1c96f8876c 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -157,6 +157,7 @@ extern PGDLLIMPORT int NBuffers;
 extern int	MaxBackends;
 extern int	MaxConnections;
 extern int	max_worker_processes;
+extern int	max_worker_processes_per_user;
 extern int	max_parallel_workers;
 
 extern PGDLLIMPORT int MyProcPid;
-- 
2.11.1

#13 Robert Haas
robertmhaas@gmail.com
In reply to: Peter Eisentraut (#12)
Re: CONNECTION LIMIT and Parallel Query don't play well together

On Wed, Feb 15, 2017 at 11:19 AM, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:

On 1/11/17 5:51 PM, David Rowley wrote:

Now, since background workers
don't consume anything from max_connections, I don't really feel
that a background worker should count towards "CONNECTION LIMIT". I'd
assume any CONNECTION LIMITs that are set for a user would be
calculated based on what max_connections is set to. If we want to
limit background workers in the same manner, then perhaps we'd want to
invent something like "WORKER LIMIT N" in CREATE USER.

This explanation makes sense, but it kind of upset my background
sessions patch, which would previously have been limited by per-user
connection settings.

So I would like to have a background worker limit per user, as you
allude to. Attached is a patch that implements a GUC setting
max_worker_processes_per_user.

Besides the uses for background sessions, it can also be useful for
parallel workers, logical replication apply workers, or things like
third-party partitioning extensions.

Thoughts?

This isn't going to deliver consistent results if it's set differently
in different sessions, although maybe you could weasel around that by
wording the documentation in just the right way. It seems OK
otherwise.
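
To spell out the inconsistency: the patch compares the count of in-use
slots against whatever value the *registering* session happens to
have. A self-contained toy model of the check (all names here are
hypothetical; the comparison mirrors the patch's count > limit test):

#include <stdbool.h>
#include <stdio.h>

#define TOTAL_SLOTS 8

static bool slot_in_use[TOTAL_SLOTS];
static int	slot_role[TOTAL_SLOTS];

/* count the role's in-use slots, then apply the caller's own limit */
static bool
register_worker(int roleid, int session_limit)
{
	int			count = 0;

	for (int i = 0; i < TOTAL_SLOTS; i++)
		if (slot_in_use[i] && slot_role[i] == roleid)
			count++;

	if (session_limit >= 0 && count > session_limit)
		return false;

	for (int i = 0; i < TOTAL_SLOTS; i++)
		if (!slot_in_use[i])
		{
			slot_in_use[i] = true;
			slot_role[i] = roleid;
			return true;
		}
	return false;				/* no free slots */
}

int
main(void)
{
	/* role 10: session A has the GUC set to 1, session B has it at 4 */
	printf("A: %d\n", register_worker(10, 1)); /* 1: count 0, not > 1 */
	printf("A: %d\n", register_worker(10, 1)); /* 1: count 1, not > 1 */
	printf("A: %d\n", register_worker(10, 1)); /* 0: count 2 > 1 */
	printf("B: %d\n", register_worker(10, 4)); /* 1: B's limit applies */
	return 0;
}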

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#14 Peter Eisentraut
peter.eisentraut@2ndquadrant.com
In reply to: Peter Eisentraut (#12)
Re: CONNECTION LIMIT and Parallel Query don't play well together

On 2/15/17 11:19, Peter Eisentraut wrote:

So I would like to have a background worker limit per user, as you
allude to. Attached is a patch that implements a GUC setting
max_worker_processes_per_user.

Besides the uses for background sessions, it can also be useful for
parallel workers, logical replication apply workers, or things like
third-party partitioning extensions.

Given that background sessions have been postponed, is there still
interest in this separate from that? It would be useful for per-user
parallel worker limits, for example.

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#15 Peter Eisentraut
peter.eisentraut@2ndquadrant.com
In reply to: Peter Eisentraut (#14)
1 attachment(s)
Re: CONNECTION LIMIT and Parallel Query don't play well together

On 4/6/17 15:01, Peter Eisentraut wrote:

On 2/15/17 11:19, Peter Eisentraut wrote:

So I would like to have a background worker limit per user, as you
allude to. Attached is a patch that implements a GUC setting
max_worker_processes_per_user.

Besides the uses for background sessions, it can also be useful for
parallel workers, logical replication apply workers, or things like
third-party partitioning extensions.

Given that background sessions have been postponed, is there still
interest in this separate from that? It would be useful for per-user
parallel worker limits, for example.

Here is a slightly updated patch for consideration in the upcoming
commit fest.

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachments:

v2-0001-Add-max_worker_processes_per_user-setting.patch (text/plain)
From e99d8ccf9dfd007c054b20e8d10ffb35cfb722b7 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut <peter_e@gmx.net>
Date: Wed, 23 Aug 2017 19:10:21 -0400
Subject: [PATCH v2] Add max_worker_processes_per_user setting

---
 doc/src/sgml/config.sgml          | 28 ++++++++++++++++++++++++++++
 doc/src/sgml/ref/create_role.sgml |  3 ++-
 src/backend/postmaster/bgworker.c | 28 ++++++++++++++++++++++++++++
 src/backend/utils/init/globals.c  |  1 +
 src/backend/utils/misc/guc.c      | 12 ++++++++++++
 src/include/miscadmin.h           |  1 +
 6 files changed, 72 insertions(+), 1 deletion(-)

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 2b6255ed95..24f1d13f8a 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -2041,6 +2041,34 @@ <title>Asynchronous Behavior</title>
        </listitem>
       </varlistentry>
 
+      <varlistentry id="guc-max-worker-processes-per-user" xreflabel="max_worker_processes_per_user">
+       <term><varname>max_worker_processes_per_user</varname> (<type>integer</type>)
+       <indexterm>
+        <primary><varname>max_worker_processes_per_user</> configuration parameter</primary>
+       </indexterm>
+       </term>
+       <listitem>
+        <para>
+         Sets the maximum number of background processes allowed per user.  If
+         the setting is -1, then there is no limit.  That is also the default.
+        </para>
+
+        <para>
+         Only superusers can change this setting.
+         Unlike <varname>max_worker_processes</varname>, which controls the
+         overall instance limit, this setting can also be changed at run time
+         and can be set differently for different users by
+         using <literal>ALTER ROLE ... SET</literal>.
+        </para>
+
+        <para>
+         This setting is checked at the time the worker process is started
+         using the setting in the session that causes it to start.  Changes to
+         the setting will not affect already running workers.
+        </para>
+       </listitem>
+      </varlistentry>
+
       <varlistentry id="guc-max-parallel-workers-per-gather" xreflabel="max_parallel_workers_per_gather">
        <term><varname>max_parallel_workers_per_gather</varname> (<type>integer</type>)
        <indexterm>
diff --git a/doc/src/sgml/ref/create_role.sgml b/doc/src/sgml/ref/create_role.sgml
index 36772b678a..a7c547d907 100644
--- a/doc/src/sgml/ref/create_role.sgml
+++ b/doc/src/sgml/ref/create_role.sgml
@@ -204,7 +204,8 @@ <title>Parameters</title>
         the role can make.  -1 (the default) means no limit. Note that only
         normal connections are counted towards this limit. Neither prepared
         transactions nor background worker connections are counted towards
-        this limit.
+        this limit (see <xref linkend="guc-max-worker-processes-per-user">
+        for the latter).
        </para>
       </listitem>
      </varlistentry>
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 28af6f0f07..443dcaa4f3 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -79,6 +79,7 @@ typedef struct BackgroundWorkerSlot
 	bool		in_use;
 	bool		terminate;
 	pid_t		pid;			/* InvalidPid = not started yet; 0 = dead */
+	Oid			roleid;			/* user responsible for it */
 	uint64		generation;		/* incremented when slot is recycled */
 	BackgroundWorker worker;
 } BackgroundWorkerSlot;
@@ -976,6 +977,32 @@ RegisterDynamicBackgroundWorker(BackgroundWorker *worker,
 		return false;
 	}
 
+	/*
+	 * Check number of used slots for user
+	 */
+	if (max_worker_processes_per_user >= 0)
+	{
+		int		count = 0;
+
+		for (slotno = 0; slotno < BackgroundWorkerData->total_slots; ++slotno)
+		{
+			BackgroundWorkerSlot *slot = &BackgroundWorkerData->slot[slotno];
+
+			if (slot->in_use && slot->roleid == GetUserId())
+				count++;
+		}
+
+		if (count > max_worker_processes_per_user)
+		{
+			ereport(LOG,
+					(errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED),
+					 errmsg("too many worker processes for role \"%s\"",
+							GetUserNameFromId(GetUserId(), false))));
+			LWLockRelease(BackgroundWorkerLock);
+			return false;
+		}
+	}
+
 	/*
 	 * Look for an unused slot.  If we find one, grab it.
 	 */
@@ -987,6 +1014,7 @@ RegisterDynamicBackgroundWorker(BackgroundWorker *worker,
 		{
 			memcpy(&slot->worker, worker, sizeof(BackgroundWorker));
 			slot->pid = InvalidPid; /* indicates not started yet */
+			slot->roleid = GetUserId();
 			slot->generation++;
 			slot->terminate = false;
 			generation = slot->generation;
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 7c09498dc0..fa1b689ae7 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -123,6 +123,7 @@ int			replacement_sort_tuples = 150000;
 int			NBuffers = 1000;
 int			MaxConnections = 90;
 int			max_worker_processes = 8;
+int			max_worker_processes_per_user = -1;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
 
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 246fea8693..23dc33bd61 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -2506,6 +2506,18 @@ static struct config_int ConfigureNamesInt[] =
 		check_max_worker_processes, NULL, NULL
 	},
 
+	{
+		{"max_worker_processes_per_user",
+			PGC_SUSET,
+			RESOURCES_ASYNCHRONOUS,
+			gettext_noop("Maximum number of concurrent worker processes per user."),
+			NULL,
+		},
+		&max_worker_processes_per_user,
+		-1, -1, MAX_BACKENDS,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"max_logical_replication_workers",
 			PGC_POSTMASTER,
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index dad98de98d..a9f21cc34b 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -158,6 +158,7 @@ extern PGDLLIMPORT int NBuffers;
 extern int	MaxBackends;
 extern int	MaxConnections;
 extern int	max_worker_processes;
+extern int	max_worker_processes_per_user;
 extern int	max_parallel_workers;
 
 extern PGDLLIMPORT int MyProcPid;
-- 
2.14.1

#16 David Rowley
david.rowley@2ndquadrant.com
In reply to: Peter Eisentraut (#15)
Re: CONNECTION LIMIT and Parallel Query don't play well together

On 24 August 2017 at 11:15, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:

Here is a slightly updated patch for consideration in the upcoming
commit fest.

Hi Peter,

I just had a quick glance over this and wondered about 2 things.

1. Why a GUC and not a new per-user option, so it can be configured
differently for different users? Something like ALTER USER ... WORKER
LIMIT <n>, perhaps. I mentioned this up-thread a bit.

2.

+		if (count > max_worker_processes_per_user)
+		{
+			ereport(LOG,
+					(errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED),
+					 errmsg("too many worker processes for role \"%s\"",
+							GetUserNameFromId(GetUserId(), false))));
+			LWLockRelease(BackgroundWorkerLock);
+			return false;

Unless I've misunderstood something, it seems that this is going to
give users random errors which might only occur when they run
queries against larger tables. Part of why it made sense not to count
workers towards the CONNECTION LIMIT was the fact that we didn't want
to throw these random errors when workers could not be obtained, given
that we take precautions in other places to just silently run with
fewer workers. There's lots of discussion earlier in this thread about
this, and I don't think anyone was in favour of queries randomly
working only some of the time.

--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#17 Michael Paquier
michael.paquier@gmail.com
In reply to: David Rowley (#16)
Re: [HACKERS] CONNECTION LIMIT and Parallel Query don't play well together

On Fri, Aug 25, 2017 at 6:25 PM, David Rowley
<david.rowley@2ndquadrant.com> wrote:

I just had a quick glance over this and wondered about 2 things.

1. Why a GUC and not a new per-user option, so it can be configured
differently for different users? Something like ALTER USER ... WORKER
LIMIT <n>, perhaps. I mentioned this up-thread a bit.

2.

+		if (count > max_worker_processes_per_user)
+		{
+			ereport(LOG,
+					(errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED),
+					 errmsg("too many worker processes for role \"%s\"",
+							GetUserNameFromId(GetUserId(), false))));
+			LWLockRelease(BackgroundWorkerLock);
+			return false;

Unless I've misunderstood something, it seems that this is going to
give users random errors which might only occur when they run
queries against larger tables. Part of why it made sense not to count
workers towards the CONNECTION LIMIT was the fact that we didn't want
to throw these random errors when workers could not be obtained, given
that we take precautions in other places to just silently run with
fewer workers. There's lots of discussion earlier in this thread about
this, and I don't think anyone was in favour of queries randomly
working only some of the time.

The status of the patch is incorrect, I think. It was marked as needs
review, but I can see some input here which has remained unanswered for
three months. I am marking this patch as returned with feedback.
--
Michael