pgsql: Add max_parallel_workers GUC.
Increase the default value of the existing max_worker_processes GUC
from 8 to 16, and add a new max_parallel_workers GUC with a maximum
of 8. This way, even if the maximum amount of parallel query is
happening, there is still room for background workers that do other
things, as originally envisioned when max_worker_processes was added.
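As a sketch, the out-of-the-box settings this commit describes would look
like this in postgresql.conf (comments are editorial, not from the commit):

```
max_worker_processes = 16	# total background-worker slots
max_parallel_workers = 8	# of those, at most 8 usable by parallel query
```
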
Julien Rouhaud, reviewed by Amit Kapila and revised by me.
Branch
------
master
Details
-------
http://git.postgresql.org/pg/commitdiff/b460f5d6693103076dc554aa7cbb96e1e53074f9
Modified Files
--------------
doc/src/sgml/config.sgml | 23 ++++++++++++--
src/backend/access/transam/parallel.c | 3 +-
src/backend/postmaster/bgworker.c | 45 ++++++++++++++++++++++++++-
src/backend/utils/init/globals.c | 3 +-
src/backend/utils/misc/guc.c | 12 ++++++-
src/backend/utils/misc/postgresql.conf.sample | 3 +-
src/bin/pg_resetxlog/pg_resetxlog.c | 4 +--
src/include/miscadmin.h | 1 +
src/include/postmaster/bgworker.h | 9 ++++++
9 files changed, 93 insertions(+), 10 deletions(-)
--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers
Robert Haas <rhaas@postgresql.org> writes:
Add max_parallel_workers GUC.
Increase the default value of the existing max_worker_processes GUC
from 8 to 16, and add a new max_parallel_workers GUC with a maximum
of 8.
This broke buildfarm members coypu and sidewinder. It appears the reason
is that those machines can only get up to 30 server processes, cf this
pre-failure initdb trace:
creating directory data-C ... ok
creating subdirectories ... ok
selecting default max_connections ... 30
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... sysv
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
So you've reduced their available number of regular backends to less than
20, which is why their tests are now dotted with
! psql: FATAL: sorry, too many clients already
There may well be other machines with similar issues; we won't know until
today's other breakage clears.
We could ask the owners of these machines to reduce the test parallelism
via the MAX_CONNECTIONS makefile variable, but I wonder whether this
increase was well thought out in the first place.
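For reference, the knob Tom mentions is a variable of the regression-test
makefiles; it caps how many psql sessions the parallel test schedule opens
at once. An illustrative invocation (the value 10 is arbitrary here):

```
make check MAX_CONNECTIONS=10
```
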
regards, tom lane
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On Dec 2, 2016, at 4:07 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Robert Haas <rhaas@postgresql.org> writes:
Add max_parallel_workers GUC.
Increase the default value of the existing max_worker_processes GUC
from 8 to 16, and add a new max_parallel_workers GUC with a maximum
of 8.
This broke buildfarm members coypu and sidewinder. It appears the reason
is that those machines can only get up to 30 server processes, cf this
pre-failure initdb trace:
creating directory data-C ... ok
creating subdirectories ... ok
selecting default max_connections ... 30
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... sysv
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
So you've reduced their available number of regular backends to less than
20, which is why their tests are now dotted with
! psql: FATAL: sorry, too many clients already
There may well be other machines with similar issues; we won't know until
today's other breakage clears.
We could ask the owners of these machines to reduce the test parallelism
via the MAX_CONNECTIONS makefile variable, but I wonder whether this
increase was well thought out in the first place.
Signs point to "no". It seemed like a good idea to leave some daylight between max_parallel_workers and max_worker_processes, but evidently this wasn't the way to get there. Or else we should just give up on that thought.
...Robert
On 12/2/16 2:34 PM, Robert Haas wrote:
Signs point to "no". It seemed like a good idea to leave some daylight between max_parallel_workers and max_worker_processes, but evidently this wasn't the way to get there. Or else we should just give up on that thought.
Could the defaults be scaled based on max_connections, with a max on the
default?
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)
Jim Nasby <Jim.Nasby@BlueTreble.com> writes:
On 12/2/16 2:34 PM, Robert Haas wrote:
Signs point to "no". It seemed like a good idea to leave some daylight between max_parallel_workers and max_worker_processes, but evidently this wasn't the way to get there. Or else we should just give up on that thought.
Could the defaults be scaled based on max_connections, with a max on the
default?
Might work. We've had very bad luck with GUC variables with
interdependent defaults, but maybe the user-visible knob could be a
percentage of max_connections or something like that.
regards, tom lane
On Dec 2, 2016, at 5:45 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Jim Nasby <Jim.Nasby@BlueTreble.com> writes:
On 12/2/16 2:34 PM, Robert Haas wrote:
Signs point to "no". It seemed like a good idea to leave some daylight between max_parallel_workers and max_worker_processes, but evidently this wasn't the way to get there. Or else we should just give up on that thought.
Could the defaults be scaled based on max_connections, with a max on the
default?
Might work. We've had very bad luck with GUC variables with
interdependent defaults, but maybe the user-visible knob could be a
percentage of max_connections or something like that.
Seems like overkill. Let's just reduce the values a bit.
...Robert
Robert Haas <robertmhaas@gmail.com> writes:
On Dec 2, 2016, at 5:45 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Might work. We've had very bad luck with GUC variables with
interdependent defaults, but maybe the user-visible knob could be a
percentage of max_connections or something like that.
Seems like overkill. Let's just reduce the values a bit.
Agreed. How about max_worker_processes = 8 as before, with
max_parallel_workers of maybe 6? Or just set them both to 8.
I'm not sure that the out-of-the-box configuration needs to
leave backend slots locked down for non-parallel worker processes.
Any such process would require manual configuration anyway, no?
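The "both to 8" variant floated above would amount to this in
postgresql.conf (a sketch of one option under discussion, not a committed
default at this point in the thread):

```
max_worker_processes = 8	# back to the pre-commit default
max_parallel_workers = 8	# parallel query may use the full budget
```
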
regards, tom lane
On Sat, Dec 3, 2016 at 11:43 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Robert Haas <robertmhaas@gmail.com> writes:
On Dec 2, 2016, at 5:45 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Might work. We've had very bad luck with GUC variables with
interdependent defaults, but maybe the user-visible knob could be a
percentage of max_connections or something like that.
Seems like overkill. Let's just reduce the values a bit.
Agreed. How about max_worker_processes = 8 as before, with
max_parallel_workers of maybe 6? Or just set them both to 8.
I'm not sure that the out-of-the-box configuration needs to
leave backend slots locked down for non-parallel worker processes.
Any such process would require manual configuration anyway no?
Sure, you'd have to arrange to load the relevant module somehow. It
would be nicer if we didn't have to require additional configuration
beyond that, but I'm not prepared to ask BF owners to reconfigure
their systems just for that marginal advantage, so I think we'll have
to live with this for now.
I pushed a commit backing out the increased default, which I
originally suggested. Mea culpa.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company