Parallel query hangs after a smart shutdown is issued
Hi,
After a smart shutdown is issued (with pg_ctl), running a parallel
query causes the query to hang. The postmaster doesn't inform regular
backends about the smart shutdown (see pmdie(): on SIGTERM, backends of
BACKEND_TYPE_NORMAL are not signaled), so if they request parallel
workers, the postmaster is unable to fork any, as its status (pmState)
has changed to PM_WAIT_BACKENDS (see maybe_start_bgworkers() ->
bgworker_should_start_now(), which returns false).
A few ways we could solve this:
1. Do we want to disallow parallelism when there is a pending smart
shutdown? If yes, the postmaster can notify the regular backends
whenever a smart shutdown is received, and the backends can use that
info to avoid planning parallel queries. If we use SIGTERM to notify
them, then, since the backends have die() as the handler, they would
just cancel their queries, which is again inconsistent behaviour [1].
Could another signal such as SIGUSR2 (I think it's currently ignored
by backends) be used here? If the signals are all overloaded, can we
multiplex SIGTERM similar to SIGUSR1? If we don't want to use signals
at all, the postmaster could record its status in the bgworker shared
memory (BackgroundWorkerData), and RegisterDynamicBackgroundWorker()
could simply return without requesting parallel workers from the
postmaster.
2. If we want to allow parallelism, we can tweak
bgworker_should_start_now() to detect that the pending bgworker fork
requests are for parallel workers and let the postmaster start them.
Thoughts?
Note: this issue was identified while working on [1].
[1]: /messages/by-id/CALj2ACWTAQ2uWgj4yRFLQ6t15MMYV_uc3GCT5F5p8R9pzrd7yQ@mail.gmail.com
With Regards,
Bharath Rupireddy.
EnterpriseDB: http://www.enterprisedb.com
On Thu, Aug 13, 2020 at 3:32 AM Bharath Rupireddy
<bharath.rupireddyforpostgres@gmail.com> wrote:
After a smart shutdown is issued (with pg_ctl), run a parallel query,
then the query hangs. [...]
Hello Bharath,
Yeah, the current situation is not good. I think your option 2 sounds
better, because the documented behaviour of smart shutdown is that it
"lets existing sessions end their work normally". I think that means
that a query that is already running or allowed to start should be
able to start new workers and not have its existing workers
terminated. Arseny Sher wrote a couple of different patches to try
that last year, but they fell through the cracks:
/messages/by-id/CA+hUKGLrJij0BuFtHsMHT4QnLP54Z3S6vGVBCWR8A49+NzctCw@mail.gmail.com
Thomas Munro <thomas.munro@gmail.com> writes:
On Thu, Aug 13, 2020 at 3:32 AM Bharath Rupireddy
<bharath.rupireddyforpostgres@gmail.com> wrote:
After a smart shutdown is issued (with pg_ctl), run a parallel query,
then the query hangs.
Yeah, the current situation is not good. I think your option 2 sounds
better, because the documented behaviour of smart shutdown is that it
"lets existing sessions end their work normally". I think that means
that a query that is already running or allowed to start should be
able to start new workers and not have its existing workers
terminated. Arseny Sher wrote a couple of different patches to try
that last year, but they fell through the cracks:
/messages/by-id/CA+hUKGLrJij0BuFtHsMHT4QnLP54Z3S6vGVBCWR8A49+NzctCw@mail.gmail.com
I already commented on this in the other thread that Bharath started [1]
(/messages/by-id/65189.1597181322@sss.pgh.pa.us).
I think the real issue here is why is the postmaster's SIGTERM handler
doing *anything* other than disallowing new connections? It seems quite
premature to kill support processes of any sort, not only parallel
workers. The documentation says existing clients are allowed to end
their work, not that their performance is going to be crippled until
they end.
So I looked at moving the kills of all the support processes to happen
after we detect that there are no remaining regular backends, and it
seems to not be too hard. I realized that the existing PM_WAIT_READONLY
state is doing that already, but just for a subset of support processes
that it thinks might be active in hot standby mode. So what I did in the
attached was to repurpose that state as "PM_WAIT_CLIENTS", which does the
right thing in either regular or hot standby mode.
One other thing I changed here was to remove PM_WAIT_READONLY from the
set of states in which we'll allow promotion to occur or a new walreceiver
to start. I'm not convinced that either of those behaviors aren't
bugs; although if someone thinks they're right, we can certainly put
back PM_WAIT_CLIENTS in those checks. (But, for example, it does not
appear possible to reach PM_WAIT_READONLY/PM_WAIT_CLIENTS state with
Shutdown == NoShutdown, so the test in MaybeStartWalReceiver sure looks
like confusingly dead code to me. If we do want to allow restarting
the walreceiver in this state, the Shutdown condition needs fixed.)
regards, tom lane
Attachments:
keep-bg-processes-alive-in-smart-shutdown-1.patch (text/x-diff; +47, -52)
On Thu, Aug 13, 2020 at 6:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Thomas Munro <thomas.munro@gmail.com> writes:
On Thu, Aug 13, 2020 at 3:32 AM Bharath Rupireddy
<bharath.rupireddyforpostgres@gmail.com> wrote:
After a smart shutdown is issued (with pg_ctl), run a parallel query,
then the query hangs.
Yeah, the current situation is not good. I think your option 2 sounds
better, because the documented behaviour of smart shutdown is that it
"lets existing sessions end their work normally". I think that means
that a query that is already running or allowed to start should be
able to start new workers and not have its existing workers
terminated. Arseny Sher wrote a couple of different patches to try
that last year, but they fell through the cracks:
/messages/by-id/CA+hUKGLrJij0BuFtHsMHT4QnLP54Z3S6vGVBCWR8A49+NzctCw@mail.gmail.com
I already commented on this in the other thread that Bharath started [1].
I think the real issue here is why is the postmaster's SIGTERM handler
doing *anything* other than disallowing new connections? It seems quite
premature to kill support processes of any sort, not only parallel
workers. The documentation says existing clients are allowed to end
their work, not that their performance is going to be crippled until
they end.
Right. It's pretty strange that during smart shutdown, you could run
for hours with no autovacuum, walwriter, bgwriter. I guess Arseny and
I were looking for a minimal change to fix a bug, but clearly there's a
more general problem and this change works out cleaner anyway.
So I looked at moving the kills of all the support processes to happen
after we detect that there are no remaining regular backends, and it
seems to not be too hard. I realized that the existing PM_WAIT_READONLY
state is doing that already, but just for a subset of support processes
that it thinks might be active in hot standby mode. So what I did in the
attached was to repurpose that state as "PM_WAIT_CLIENTS", which does the
right thing in either regular or hot standby mode.
Makes sense; works as expected and passes check-world.
One other thing I changed here was to remove PM_WAIT_READONLY from the
set of states in which we'll allow promotion to occur or a new walreceiver
to start. I'm not convinced that either of those behaviors aren't
bugs; although if someone thinks they're right, we can certainly put
back PM_WAIT_CLIENTS in those checks. (But, for example, it does not
appear possible to reach PM_WAIT_READONLY/PM_WAIT_CLIENTS state with
Shutdown == NoShutdown, so the test in MaybeStartWalReceiver sure looks
like confusingly dead code to me. If we do want to allow restarting
the walreceiver in this state, the Shutdown condition needs fixed.)
If a walreceiver is allowed to run, why should it not be allowed to
restart? Yeah, I suppose that other test'd need to be Shutdown <=
SmartShutdown, just like we do in SIGHUP_handler(). Looking at other
places where we test Shutdown == NoShutdown, one that jumps out is the
autovacuum wraparound defence stuff and the nearby
PMSIGNAL_START_AUTOVAC_WORKER code.
Thomas Munro <thomas.munro@gmail.com> writes:
On Thu, Aug 13, 2020 at 6:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
One other thing I changed here was to remove PM_WAIT_READONLY from the
set of states in which we'll allow promotion to occur or a new walreceiver
to start. I'm not convinced that either of those behaviors aren't
bugs; although if someone thinks they're right, we can certainly put
back PM_WAIT_CLIENTS in those checks. (But, for example, it does not
appear possible to reach PM_WAIT_READONLY/PM_WAIT_CLIENTS state with
Shutdown == NoShutdown, so the test in MaybeStartWalReceiver sure looks
like confusingly dead code to me. If we do want to allow restarting
the walreceiver in this state, the Shutdown condition needs fixed.)
If a walreceiver is allowed to run, why should it not be allowed to
restart?
I'd come to about the same conclusion after thinking more, so v2
attached undoes that change. I think putting off promotion is fine
though; it'll get handled at the next postmaster start. (It looks
like the state machine would just proceed to exit anyway if we allowed
the promotion, but that's a hard-to-test state transition that we could
do without.)
Yeah, I suppose that other test'd need to be Shutdown <=
SmartShutdown, just like we do in SIGHUP_handler(). Looking at other
places where we test Shutdown == NoShutdown, one that jumps out is the
autovacuum wraparound defence stuff and the nearby
PMSIGNAL_START_AUTOVAC_WORKER code.
Oh, excellent point! I'd not thought to look at tests of the Shutdown
variable, but yeah, those should be <= SmartShutdown if we want autovac
to continue to operate in this state.
I also noticed that where reaper() is dealing with startup process
exit(3), it unconditionally sets Shutdown = SmartShutdown which seems
pretty bogus; that variable's value should never be allowed to decrease,
but this could cause it. In the attached I did
StartupStatus = STARTUP_NOT_RUNNING;
- Shutdown = SmartShutdown;
+ Shutdown = Max(Shutdown, SmartShutdown);
TerminateChildren(SIGTERM);
But given that it's forcing immediate termination of all backends,
I wonder if that's not more like a FastShutdown? (Scary here is
that the coverage report shows we're not testing this path, so who
knows if it works at all.)
regards, tom lane
Attachments:
keep-bg-processes-alive-in-smart-shutdown-2.patch (text/x-diff; +52, -57)
I wrote:
Oh, excellent point! I'd not thought to look at tests of the Shutdown
variable, but yeah, those should be <= SmartShutdown if we want autovac
to continue to operate in this state.
On looking closer, there's another problem: setting start_autovac_launcher
isn't enough to get the AV launcher to run, because ServerLoop() won't
launch it except in PM_RUN state. Likewise, the other "relaunch a dead
process" checks in ServerLoop() need to be generalized to support
relaunching background processes while we're waiting out the foreground
clients. So that leads me to the attached v3. I had to re-instantiate
PM_WAIT_READONLY as an alternate state to PM_WAIT_CLIENTS; these states
are about the same so far as PostmasterStateMachine is concerned, but
some of the should-we-launch-FOO checks care about the difference.
The various pmState tests are getting messy enough to cry out for
refactorization, but I've not attempted that here. There's enough
variance in the conditions for launching different subprocesses that
I'm not very sure what would be a nicer-looking way to write them.
regards, tom lane
Attachments:
keep-bg-processes-alive-in-smart-shutdown-3.patch (text/x-diff; +70, -63)
On Thu, Aug 13, 2020 at 8:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
I wrote:
Oh, excellent point! I'd not thought to look at tests of the Shutdown
variable, but yeah, those should be <= SmartShutdown if we want autovac
to continue to operate in this state.
On looking closer, there's another problem: setting start_autovac_launcher
isn't enough to get the AV launcher to run, because ServerLoop() won't
launch it except in PM_RUN state. Likewise, the other "relaunch a dead
process" checks in ServerLoop() need to be generalized to support
relaunching background processes while we're waiting out the foreground
clients. So that leads me to the attached v3. I had to re-instantiate
PM_WAIT_READONLY as an alternate state to PM_WAIT_CLIENTS; these states
are about the same so far as PostmasterStateMachine is concerned, but
some of the should-we-launch-FOO checks care about the difference.
I think we also need:
@@ -2459,6 +2459,9 @@ canAcceptConnections(int backend_type)
{
if (pmState == PM_WAIT_BACKUP)
result = CAC_WAITBACKUP; /* allow
superusers only */
+ else if (Shutdown <= SmartShutdown &&
+ backend_type == BACKEND_TYPE_AUTOVAC)
+ result = CAC_OK;
else if (Shutdown > NoShutdown)
return CAC_SHUTDOWN; /* shutdown is pending */
else if (!FatalError &&
Retesting the original complaint, I think we need:
@@ -5911,11 +5912,11 @@ bgworker_should_start_now(BgWorkerStartTime start_time)
case PM_SHUTDOWN_2:
case PM_SHUTDOWN:
case PM_WAIT_BACKENDS:
- case PM_WAIT_READONLY:
- case PM_WAIT_CLIENTS:
case PM_WAIT_BACKUP:
break;
+ case PM_WAIT_READONLY:
+ case PM_WAIT_CLIENTS:
case PM_RUN:
if (start_time == BgWorkerStart_RecoveryFinished)
return true;
Thomas Munro <thomas.munro@gmail.com> writes:
I think we also need:
+        else if (Shutdown <= SmartShutdown &&
+                 backend_type == BACKEND_TYPE_AUTOVAC)
+            result = CAC_OK;
Hm, ok.
Retesting the original complaint, I think we need:
@@ -5911,11 +5912,11 @@ bgworker_should_start_now(BgWorkerStartTime start_time)
+ case PM_WAIT_READONLY:
+ case PM_WAIT_CLIENTS:
  case PM_RUN:
So the question here is whether time-based bgworkers should be allowed to
restart in this scenario. I'm not quite sure --- depending on what the
bgworker's purpose is, you could make an argument either way, I think.
Do we need some way to control that?
In any case, we'd want to treat PM_WAIT_READONLY like PM_HOT_STANDBY not
PM_RUN, no? Also, the state before PM_WAIT_READONLY could have been
PM_RECOVERY or PM_STARTUP, in which case we don't really want to think
it's like PM_HOT_STANDBY either; only the BgWorkerStart_PostmasterStart
case should be accepted. That suggests that we need yet another pmState,
or else a more thoroughgoing refactoring of how the postmaster's state
is represented.
regards, tom lane
On Thu, Aug 13, 2020 at 10:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Thomas Munro <thomas.munro@gmail.com> writes:
@@ -5911,11 +5912,11 @@ bgworker_should_start_now(BgWorkerStartTime start_time)
+ case PM_WAIT_READONLY:
+ case PM_WAIT_CLIENTS:
  case PM_RUN:
So the question here is whether time-based bgworkers should be allowed to
restart in this scenario. I'm not quite sure --- depending on what the
bgworker's purpose is, you could make an argument either way, I think.
Do we need some way to control that?
I'm not sure why any bgworker would actually want to be shut down or
not restarted during the twilight zone of a smart shutdown though --
if users can do arbitrary stuff, why can't supporting workers carry
on? For example, a hypothetical extension that triggers vacuum freeze
at smarter times, or a wait event sampling extension, an FDW that uses
an extra worker to maintain a connection to something, etc etc could
all be things that a user is indirectly relying on to do their normal
work, and I struggle to think of an example of something that you
explicitly don't want running just because (in some sense) the server
*plans* to shut down, when the users get around to logging off. But
maybe I lack imagination.
In any case, we'd want to treat PM_WAIT_READONLY like PM_HOT_STANDBY not
PM_RUN, no?
Yeah, you're right.
Also, the state before PM_WAIT_READONLY could have been
PM_RECOVERY or PM_STARTUP, in which case we don't really want to think
it's like PM_HOT_STANDBY either; only the BgWorkerStart_PostmasterStart
case should be accepted. That suggests that we need yet another pmState,
or else a more thoroughgoing refactoring of how the postmaster's state
is represented.
Hmm.
Thomas Munro <thomas.munro@gmail.com> writes:
On Thu, Aug 13, 2020 at 10:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Also, the state before PM_WAIT_READONLY could have been
PM_RECOVERY or PM_STARTUP, in which case we don't really want to think
it's like PM_HOT_STANDBY either; only the BgWorkerStart_PostmasterStart
case should be accepted. That suggests that we need yet another pmState,
or else a more thoroughgoing refactoring of how the postmaster's state
is represented.
Hmm.
I experimented with separating the shutdown-in-progress state into a
separate variable, letting us actually reduce not increase the number of
pmStates. This way, PM_RUN and other states still apply until we're
ready to pull the shutdown trigger, so that we don't need to complicate
state-based decisions about launching auxiliary processes. This patch
also unifies the signal-sending for the smart and fast shutdown paths,
which seems like a nice improvement. I kind of like this, though I'm not
in love with the particular variable name I used here (smartShutState).
If we go this way, CAC_WAITBACKUP ought to be renamed since the PMState
it's named after no longer exists. I left that alone pending making
final naming choices, though.
regards, tom lane
Attachments:
keep-bg-processes-alive-in-smart-shutdown-4.patch (text/x-diff; +118, -105)
On Thu, Aug 13, 2020 at 2:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
I experimented with separating the shutdown-in-progress state into a
separate variable, letting us actually reduce not increase the number of
pmStates. This way, PM_RUN and other states still apply until we're
ready to pull the shutdown trigger, so that we don't need to complicate
state-based decisions about launching auxiliary processes. This patch
also unifies the signal-sending for the smart and fast shutdown paths,
which seems like a nice improvement. I kind of like this, though I'm not
in love with the particular variable name I used here (smartShutState).
Makes sense. I tested this version on a primary and a replica and
verified that parallel workers launch, but I saw that autovacuum
workers still can't start without something like this:
@@ -2463,7 +2463,8 @@ canAcceptConnections(int backend_type)
* be returned until we have checked for too many children.
*/
if (smartShutState != SMART_NORMAL_USAGE &&
- backend_type != BACKEND_TYPE_BGWORKER)
+ backend_type != BACKEND_TYPE_BGWORKER &&
+ backend_type != BACKEND_TYPE_AUTOVAC)
{
if (smartShutState == SMART_SUPERUSER_ONLY)
result = CAC_WAITBACKUP; /* allow
superusers only */
@@ -2471,7 +2472,8 @@ canAcceptConnections(int backend_type)
return CAC_SHUTDOWN; /* shutdown is pending */
}
if (pmState != PM_RUN &&
- backend_type != BACKEND_TYPE_BGWORKER)
+ backend_type != BACKEND_TYPE_BGWORKER &&
+ backend_type != BACKEND_TYPE_AUTOVAC)
{
if (Shutdown > NoShutdown)
return CAC_SHUTDOWN; /* shutdown is pending */
Thomas Munro <thomas.munro@gmail.com> writes:
Makes sense. I tested this version on a primary and a replica and
verified that parallel workers launch, but I saw that autovacuum
workers still can't start without something like this:
@@ -2463,7 +2463,8 @@ canAcceptConnections(int backend_type)
          * be returned until we have checked for too many children.
          */
         if (smartShutState != SMART_NORMAL_USAGE &&
-            backend_type != BACKEND_TYPE_BGWORKER)
+            backend_type != BACKEND_TYPE_BGWORKER &&
+            backend_type != BACKEND_TYPE_AUTOVAC)
Hmmm ... maybe that should be more like
if (smartShutState != SMART_NORMAL_USAGE &&
backend_type == BACKEND_TYPE_NORMAL)
(the adjacent comment needs adjustment too of course).
regards, tom lane
I wrote:
Hmmm ... maybe that should be more like
if (smartShutState != SMART_NORMAL_USAGE &&
backend_type == BACKEND_TYPE_NORMAL)
After some more rethinking and testing, here's a v5 that feels
fairly final to me. I realized that the logic in canAcceptConnections
was kind of backwards: it's better to check the main pmState restrictions
first and then the smart-shutdown restrictions afterwards.
I'm assuming we want to back-patch this as far as 9.6, where parallel
query began to be a thing.
regards, tom lane
Attachments:
keep-bg-processes-alive-in-smart-shutdown-5.patch (text/x-diff; +126, -121)
On Fri, Aug 14, 2020 at 4:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
I wrote:
Hmmm ... maybe that should be more like
if (smartShutState != SMART_NORMAL_USAGE &&
backend_type == BACKEND_TYPE_NORMAL)
After some more rethinking and testing, here's a v5 that feels
fairly final to me. I realized that the logic in canAcceptConnections
was kind of backwards: it's better to check the main pmState restrictions
first and then the smart-shutdown restrictions afterwards.
LGTM. I tested this a bit today and it did what I expected for
parallel queries and vacuum, on primary and standby.
I'm assuming we want to back-patch this as far as 9.6, where parallel
query began to be a thing.
Yeah. I mean, it's more radical than what I thought we'd be doing for
this, but you could get into real trouble by running in smart shutdown
mode without the autovac infrastructure alive.
Thomas Munro <thomas.munro@gmail.com> writes:
On Fri, Aug 14, 2020 at 4:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
After some more rethinking and testing, here's a v5 that feels
fairly final to me. I realized that the logic in canAcceptConnections
was kind of backwards: it's better to check the main pmState restrictions
first and then the smart-shutdown restrictions afterwards.
LGTM. I tested this a bit today and it did what I expected for
parallel queries and vacuum, on primary and standby.
Thanks for reviewing! I'll do the back-patching and push this today.
I'm assuming we want to back-patch this as far as 9.6, where parallel
query began to be a thing.
Yeah. I mean, it's more radical than what I thought we'd be doing for
this, but you could get into real trouble by running in smart shutdown
mode without the autovac infrastructure alive.
Right. 99.99% of the time, that early shutdown doesn't really cause
any problems, which is how we've gotten away with it this long. But if
someone did leave session(s) running for a long time after issuing the
SIGTERM, the results could be bad --- and there's no obvious benefit
to the early shutdowns anyway.
regards, tom lane
Tom Lane <tgl@sss.pgh.pa.us> writes:
Thomas Munro <thomas.munro@gmail.com> writes:
On Fri, Aug 14, 2020 at 4:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
After some more rethinking and testing, here's a v5 that feels
fairly final to me. I realized that the logic in canAcceptConnections
was kind of backwards: it's better to check the main pmState restrictions
first and then the smart-shutdown restrictions afterwards.
LGTM. I tested this a bit today and it did what I expected for
parallel queries and vacuum, on primary and standby.
Thanks for reviewing! I'll do the back-patching and push this today.
FWIW, I've also looked through the patch and it's fine. Moderate testing
also found no issues, check-world works, bgws are started during smart
shutdown as expected. And surely this is better than the initial
short hack of allowing only parallel workers.
Arseny Sher <a.sher@postgrespro.ru> writes:
FWIW, I've also looked through the patch and it's fine. Moderate testing
also found no issues, check-world works, bgws are started during smart
shutdown as expected. And surely this is better than the initial
short hack of allowing only parallel workers.
Thanks, appreciate the extra look. It's pushed now.
regards, tom lane