Allowing parallel-safe initplans
Pursuant to the discussion at [1], here's a patch that removes our
old restriction that a plan node having initPlans can't be marked
parallel-safe (dating to commit ab77a5a45). That was really a special
case of the fact that we couldn't transmit subplans to parallel
workers at all. We fixed that in commit 5e6d8d2bb and follow-ons,
but this case never got addressed.
Along the way, this also takes care of some sloppiness about updating
path costs to match when we move initplans from one place to another
during createplan.c and setrefs.c. Since all the planning decisions are
already made by that point, this is just cosmetic; but it seems good
to keep EXPLAIN output consistent with where the initplans are.
The diff in query_planner() might be worth remarking on. I found
that one because after fixing things to allow parallel-safe initplans,
one partition_prune test case changed plans (as shown in the patch)
--- but only when debug_parallel_query was active. The reason
proved to be that we only bothered to mark Result nodes as potentially
parallel-safe when debug_parallel_query is on. This neglects the
fact that parallel-safety may be of interest for a sub-query even
though the Result itself doesn't parallelize.
There's only one existing test case that visibly changes plan with
these changes. The new plan is clearly saner-looking than before,
and testing with some data loaded into the table confirms that it
is faster. I'm not sure if it's worth devising more test cases.
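For anyone who wants to experiment, here is a self-contained sketch of
the kind of query involved (the table and data are invented for
illustration, not taken from the regression tests):

```sql
-- Hypothetical setup: a table large enough that a parallel scan is
-- attractive, plus an uncorrelated scalar subquery that the planner
-- turns into an initPlan.
CREATE TABLE demo (a int, b int);
INSERT INTO demo SELECT i, i % 100 FROM generate_series(1, 100000) i;
ANALYZE demo;

-- The subquery shows up as "InitPlan 1 (returns $0)" in EXPLAIN.
-- Under the old rule, whichever plan node carried that initPlan could
-- not be marked parallel-safe; compare plans with and without the patch.
EXPLAIN (COSTS OFF)
SELECT count(*) FROM demo WHERE b > (SELECT 50);
```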
I'll park this in the July commitfest.

[1] /messages/by-id/ZDVt6MaNWkRDO1LQ@telsasoft.com

regards, tom lane
Attachment: v1-allow-parallel-safe-initplans.patch (text/x-diff, +150/-70)
On Wed, Apr 12, 2023 at 12:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Pursuant to the discussion at [1], here's a patch that removes our
old restriction that a plan node having initPlans can't be marked
parallel-safe (dating to commit ab77a5a45). That was really a special
case of the fact that we couldn't transmit subplans to parallel
workers at all. We fixed that in commit 5e6d8d2bb and follow-ons,
but this case never got addressed.
Nice.
Along the way, this also takes care of some sloppiness about updating
path costs to match when we move initplans from one place to another
during createplan.c and setrefs.c. Since all the planning decisions are
already made by that point, this is just cosmetic; but it seems good
to keep EXPLAIN output consistent with where the initplans are.
OK. It would be nicer if we had a more principled approach here, but
that's a job for another day.
There's only one existing test case that visibly changes plan with
these changes. The new plan is clearly saner-looking than before,
and testing with some data loaded into the table confirms that it
is faster. I'm not sure if it's worth devising more test cases.
It seems like it would be nice to see one or two additional scenarios
where these changes bring a benefit, with different kinds of plan
shapes.
--
Robert Haas
EDB: http://www.enterprisedb.com
On Thu, Apr 13, 2023 at 12:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Pursuant to the discussion at [1], here's a patch that removes our
old restriction that a plan node having initPlans can't be marked
parallel-safe (dating to commit ab77a5a45). That was really a special
case of the fact that we couldn't transmit subplans to parallel
workers at all. We fixed that in commit 5e6d8d2bb and follow-ons,
but this case never got addressed.
The patch looks good to me. Some comments:
* For the diff in standard_planner, I was wondering why not move the
initPlans up to the Gather node, just as we did before. So I tried that
way but did not notice the breakage of regression tests as stated in the
comments. Would you please confirm that?
* Not related to this patch. In SS_make_initplan_from_plan, the comment
says that the node's parParam and args lists remain empty. I wonder if
we need to explicitly set node->parParam and node->args to NIL before
that comment, or can we depend on makeNode to initialize them to NIL?
There's only one existing test case that visibly changes plan with
these changes. The new plan is clearly saner-looking than before,
and testing with some data loaded into the table confirms that it
is faster. I'm not sure if it's worth devising more test cases.
I also think it's better to have more test cases covering this change.
Thanks
Richard
Richard Guo <guofenglinux@gmail.com> writes:
* For the diff in standard_planner, I was wondering why not move the
initPlans up to the Gather node, just as we did before. So I tried that
way but did not notice the breakage of regression tests as stated in the
comments. Would you please confirm that?
Try it with debug_parallel_query = regress.
* Not related to this patch. In SS_make_initplan_from_plan, the comment
says that the node's parParam and args lists remain empty. I wonder if
we need to explicitly set node->parParam and node->args to NIL before
that comment, or can we depend on makeNode to initialize them to NIL?
I'm generally a fan of explicitly initializing fields, but the basic
argument for that is greppability. That comment serves the purpose,
so I don't feel a big need to change it.
regards, tom lane
On Thu, Apr 13, 2023 at 10:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Richard Guo <guofenglinux@gmail.com> writes:
* For the diff in standard_planner, I was wondering why not move the
initPlans up to the Gather node, just as we did before. So I tried that
way but did not notice the breakage of regression tests as stated in the
comments. Would you please confirm that?
Try it with debug_parallel_query = regress.
Ah, I see. With DEBUG_PARALLEL_REGRESS the initPlans that move to the
Gather would become invisible along with the Gather node.
As I tried this, I found that the breakage caused by moving the
initPlans to the Gather node might be more than just cosmetic.
Sometimes it may cause wrong results. As an example, consider:
create table a (i int, j int);
insert into a values (1, 1);
create index on a(i, j);
set enable_seqscan to off;
set debug_parallel_query to on;
# select min(i) from a;
min
-----
0
(1 row)
As we can see, the result is not correct. And the plan looks like
# explain (verbose, costs off) select min(i) from a;
                        QUERY PLAN
-----------------------------------------------------------
 Gather
   Output: ($0)
   Workers Planned: 1
   Single Copy: true
   InitPlan 1 (returns $0)
     ->  Limit
           Output: a.i
           ->  Index Only Scan using a_i_j_idx on public.a
                 Output: a.i
                 Index Cond: (a.i IS NOT NULL)
   ->  Result
         Output: $0
(12 rows)
The initPlan has been moved from the Result node to the Gather node. As
a result, when doing tuple projection for the Result node, we get a
ParamExecData entry with a NULL execPlan, so the initPlan never gets a
chance to be executed, and the output we see is the default value from
the ParamExecData entry, which is zero as shown.
So now I begin to wonder whether this wrong-results issue could also
arise in other places where we move initPlans, but I haven't tried hard
to verify that.
Thanks
Richard
On Mon, Apr 17, 2023 at 10:57 AM Richard Guo <guofenglinux@gmail.com> wrote:
The initPlan has been moved from the Result node to the Gather node. As
a result, when doing tuple projection for the Result node, we'd get a
ParamExecData entry with a NULL execPlan. So the initPlan does not get a
chance to be executed. And we'd get the output as the default value
from the ParamExecData entry, which is zero as shown.
So now I begin to wonder if this wrong result issue is possible to exist
in other places where we move initPlans. But I haven't tried hard to
verify that.
I looked further into this issue and I believe the other places are fine.
The problem with this query is that the es_param_exec_vals array used to
store info about the initplan is not the same one referenced by the
Result node's expression context (ecxt_param_exec_vals) for projection,
because we've forked a new process for the parallel worker, created and
initialized a new EState, and allocated a new es_param_exec_vals array
for that EState. When doing projection for the Result node, the current
code just goes ahead and accesses the new es_param_exec_vals, and thus
fails to retrieve the info about the initplan. Hmm, I doubt this is
sensible.
So now it seems that the breakage of regression tests is more severe
than being cosmetic. I wonder if we need to update the comments to
indicate the potential wrong results issue if we move the initPlans to
the Gather node.
Thanks
Richard
Richard Guo <guofenglinux@gmail.com> writes:
So now it seems that the breakage of regression tests is more severe
than being cosmetic. I wonder if we need to update the comments to
indicate the potential wrong results issue if we move the initPlans to
the Gather node.
I wondered about that too, but how come neither of us saw non-cosmetic
failures (ie, actual query output changes not just EXPLAIN changes)
when we tried this? Maybe the case is somehow not exercised, but if
so I'm more worried about adding regression tests than comments.
I think actually that it does work beyond the EXPLAIN weirdness,
because since e89a71fb4 the Gather machinery knows how to transmit
the values of Params listed in Gather.initParam to workers, and that
is filled in setrefs.c in a way that looks like it'd work regardless
of whether the Gather appeared organically or was stuck on by the
debug_parallel_query hackery. I've not tried to verify that
directly though.
regards, tom lane
On Mon, Apr 17, 2023 at 11:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Richard Guo <guofenglinux@gmail.com> writes:
So now it seems that the breakage of regression tests is more severe
than being cosmetic. I wonder if we need to update the comments to
indicate the potential wrong results issue if we move the initPlans to
the Gather node.
I wondered about that too, but how come neither of us saw non-cosmetic
failures (ie, actual query output changes not just EXPLAIN changes)
when we tried this? Maybe the case is somehow not exercised, but if
so I'm more worried about adding regression tests than comments.
Sorry I forgot to mention that I did see query output changes after
moving the initPlans to the Gather node. First of all let me make sure
I was doing it the right way. On top of the patch, I used the diff
below:
 	if (debug_parallel_query != DEBUG_PARALLEL_OFF &&
-		top_plan->parallel_safe && top_plan->initPlan == NIL)
+		top_plan->parallel_safe)
 	{
 		Gather	   *gather = makeNode(Gather);
+		gather->plan.initPlan = top_plan->initPlan;
+		top_plan->initPlan = NIL;
+
 		gather->plan.targetlist = top_plan->targetlist;
Then I changed the default value of debug_parallel_query to
DEBUG_PARALLEL_REGRESS, ran 'make installcheck', and saw the query
output changes.
I think actually that it does work beyond the EXPLAIN weirdness,
because since e89a71fb4 the Gather machinery knows how to transmit
the values of Params listed in Gather.initParam to workers, and that
is filled in setrefs.c in a way that looks like it'd work regardless
of whether the Gather appeared organically or was stuck on by the
debug_parallel_query hackery. I've not tried to verify that
directly though.
It seems that in this case the top_plan does not have any extParam, so
the Gather node that is added atop the top_plan does not have a chance
to get its initParam filled in set_param_references().
Thanks
Richard
Richard Guo <guofenglinux@gmail.com> writes:
On Mon, Apr 17, 2023 at 11:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
I wondered about that too, but how come neither of us saw non-cosmetic
failures (ie, actual query output changes not just EXPLAIN changes)
when we tried this?
Sorry I forgot to mention that I did see query output changes after
moving the initPlans to the Gather node.
Hmm, my memory was just of seeing the EXPLAIN output changes, but
maybe those got my attention to the extent of missing the others.
It seems that in this case the top_plan does not have any extParam, so
the Gather node that is added atop the top_plan does not have a chance
to get its initParam filled in set_param_references().
Oh, so maybe we'd need to copy up extParam as well? But it's largely
moot, since I don't see a good way to avoid breaking the EXPLAIN
output.
regards, tom lane
On Tue, Apr 18, 2023 at 9:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Richard Guo <guofenglinux@gmail.com> writes:
It seems that in this case the top_plan does not have any extParam, so
the Gather node that is added atop the top_plan does not have a chance
to get its initParam filled in set_param_references().
Oh, so maybe we'd need to copy up extParam as well? But it's largely
moot, since I don't see a good way to avoid breaking the EXPLAIN
output.
Yeah, it seems breaking the EXPLAIN output is inevitable if we move the
initPlans to the Gather node. So maybe we need to keep the logic as in
the v1 patch, i.e. avoid adding a Gather node when top_plan has initPlans.
If we do so, I wonder if we need to explain the potential wrong results
issue in the comments.
Thanks
Richard
I wrote:
Richard Guo <guofenglinux@gmail.com> writes:
On Mon, Apr 17, 2023 at 11:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
I wondered about that too, but how come neither of us saw non-cosmetic
failures (ie, actual query output changes not just EXPLAIN changes)
when we tried this?
Sorry I forgot to mention that I did see query output changes after
moving the initPlans to the Gather node.
Hmm, my memory was just of seeing the EXPLAIN output changes, but
maybe those got my attention to the extent of missing the others.
I got around to trying this, and you are right, there are some wrong
query answers as well as EXPLAIN output changes. This mystified me
for a while, because it sure looks like e89a71fb4 should have made it
work.
It seems that in this case the top_plan does not have any extParam, so
the Gather node that is added atop the top_plan does not have a chance
to get its initParam filled in set_param_references().
Eventually I noticed that all the failing cases were instances of
optimizing MIN()/MAX() aggregates into indexscans, and then I figured
out what the problem is: we substitute Params for the optimized-away
Aggref nodes in setrefs.c, *after* SS_finalize_plan has been run.
That means we fail to account for those Params in extParam/allParam
sets. We've gotten away with that up to now because such Params
could only appear where Aggrefs can appear, which is only in top-level
(above scans and joins) nodes, which generally don't have any of the
sorts of rescan optimizations that extParam/allParam bits control.
But this patch results in needing to have a correct extParam set for
the node just below Gather, and we don't. I am not sure whether there
are any reachable bugs without this patch; but there might be, or some
future optimization might introduce one.
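The substitution is easy to observe from SQL; this sketch mirrors
Richard's earlier repro upthread (table and index names are
illustrative):

```sql
-- MIN() over an indexed column gets planned as an initPlan doing
-- "index scan + LIMIT 1", and EXPLAIN VERBOSE shows the aggregate's
-- value surfacing as a PARAM_EXEC Param ($0).
CREATE TABLE mm (i int, j int);
CREATE INDEX ON mm (i, j);
INSERT INTO mm SELECT i, i FROM generate_series(1, 1000) i;
ANALYZE mm;

EXPLAIN (VERBOSE, COSTS OFF) SELECT min(i) FROM mm;
-- Expect something like "InitPlan 1 (returns $0)" over a Limit plus an
-- Index Only Scan; that $0 is the Param substituted during setrefs.c.
```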
It seems like the cleanest fix for this is to replace such optimized
Aggrefs in a separate tree scan before running SS_finalize_plan.
That's fairly annoying from a planner-runtime standpoint, although
we could skip the extra pass in the typical case where no minmax aggs
have been optimized.
I also thought about swapping the order of operations so that we
run SS_finalize_plan after setrefs.c. That falls down because of
set_param_references itself, which requires those bits to be
calculated already. But maybe we could integrate that computation
into SS_finalize_plan instead? There's certainly nothing very
pretty about the way it's done now.
A band-aid fix that seemed to work is to have set_param_references
consult the Gather's own allParam set instead of the extParam set
of its child. That feels like a kluge though, and it would not
help matters for any future bug involving another usage of those
bitmapsets.
BTW, there is another way in which setrefs.c can inject PARAM_EXEC
Params: it can translate PARAM_MULTIEXPR Params into those. So
those won't be accounted for either. I think this is probably
not a problem, especially not after 87f3667ec got us out of the
business of treating those like initPlan outputs. But it does
seem like "you can't inject PARAM_EXEC Params during setrefs.c"
would not be a workable coding rule; it's too tempting to do
exactly that.
So at this point my inclination is to try to move SS_finalize_plan
to run after setrefs.c, but I've not written any code yet. I'm
not sure if we'd need to back-patch that, but it at least seems
like important future-proofing.
None of this would lead me to want to move initPlans to
Gather nodes injected by debug_parallel_query, though.
We'd have to kluge something to keep the EXPLAIN output
looking the same, and that seems like a kluge too many.
What I am wondering is if the issue is reachable for
Gather nodes that are built organically by the regular
planner paths. It seems like that might be the case,
either now or after applying this patch.
regards, tom lane
I wrote:
Eventually I noticed that all the failing cases were instances of
optimizing MIN()/MAX() aggregates into indexscans, and then I figured
out what the problem is: we substitute Params for the optimized-away
Aggref nodes in setrefs.c, *after* SS_finalize_plan has been run.
That means we fail to account for those Params in extParam/allParam
sets. We've gotten away with that up to now because such Params
could only appear where Aggrefs can appear, which is only in top-level
(above scans and joins) nodes, which generally don't have any of the
sorts of rescan optimizations that extParam/allParam bits control.
But this patch results in needing to have a correct extParam set for
the node just below Gather, and we don't. I am not sure whether there
are any reachable bugs without this patch; but there might be, or some
future optimization might introduce one.
It seems like the cleanest fix for this is to replace such optimized
Aggrefs in a separate tree scan before running SS_finalize_plan.
That's fairly annoying from a planner-runtime standpoint, although
we could skip the extra pass in the typical case where no minmax aggs
have been optimized.
I also thought about swapping the order of operations so that we
run SS_finalize_plan after setrefs.c. That falls down because of
set_param_references itself, which requires those bits to be
calculated already. But maybe we could integrate that computation
into SS_finalize_plan instead? There's certainly nothing very
pretty about the way it's done now.
I tried both of those and concluded they'd be too messy for a patch
that we might find ourselves having to back-patch. So 0001 attached
fixes it by teaching SS_finalize_plan to treat optimized MIN()/MAX()
aggregates as if they were already Params. It's slightly annoying
to have knowledge of that optimization metastasizing into another
place, but the alternatives are even less palatable.
Having done that, if you adjust 0002 to inject Gathers even when
debug_parallel_query = regress, the only diffs in the core regression
tests are that some initPlans disappear from EXPLAIN output. The
outputs of the actual queries are still correct, demonstrating that
e89a71fb4 does indeed make it work as long as the param bitmapsets
are correct.
I'm still resistant to the idea of kluging EXPLAIN to the extent
of hiding the EXPLAIN output changes. It wouldn't be that hard
to do really, but I worry that such a kluge might hide real problems
in future. So what I did in 0002 was to allow initPlans for an
injected Gather only if debug_parallel_query = on, so that there
will be a place for EXPLAIN to show them. Other than the changes
in that area, 0002 is the same as the previous patch.
regards, tom lane
Attachments:
v2-0001-Account-for-optimized-MinMax-aggregates-during-SS.patch (text/x-diff, +66/-30)
v2-0002-Allow-plan-nodes-with-initPlans-to-be-considered-.patch (text/x-diff, +169/-72)
On Fri, Jul 14, 2023 at 5:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
I tried both of those and concluded they'd be too messy for a patch
that we might find ourselves having to back-patch. So 0001 attached
fixes it by teaching SS_finalize_plan to treat optimized MIN()/MAX()
aggregates as if they were already Params. It's slightly annoying
to have knowledge of that optimization metastasizing into another
place, but the alternatives are even less palatable.
I tried with the 0001 patch and can confirm that the wrong-result issue
shown in [1] is fixed.
explain (costs off, verbose) select min(i) from a;
                        QUERY PLAN
-----------------------------------------------------------
 Gather
   Output: ($0)
   Workers Planned: 1
   Params Evaluated: $0        <==== initplan params
   Single Copy: true
   InitPlan 1 (returns $0)
     ->  Limit
           Output: a.i
           ->  Index Only Scan using a_i_j_idx on public.a
                 Output: a.i
                 Index Cond: (a.i IS NOT NULL)
   ->  Result
         Output: $0
(13 rows)
Now the Gather.initParam is filled and e89a71fb4 does its work to
transmit the Params to workers.
So +1 to 0001 patch.
I'm still resistant to the idea of kluging EXPLAIN to the extent
of hiding the EXPLAIN output changes. It wouldn't be that hard
to do really, but I worry that such a kluge might hide real problems
in future. So what I did in 0002 was to allow initPlans for an
injected Gather only if debug_parallel_query = on, so that there
will be a place for EXPLAIN to show them. Other than the changes
in that area, 0002 is the same as the previous patch.
Also +1 to 0002 patch.
[1] /messages/by-id/CAMbWs48p-WpnLdR9ZQ4QsHZP_a-P0rktAYo4Z3uOHUAkH3fjQg@mail.gmail.com
Thanks
Richard
Richard Guo <guofenglinux@gmail.com> writes:
So +1 to 0001 patch.
Also +1 to 0002 patch.
Pushed, thanks for looking at it!
regards, tom lane
On 12/04/2023 at 20:06, Robert Haas wrote:
There's only one existing test case that visibly changes plan with
these changes. The new plan is clearly saner-looking than before,
and testing with some data loaded into the table confirms that it
is faster. I'm not sure if it's worth devising more test cases.
It seems like it would be nice to see one or two additional scenarios
where these changes bring a benefit, with different kinds of plan
shapes.
Hi,
Currently working on illustrating some points in the v17 release notes,
I'm trying to come up with a sexier scenario than the test case, but it
seems that with a non-trivial InitPlan (2nd explain below), we still
have a non-parallel Append node at the top:
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;
SET min_parallel_table_scan_size = 10;
CREATE TABLE foo (a int) PARTITION by LIST(a);
CREATE TABLE foo_0 PARTITION OF foo FOR VALUES IN (0);
CREATE TABLE foo_1 PARTITION OF foo FOR VALUES IN (1);
EXPLAIN (COSTS OFF)
SELECT * FROM foo WHERE a = (SELECT 2)
UNION ALL
SELECT * FROM foo WHERE a = 0;
                      QUERY PLAN
-----------------------------------------------------
 Gather
   Workers Planned: 2
   ->  Parallel Append
         ->  Parallel Append
               InitPlan 1
                 ->  Result
               ->  Parallel Seq Scan on foo_0 foo_1
                     Filter: (a = (InitPlan 1).col1)
               ->  Parallel Seq Scan on foo_1 foo_2
                     Filter: (a = (InitPlan 1).col1)
         ->  Parallel Seq Scan on foo_0 foo_3
               Filter: (a = 0)
EXPLAIN (COSTS OFF)
SELECT * FROM foo WHERE a = (SELECT max(a) FROM foo)
UNION ALL
SELECT * FROM foo WHERE a = 0;
                               QUERY PLAN
------------------------------------------------------------------------
 Append
   ->  Gather
         Workers Planned: 2
         InitPlan 1
           ->  Finalize Aggregate
                 ->  Gather
                       Workers Planned: 2
                       ->  Partial Aggregate
                             ->  Parallel Append
                                   ->  Parallel Seq Scan on foo_0 foo_5
                                   ->  Parallel Seq Scan on foo_1 foo_6
         ->  Parallel Append
               ->  Parallel Seq Scan on foo_0 foo_1
                     Filter: (a = (InitPlan 1).col1)
               ->  Parallel Seq Scan on foo_1 foo_2
                     Filter: (a = (InitPlan 1).col1)
   ->  Gather
         Workers Planned: 1
         ->  Parallel Seq Scan on foo_0 foo_3
               Filter: (a = 0)
Did I miss something?
Best regards,
Frédéric