cached plans and enable_partition_pruning

Started by Amit Langote over 7 years ago · 5 messages
#1 Amit Langote
Langote_Amit_f8@lab.ntt.co.jp
1 attachment(s)

It seems that because enable_partition_pruning's value is only checked
during planning, turning it off *after* a plan is created and cached does
not work as expected.

create table p (a int) partition by list (a);
create table p1 partition of p for values in (1);
create table p2 partition of p for values in (2);

-- force a generic plan so that run-time pruning is used in the plan
reset enable_partition_pruning;
set plan_cache_mode to force_generic_plan;
prepare p as select * from p where a = $1;

explain (costs off, analyze) execute p (1);
                           QUERY PLAN
────────────────────────────────────────────────────────────────
 Append (actual time=0.079..0.106 rows=1 loops=1)
   Subplans Removed: 1
   ->  Seq Scan on p2 (actual time=0.058..0.068 rows=1 loops=1)
         Filter: (a = $1)
 Planning Time: 17.573 ms
 Execution Time: 0.396 ms
(6 rows)

set enable_partition_pruning to off;

explain (costs off, analyze) execute p (1);
                           QUERY PLAN
────────────────────────────────────────────────────────────────
 Append (actual time=0.108..0.135 rows=1 loops=1)
   Subplans Removed: 1
   ->  Seq Scan on p2 (actual time=0.017..0.028 rows=1 loops=1)
         Filter: (a = $1)
 Planning Time: 0.042 ms
 Execution Time: 0.399 ms
(6 rows)

Pruning still occurs, whereas one would expect it not to, because the plan
(the Append node) contains run-time pruning information, which was
initialized because enable_partition_pruning was turned on when the plan
was created.
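
If the executor also honored the setting, one would instead expect the cached
plan to scan all partitions once the GUC is turned off, along these lines (a
sketch of the expected plan shape, not actual output):

explain (costs off) execute p (1);
        QUERY PLAN
──────────────────────────
 Append
   ->  Seq Scan on p1
         Filter: (a = $1)
   ->  Seq Scan on p2
         Filter: (a = $1)
(5 rows)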

Should we check its value during execution too, as done in the attached?

Thanks,
Amit

Attachments:

check-enable_partition_pruning-in-executor.patch (text/plain; charset=UTF-8)
diff --git a/src/backend/executor/nodeAppend.c b/src/backend/executor/nodeAppend.c
index 1fc55e90d7..d67abed330 100644
--- a/src/backend/executor/nodeAppend.c
+++ b/src/backend/executor/nodeAppend.c
@@ -61,6 +61,7 @@
 #include "executor/execPartition.h"
 #include "executor/nodeAppend.h"
 #include "miscadmin.h"
+#include "optimizer/cost.h"
 
 /* Shared state for parallel-aware Append. */
 struct ParallelAppendState
@@ -128,8 +129,12 @@ ExecInitAppend(Append *node, EState *estate, int eflags)
 	/* Let choose_next_subplan_* function handle setting the first subplan */
 	appendstate->as_whichplan = INVALID_SUBPLAN_INDEX;
 
-	/* If run-time partition pruning is enabled, then set that up now */
-	if (node->part_prune_info != NULL)
+	/*
+	 * If run-time partition pruning is enabled, then set that up now.
+	 * However, enable_partition_pruning may have been turned off since the
+	 * plan was created, so recheck its value.
+	 */
+	if (node->part_prune_info != NULL && enable_partition_pruning)
 	{
 		PartitionPruneState *prunestate;
 
diff --git a/src/backend/executor/nodeMergeAppend.c b/src/backend/executor/nodeMergeAppend.c
index c7eb18e546..5252930b17 100644
--- a/src/backend/executor/nodeMergeAppend.c
+++ b/src/backend/executor/nodeMergeAppend.c
@@ -43,6 +43,7 @@
 #include "executor/nodeMergeAppend.h"
 #include "lib/binaryheap.h"
 #include "miscadmin.h"
+#include "optimizer/cost.h"
 
 /*
  * We have one slot for each item in the heap array.  We use SlotNumber
@@ -89,8 +90,12 @@ ExecInitMergeAppend(MergeAppend *node, EState *estate, int eflags)
 	mergestate->ps.ExecProcNode = ExecMergeAppend;
 	mergestate->ms_noopscan = false;
 
-	/* If run-time partition pruning is enabled, then set that up now */
-	if (node->part_prune_info != NULL)
+	/*
+	 * If run-time partition pruning is enabled, then set that up now.
+	 * However, enable_partition_pruning may have been turned off since the
+	 * plan was created, so recheck its value.
+	 */
+	if (node->part_prune_info != NULL && enable_partition_pruning)
 	{
 		PartitionPruneState *prunestate;
 
#2 Andres Freund
andres@anarazel.de
In reply to: Amit Langote (#1)
Re: cached plans and enable_partition_pruning

Hi,

On 2018-07-23 18:31:43 +0900, Amit Langote wrote:

> It seems that because enable_partition_pruning's value is only checked
> during planning, turning it off *after* a plan is created and cached does
> not work as expected.
>
> [ ... ]
>
> Pruning still occurs, whereas one would expect it not to, because the plan
> (the Append node) contains run-time pruning information, which was
> initialized because enable_partition_pruning was turned on when the plan
> was created.
>
> Should we check its value during execution too, as done in the attached?

I think it's correct to check the plan time value, rather than the
execution time value. Other enable_* GUCs also take effect there, and I
don't see a problem with that?
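
For instance, a generic plan cached with enable_seqscan on keeps its seq scans
after the GUC is flipped (a minimal sketch, assuming the same table p as above):

set plan_cache_mode to force_generic_plan;
prepare q as select * from p where a = $1;
explain (costs off) execute q (1);   -- generic plan with Seq Scans on the partitions
set enable_seqscan to off;
explain (costs off) execute q (1);   -- same cached plan; the setting only affects new plans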

Greetings,

Andres Freund

#3 Amit Langote
amitlangote09@gmail.com
In reply to: Andres Freund (#2)
Re: cached plans and enable_partition_pruning

On Mon, Jul 23, 2018 at 11:20 PM, Andres Freund <andres@anarazel.de> wrote:

> Hi,
>
> On 2018-07-23 18:31:43 +0900, Amit Langote wrote:
>> It seems that because enable_partition_pruning's value is only checked
>> during planning, turning it off *after* a plan is created and cached does
>> not work as expected.
>>
>> [ ... ]
>>
>> Should we check its value during execution too, as done in the attached?
>
> I think it's correct to check the plan time value, rather than the
> execution time value. Other enable_* GUCs also take effect there, and I
> don't see a problem with that?

Ah, so that may have been intentional. Still, I wonder if
enable_partition_pruning could be made to work differently from other
enable_* settings, because we *can* perform pruning, which is an
optimization, even during execution, whereas we cannot modify the plan
in the other cases?
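
Right now the only way to make the off setting stick for this query is to plan
it again, e.g. by re-preparing it (a rough sketch, continuing the session from
my first message):

set enable_partition_pruning to off;
deallocate p;
prepare p as select * from p where a = $1;
explain (costs off) execute p (1);   -- Append now lists both p1 and p2, with no "Subplans Removed" line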

Thanks,
Amit

#4 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Amit Langote (#3)
Re: cached plans and enable_partition_pruning

On 2018-Jul-24, Amit Langote wrote:

> On Mon, Jul 23, 2018 at 11:20 PM, Andres Freund <andres@anarazel.de> wrote:
>> I think it's correct to check the plan time value, rather than the
>> execution time value. Other enable_* GUCs also take effect there, and I
>> don't see a problem with that?
>
> Ah, so that may have been intentional. Still, I wonder if
> enable_partition_pruning could be made to work differently from other
> enable_* settings, because we *can* perform pruning, which is an
> optimization, even during execution, whereas we cannot modify the plan
> in the other cases?

Well, let's discuss the use case for doing that. We introduced the GUC
to cover for the case of bugs in the pruning code (and even then there
were people saying we should remove it). Why would you have the GUC
turned on during planning but off during execution?

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#5 Andres Freund
andres@anarazel.de
In reply to: Alvaro Herrera (#4)
Re: cached plans and enable_partition_pruning

On 2018-07-23 12:03:32 -0400, Alvaro Herrera wrote:

> On 2018-Jul-24, Amit Langote wrote:
>> On Mon, Jul 23, 2018 at 11:20 PM, Andres Freund <andres@anarazel.de> wrote:
>>> I think it's correct to check the plan time value, rather than the
>>> execution time value. Other enable_* GUCs also take effect there, and I
>>> don't see a problem with that?
>>
>> Ah, so that may have been intentional. Still, I wonder if
>> enable_partition_pruning could be made to work differently from other
>> enable_* settings, because we *can* perform pruning, which is an
>> optimization, even during execution, whereas we cannot modify the plan
>> in the other cases?
>
> Well, let's discuss the use case for doing that. We introduced the GUC
> to cover for the case of bugs in the pruning code (and even then there
> were people saying we should remove it). Why would you have the GUC
> turned on during planning but off during execution?

I think it's even more than that: it wouldn't be consistent to take it into
account at execution time, and there would have to be very convincing
reasons to behave differently.

Greetings,

Andres Freund