JIT performance bug/regression & JIT EXPLAIN
Hi,
Unfortunately I found a performance regression for JITed query
compilation introduced in 12, compared to 11. Fixed in one of the
attached patches (v1-0009-Fix-determination-when-tuple-deforming-can-be-JIT.patch
- which needs a better commit message).
The first question is when to push that fix. I'm inclined to just do so
now - as we still do JITed tuple deforming in most cases, as well as
doing so in 11 in the places this patch fixes, the risk of that seems
low. But I can also see an argument for waiting until after 12.0.
For me the bigger question is about how to make sure we can write tests
determining which parts of the querytree are JIT compiled and which are
not. There's the above bug, and I'm also hunting a regression introduced
somewhere during 11's lifetime, which suggests to me that we need better
coverage. I also want to add new JIT logic, making this even more
important.
The reason that 11 didn't have tests verifying that certain parts of the
plan tree are JIT compiled is that EXPLAIN doesn't currently show the
relevant information, and it's not that trivial to do so.
What I'd like to do is to add new, presumably optional, output to
EXPLAIN showing additional information about expressions.
There are two major parts to doing so:
1) Find a way to represent the additional information attached to
expressions, and provide show_expression et al with the ExprState to
be able to do so. The additional information I think is necessary is
a) is the expression JIT compiled
b-d) is scan/outer/inner tuple deforming necessary, and if so, is it
JIT compiled.
We can't unconditionally JIT compile for tuple deforming, because
there's a number of cases where the source slot doesn't have
precisely the same tuple desc, and/or doesn't have the same type.
2) Expand EXPLAIN output to show expressions that currently aren't
shown. Performance-wise, the most critical ones that aren't currently
visible (that I know of) are:
- Agg's combined transition function; we also currently don't display
in any understandable way how many passes over the input we do (for
grouping sets), nor how much memory is needed.
- Agg's hash comparator (separate regression referenced above)
- Hash/HashJoin's hashkeys/hjclauses
For 1) I think we need to change show_expression()/show_qual() etc. to
also pass down the corresponding ExprState if available (it's not
available in plenty of cases, most of which are not particularly
important). That's fairly mechanical.
Then we need to add information about JIT to individual expressions. In
the attached WIP patchset I've made that dependent on the new
"jit_details" EXPLAIN option. When specified, new per-expression
information is shown:
- JIT-Expr: whether the expression was JIT compiled (might e.g. not be
the case because no parent was provided)
- JIT-Deform-{Scan,Outer,Inner}: whether necessary, and whether JIT accelerated.
I don't like these names much, but ...
For the deform cases I chose to display
a) the function name if JIT compiled
b) "false" if the expression is JIT compiled, deforming is
necessary, but deforming is not JIT compiled (e.g. because the slot type
wasn't fixed)
c) "null" if not necessary, with that being omitted in text mode.
So, e.g., in JSON format this looks like:
"Filter": {
  "Expr": "(lineitem.l_shipdate <= '1998-09-18 00:00:00'::timestamp without time zone)",
  "JIT-Expr": "evalexpr_0_2",
  "JIT-Deform-Scan": "deform_0_3",
  "JIT-Deform-Outer": null,
  "JIT-Deform-Inner": null
}
and in text mode:
Filter: (lineitem.l_shipdate <= '1998-09-18 00:00:00'::timestamp without time zone); JIT-Expr: evalexpr_0_2, JIT-Deform-Scan: deform_0_3
For now I chose to make Filter a group only when both conditions hold:
not in text mode, and jit_details enabled - otherwise it's unclear what
the JIT fields would apply to. But that's pretty crappy, because it
means that the 'shape' of the output depends on the jit_details option.
I think if we were starting from scratch it'd make sense to always have
the Expression as its own sub-node, so interpreting code doesn't have
to know all the places an expression can be referenced from. But it's
probably not too attractive to change that today?
Somewhat independently, the series also contains a patch that renames
verbose mode's "Output" to "Project" if the node projects. I find it
pretty hard to interpret whether a node projects otherwise, and it's
confusing when jit_details shows details only for some nodes' Output,
but not for others. But the compat break due to that change is not small
- perhaps we could instead mark that in another way?
For 2) I've only started to improve the situation, but what remains is
a fair number of pretty crucial pieces.
I first focused on adding information for Agg nodes, as a) those are
typically performance sensitive in cases where JIT is beneficial b) the
current instrumentation is really insufficient, especially in cases
where multiple grouping sets are computed at the same time - I think
it's effectively not interpretable.
In verbose mode, EXPLAIN now shows per-phase output about the transition
computation. E.g. for a grouping set query that can't be computed in one
pass, it now displays something like
MixedAggregate  (cost=6083420.07..14022888.98 rows=10011685 width=64)
  Project: avg((l_linenumber)::bigint), count((l_partkey)::bigint), sum(l_quantity), l_linenumber, l_partkey, l_quantity
  Filter: (sum(lineitem.l_quantity) IS NOT NULL)
  Phase 2 using strategy "Sort":
    Sort Key: lineitem.l_partkey, lineitem.l_quantity
    Transition Function: 2 * int8_avg_accum(TRANS, (l_linenumber)::bigint), 2 * int8inc_any(TRANS, (l_partkey)::bigint), 2 * float8pl(TRANS, l_quantity)
    Sorted Group: lineitem.l_partkey, lineitem.l_quantity
    Sorted Group: lineitem.l_partkey
  Phase 1 using strategy "Sorted Input & All & Hash":
    Transition Function: 6 * int8_avg_accum(TRANS, (l_linenumber)::bigint), 6 * int8inc_any(TRANS, (l_partkey)::bigint), 6 * float8pl(TRANS, l_quantity)
    Sorted Input Group: lineitem.l_linenumber, lineitem.l_partkey, lineitem.l_quantity
    Sorted Input Group: lineitem.l_linenumber, lineitem.l_partkey
    Sorted Input Group: lineitem.l_linenumber
    All Group
    Hash Group: lineitem.l_quantity
    Hash Group: lineitem.l_quantity, lineitem.l_linenumber
  ->  Sort  (cost=6083420.07..6158418.50 rows=29999372 width=16)
        ...
The N * indicates how many of the same transition functions are computed
during that phase.
I'm not sure that 'TRANS' is the best placeholder for the transition
value here. Maybe $TRANS would be clearer?
For a parallel aggregate the upper level looks like:
Finalize HashAggregate  (cost=610681.93..610682.02 rows=9 width=16)
  Project: l_tax, sum(l_quantity)
  Phase 0 using strategy "Hash":
    Transition Function: float8pl(TRANS, (PARTIAL sum(l_quantity)))
    Hash Group: lineitem.l_tax
  ->  Gather  (cost=610677.11..610681.70 rows=45 width=16)
        Output: l_tax, (PARTIAL sum(l_quantity))
        Workers Planned: 5
        ->  Partial HashAggregate  (cost=609677.11..609677.20 rows=9 width=16)
              Project: l_tax, PARTIAL sum(l_quantity)
I've not done that yet, but I think it's way past time that we also add
memory usage information to Aggregate nodes (both for the hashtable(s),
and for internal sorts if those are performed for grouping sets) - which
would also be very hard in the "current" format, as there's no
representation of passes.
With jit_details enabled, we then can show information about the
aggregation function, and grouping functions:
Phase 0 using strategy "Hash":
  Transition Function: float8pl(TRANS, (PARTIAL sum(l_quantity))); JIT-Expr: evalexpr_0_11, JIT-Deform-Outer: false
  Hash Group: lineitem.l_tax; JIT-Expr: evalexpr_0_8, JIT-Deform-Outer: deform_0_10, JIT-Deform-Inner: deform_0_9
Currently the "new" format is used when either grouping sets are in use
(as the previous explain output was not particularly useful, and
information about the passes is important), or if VERBOSE or JIT_DETAILS
are specified.
For HashJoin/Hash I've added 'Outer Hash Key' and 'Hash Key' for each
key, but only in verbose mode. That's somewhat important, as for
HashJoins those are currently often the performance-critical bit: they
will commonly be the expressions that deform the slots from below. That
display is somewhat redundant with HashJoin's "Hash Cond", but they're
evaluated separately. Under verbose that seems OK to me.
With jit_details enabled, the output looks e.g. like this:
Hash Join  (cost=271409.60..2326739.51 rows=30000584 width=250)
  Project: lineitem.l_orderkey, lineitem.l_partkey, lineitem.l_suppkey, lineitem.l_linenumber, lineitem.l_quantity, lineitem.l_extendedprice, lineitem.l_discount, lineitem.l_tax,
  Inner Unique: true
  Hash Cond: ((lineitem.l_partkey = partsupp.ps_partkey) AND (lineitem.l_suppkey = partsupp.ps_suppkey)); JIT-Expr: evalexpr_0_7, JIT-Deform-Outer: deform_0_9, JIT-Deform-Inner:
  Outer Hash Key: lineitem.l_partkey; JIT-Expr: evalexpr_0_10, JIT-Deform-Outer: deform_0_11
  Outer Hash Key: lineitem.l_suppkey; JIT-Expr: evalexpr_0_12, JIT-Deform-Outer: deform_0_13
  ->  Seq Scan on public.lineitem  (cost=0.00..819684.84 rows=30000584 width=106)
        Output: lineitem.l_orderkey, lineitem.l_partkey, lineitem.l_suppkey, lineitem.l_linenumber, lineitem.l_quantity, lineitem.l_extendedprice, lineitem.l_discount, lineitem.l
  ->  Hash  (cost=129384.24..129384.24 rows=3999824 width=144)
        Output: partsupp.ps_partkey, partsupp.ps_suppkey, partsupp.ps_availqty, partsupp.ps_supplycost, partsupp.ps_comment
        Hash Key: partsupp.ps_partkey; JIT-Expr: evalexpr_0_0, JIT-Deform-Outer: deform_0_1
        Hash Key: partsupp.ps_suppkey; JIT-Expr: evalexpr_0_2, JIT-Deform-Outer: deform_0_3
        ->  Seq Scan on public.partsupp  (cost=0.00..129384.24 rows=3999824 width=144)
              Output: partsupp.ps_partkey, partsupp.ps_suppkey, partsupp.ps_availqty, partsupp.ps_supplycost, partsupp.ps_comment
JIT:
  Functions: 14 (6 for expression evaluation, 8 for tuple deforming)
  Options: Inlining true, Optimization true, Expressions true, Deforming true
This also highlights the sad fact that we currently use a separate
ExprState to compute each of the hash keys, and then "manually" invoke
the hash function itself. That's bad both for interpreted execution, as
we repeatedly pay executor startup overhead and don't even hit the
fastpath, and for JITed execution, because we have more code to
optimize (some of it pretty redundant, in particular the deforming). In
both cases we suffer from the problem that we deform the tuple
incrementally.
A later patch in the series then uses the new explain output to add some
tests for JIT, and then fixes two bugs, showing that the test output
changes.
Additionally I've also included a small improvement to the expression
evaluation logic, which also changes output in the JIT test, as it
should.
Comments?
Greetings,
Andres Freund
Attachments:
v1-0011-Reduce-code-duplication-for-ExecJust-Var-operatio.patch (text/x-diff; charset=us-ascii)
From 5a43ae5cf476f0b6422b3c60ce860c5cda060da8 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Thu, 26 Sep 2019 11:14:46 -0700
Subject: [PATCH v1 11/12] Reduce code duplication for ExecJust*Var operations.
This is mainly in preparation for introducing a few additional
fastpath implementations.
Also reorder ExecJust*Var functions to be consistent with the order in
which they're used.
Author: Andres Freund
Discussion: https://postgr.es/m/CAE-ML+9OKSN71+mHtfMD-L24oDp8dGTfaVjDU6U+j+FNAW5kRQ@mail.gmail.com
---
src/backend/executor/execExprInterp.c | 94 ++++++++++-----------------
1 file changed, 35 insertions(+), 59 deletions(-)
diff --git a/src/backend/executor/execExprInterp.c b/src/backend/executor/execExprInterp.c
index 293bfb61c36..e876160a0e7 100644
--- a/src/backend/executor/execExprInterp.c
+++ b/src/backend/executor/execExprInterp.c
@@ -155,11 +155,11 @@ static void ExecEvalRowNullInt(ExprState *state, ExprEvalStep *op,
static Datum ExecJustInnerVar(ExprState *state, ExprContext *econtext, bool *isnull);
static Datum ExecJustOuterVar(ExprState *state, ExprContext *econtext, bool *isnull);
static Datum ExecJustScanVar(ExprState *state, ExprContext *econtext, bool *isnull);
-static Datum ExecJustConst(ExprState *state, ExprContext *econtext, bool *isnull);
static Datum ExecJustAssignInnerVar(ExprState *state, ExprContext *econtext, bool *isnull);
static Datum ExecJustAssignOuterVar(ExprState *state, ExprContext *econtext, bool *isnull);
static Datum ExecJustAssignScanVar(ExprState *state, ExprContext *econtext, bool *isnull);
static Datum ExecJustApplyFuncToCase(ExprState *state, ExprContext *econtext, bool *isnull);
+static Datum ExecJustConst(ExprState *state, ExprContext *econtext, bool *isnull);
/*
@@ -1966,13 +1966,12 @@ ShutdownTupleDescRef(Datum arg)
* Fast-path functions, for very simple expressions
*/
-/* Simple reference to inner Var */
-static Datum
-ExecJustInnerVar(ExprState *state, ExprContext *econtext, bool *isnull)
+/* implementation of ExecJust(Inner|Outer|Scan)Var */
+static pg_attribute_always_inline Datum
+ExecJustVarImpl(ExprState *state, TupleTableSlot *slot, bool *isnull)
{
ExprEvalStep *op = &state->steps[1];
int attnum = op->d.var.attnum + 1;
- TupleTableSlot *slot = econtext->ecxt_innertuple;
CheckOpSlotCompatibility(&state->steps[0], slot);
@@ -1984,52 +1983,34 @@ ExecJustInnerVar(ExprState *state, ExprContext *econtext, bool *isnull)
return slot_getattr(slot, attnum, isnull);
}
-/* Simple reference to outer Var */
+/* Simple reference to inner Var */
static Datum
-ExecJustOuterVar(ExprState *state, ExprContext *econtext, bool *isnull)
+ExecJustInnerVar(ExprState *state, ExprContext *econtext, bool *isnull)
{
- ExprEvalStep *op = &state->steps[1];
- int attnum = op->d.var.attnum + 1;
- TupleTableSlot *slot = econtext->ecxt_outertuple;
-
- CheckOpSlotCompatibility(&state->steps[0], slot);
-
- /* See comments in ExecJustInnerVar */
- return slot_getattr(slot, attnum, isnull);
+ return ExecJustVarImpl(state, econtext->ecxt_innertuple, isnull);
}
-/* Simple reference to scan Var */
+/* Simple reference to outer Var */
static Datum
-ExecJustScanVar(ExprState *state, ExprContext *econtext, bool *isnull)
+ExecJustOuterVar(ExprState *state, ExprContext *econtext, bool *isnull)
{
- ExprEvalStep *op = &state->steps[1];
- int attnum = op->d.var.attnum + 1;
- TupleTableSlot *slot = econtext->ecxt_scantuple;
-
- CheckOpSlotCompatibility(&state->steps[0], slot);
-
- /* See comments in ExecJustInnerVar */
- return slot_getattr(slot, attnum, isnull);
+ return ExecJustVarImpl(state, econtext->ecxt_outertuple, isnull);
}
-/* Simple Const expression */
+/* Simple reference to scan Var */
static Datum
-ExecJustConst(ExprState *state, ExprContext *econtext, bool *isnull)
+ExecJustScanVar(ExprState *state, ExprContext *econtext, bool *isnull)
{
- ExprEvalStep *op = &state->steps[0];
-
- *isnull = op->d.constval.isnull;
- return op->d.constval.value;
+ return ExecJustVarImpl(state, econtext->ecxt_scantuple, isnull);
}
-/* Evaluate inner Var and assign to appropriate column of result tuple */
-static Datum
-ExecJustAssignInnerVar(ExprState *state, ExprContext *econtext, bool *isnull)
+/* implementation of ExecJustAssign(Inner|Outer|Scan)Var */
+static pg_attribute_always_inline Datum
+ExecJustAssignVarImpl(ExprState *state, TupleTableSlot *inslot, bool *isnull)
{
ExprEvalStep *op = &state->steps[1];
int attnum = op->d.assign_var.attnum + 1;
int resultnum = op->d.assign_var.resultnum;
- TupleTableSlot *inslot = econtext->ecxt_innertuple;
TupleTableSlot *outslot = state->resultslot;
CheckOpSlotCompatibility(&state->steps[0], inslot);
@@ -2047,40 +2028,25 @@ ExecJustAssignInnerVar(ExprState *state, ExprContext *econtext, bool *isnull)
return 0;
}
+/* Evaluate inner Var and assign to appropriate column of result tuple */
+static Datum
+ExecJustAssignInnerVar(ExprState *state, ExprContext *econtext, bool *isnull)
+{
+ return ExecJustAssignVarImpl(state, econtext->ecxt_innertuple, isnull);
+}
+
/* Evaluate outer Var and assign to appropriate column of result tuple */
static Datum
ExecJustAssignOuterVar(ExprState *state, ExprContext *econtext, bool *isnull)
{
- ExprEvalStep *op = &state->steps[1];
- int attnum = op->d.assign_var.attnum + 1;
- int resultnum = op->d.assign_var.resultnum;
- TupleTableSlot *inslot = econtext->ecxt_outertuple;
- TupleTableSlot *outslot = state->resultslot;
-
- CheckOpSlotCompatibility(&state->steps[0], inslot);
-
- /* See comments in ExecJustAssignInnerVar */
- outslot->tts_values[resultnum] =
- slot_getattr(inslot, attnum, &outslot->tts_isnull[resultnum]);
- return 0;
+ return ExecJustAssignVarImpl(state, econtext->ecxt_outertuple, isnull);
}
/* Evaluate scan Var and assign to appropriate column of result tuple */
static Datum
ExecJustAssignScanVar(ExprState *state, ExprContext *econtext, bool *isnull)
{
- ExprEvalStep *op = &state->steps[1];
- int attnum = op->d.assign_var.attnum + 1;
- int resultnum = op->d.assign_var.resultnum;
- TupleTableSlot *inslot = econtext->ecxt_scantuple;
- TupleTableSlot *outslot = state->resultslot;
-
- CheckOpSlotCompatibility(&state->steps[0], inslot);
-
- /* See comments in ExecJustAssignInnerVar */
- outslot->tts_values[resultnum] =
- slot_getattr(inslot, attnum, &outslot->tts_isnull[resultnum]);
- return 0;
+ return ExecJustAssignVarImpl(state, econtext->ecxt_scantuple, isnull);
}
/* Evaluate CASE_TESTVAL and apply a strict function to it */
@@ -2120,6 +2086,16 @@ ExecJustApplyFuncToCase(ExprState *state, ExprContext *econtext, bool *isnull)
return d;
}
+/* Simple Const expression */
+static Datum
+ExecJustConst(ExprState *state, ExprContext *econtext, bool *isnull)
+{
+ ExprEvalStep *op = &state->steps[0];
+
+ *isnull = op->d.constval.isnull;
+ return op->d.constval.value;
+}
+
#if defined(EEO_USE_COMPUTED_GOTO)
/*
* Comparator used when building address->opcode lookup table for
--
2.23.0.162.gf1d4a28250
v1-0012-Don-t-generate-EEOP_-_FETCHSOME-operations-for-sl.patch (text/x-diff; charset=us-ascii)
From 95aca6bb90980272916ad58fb545ad11f773ecd5 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Thu, 26 Sep 2019 11:23:43 -0700
Subject: [PATCH v1 12/12] Don't generate EEOP_*_FETCHSOME operations for slots
known to be virtual.
That avoids unnecessary work during both interpreted and JIT compiled
expression evaluation.
Author: Soumyadeep Chakraborty, Andres Freund
Discussion: https://postgr.es/m/CAE-ML+9OKSN71+mHtfMD-L24oDp8dGTfaVjDU6U+j+FNAW5kRQ@mail.gmail.com
---
src/backend/executor/execExpr.c | 42 +++++---
src/backend/executor/execExprInterp.c | 133 +++++++++++++++++++++++++-
src/backend/jit/llvm/llvmjit_expr.c | 6 +-
src/test/regress/expected/jit.out | 2 +-
4 files changed, 160 insertions(+), 23 deletions(-)
diff --git a/src/backend/executor/execExpr.c b/src/backend/executor/execExpr.c
index ecaa3ed98f9..e137be14979 100644
--- a/src/backend/executor/execExpr.c
+++ b/src/backend/executor/execExpr.c
@@ -65,7 +65,7 @@ static void ExecInitFunc(ExprEvalStep *scratch, Expr *node, List *args,
static void ExecInitExprSlots(ExprState *state, Node *node);
static void ExecPushExprSlots(ExprState *state, LastAttnumInfo *info);
static bool get_last_attnums_walker(Node *node, LastAttnumInfo *info);
-static void ExecComputeSlotInfo(ExprState *state, ExprEvalStep *op);
+static bool ExecComputeSlotInfo(ExprState *state, ExprEvalStep *op);
static void ExecInitWholeRowVar(ExprEvalStep *scratch, Var *variable,
ExprState *state);
static void ExecInitSubscriptingRef(ExprEvalStep *scratch,
@@ -2285,8 +2285,8 @@ ExecPushExprSlots(ExprState *state, LastAttnumInfo *info)
scratch.d.fetch.fixed = false;
scratch.d.fetch.kind = NULL;
scratch.d.fetch.known_desc = NULL;
- ExecComputeSlotInfo(state, &scratch);
- ExprEvalPushStep(state, &scratch);
+ if (ExecComputeSlotInfo(state, &scratch))
+ ExprEvalPushStep(state, &scratch);
}
if (info->last_outer > 0)
{
@@ -2295,8 +2295,8 @@ ExecPushExprSlots(ExprState *state, LastAttnumInfo *info)
scratch.d.fetch.fixed = false;
scratch.d.fetch.kind = NULL;
scratch.d.fetch.known_desc = NULL;
- ExecComputeSlotInfo(state, &scratch);
- ExprEvalPushStep(state, &scratch);
+ if (ExecComputeSlotInfo(state, &scratch))
+ ExprEvalPushStep(state, &scratch);
}
if (info->last_scan > 0)
{
@@ -2305,8 +2305,8 @@ ExecPushExprSlots(ExprState *state, LastAttnumInfo *info)
scratch.d.fetch.fixed = false;
scratch.d.fetch.kind = NULL;
scratch.d.fetch.known_desc = NULL;
- ExecComputeSlotInfo(state, &scratch);
- ExprEvalPushStep(state, &scratch);
+ if (ExecComputeSlotInfo(state, &scratch))
+ ExprEvalPushStep(state, &scratch);
}
}
@@ -2364,8 +2364,10 @@ get_last_attnums_walker(Node *node, LastAttnumInfo *info)
* The goal is to determine whether a slot is 'fixed', that is, every
* evaluation of the expression will have the same type of slot, with an
* equivalent descriptor.
+ *
+ * Returns true if the deforming step is required, false otherwise.
*/
-static void
+static bool
ExecComputeSlotInfo(ExprState *state, ExprEvalStep *op)
{
PlanState *parent = state->parent;
@@ -2374,6 +2376,10 @@ ExecComputeSlotInfo(ExprState *state, ExprEvalStep *op)
bool isfixed = false;
ExprEvalOp opcode = op->opcode;
+ Assert(opcode == EEOP_INNER_FETCHSOME ||
+ opcode == EEOP_OUTER_FETCHSOME ||
+ opcode == EEOP_SCAN_FETCHSOME);
+
if (op->d.fetch.known_desc != NULL)
{
desc = op->d.fetch.known_desc;
@@ -2384,7 +2390,7 @@ ExecComputeSlotInfo(ExprState *state, ExprEvalStep *op)
{
isfixed = false;
}
- else if (op->opcode == EEOP_INNER_FETCHSOME)
+ else if (opcode == EEOP_INNER_FETCHSOME)
{
PlanState *is = innerPlanState(parent);
@@ -2404,7 +2410,7 @@ ExecComputeSlotInfo(ExprState *state, ExprEvalStep *op)
desc = ExecGetResultType(is);
}
}
- else if (op->opcode == EEOP_OUTER_FETCHSOME)
+ else if (opcode == EEOP_OUTER_FETCHSOME)
{
PlanState *os = outerPlanState(parent);
@@ -2424,7 +2430,7 @@ ExecComputeSlotInfo(ExprState *state, ExprEvalStep *op)
desc = ExecGetResultType(os);
}
}
- else if (op->opcode == EEOP_SCAN_FETCHSOME)
+ else if (opcode == EEOP_SCAN_FETCHSOME)
{
desc = parent->scandesc;
@@ -2448,12 +2454,18 @@ ExecComputeSlotInfo(ExprState *state, ExprEvalStep *op)
op->d.fetch.known_desc = NULL;
}
+ /* if the slot is known to always be virtual we never need to deform */
+ if (op->d.fetch.fixed && op->d.fetch.kind == &TTSOpsVirtual)
+ return false;
+
if (opcode == EEOP_INNER_FETCHSOME)
state->flags |= EEO_FLAG_DEFORM_INNER;
else if (opcode == EEOP_OUTER_FETCHSOME)
state->flags |= EEO_FLAG_DEFORM_OUTER;
else if (opcode == EEOP_SCAN_FETCHSOME)
state->flags |= EEO_FLAG_DEFORM_SCAN;
+
+ return true;
}
/*
@@ -3367,16 +3379,16 @@ ExecBuildGroupingEqual(TupleDesc ldesc, TupleDesc rdesc,
scratch.d.fetch.fixed = false;
scratch.d.fetch.known_desc = ldesc;
scratch.d.fetch.kind = lops;
- ExecComputeSlotInfo(state, &scratch);
- ExprEvalPushStep(state, &scratch);
+ if (ExecComputeSlotInfo(state, &scratch))
+ ExprEvalPushStep(state, &scratch);
scratch.opcode = EEOP_OUTER_FETCHSOME;
scratch.d.fetch.last_var = maxatt;
scratch.d.fetch.fixed = false;
scratch.d.fetch.known_desc = rdesc;
scratch.d.fetch.kind = rops;
- ExecComputeSlotInfo(state, &scratch);
- ExprEvalPushStep(state, &scratch);
+ if (ExecComputeSlotInfo(state, &scratch))
+ ExprEvalPushStep(state, &scratch);
/*
* Start comparing at the last field (least significant sort key). That's
diff --git a/src/backend/executor/execExprInterp.c b/src/backend/executor/execExprInterp.c
index e876160a0e7..ccea030cd70 100644
--- a/src/backend/executor/execExprInterp.c
+++ b/src/backend/executor/execExprInterp.c
@@ -160,6 +160,12 @@ static Datum ExecJustAssignOuterVar(ExprState *state, ExprContext *econtext, boo
static Datum ExecJustAssignScanVar(ExprState *state, ExprContext *econtext, bool *isnull);
static Datum ExecJustApplyFuncToCase(ExprState *state, ExprContext *econtext, bool *isnull);
static Datum ExecJustConst(ExprState *state, ExprContext *econtext, bool *isnull);
+static Datum ExecJustInnerVarVirt(ExprState *state, ExprContext *econtext, bool *isnull);
+static Datum ExecJustOuterVarVirt(ExprState *state, ExprContext *econtext, bool *isnull);
+static Datum ExecJustScanVarVirt(ExprState *state, ExprContext *econtext, bool *isnull);
+static Datum ExecJustAssignInnerVarVirt(ExprState *state, ExprContext *econtext, bool *isnull);
+static Datum ExecJustAssignOuterVarVirt(ExprState *state, ExprContext *econtext, bool *isnull);
+static Datum ExecJustAssignScanVarVirt(ExprState *state, ExprContext *econtext, bool *isnull);
/*
@@ -255,11 +261,45 @@ ExecReadyInterpretedExpr(ExprState *state)
return;
}
}
- else if (state->steps_len == 2 &&
- state->steps[0].opcode == EEOP_CONST)
+ else if (state->steps_len == 2)
{
- state->evalfunc_private = (void *) ExecJustConst;
- return;
+ ExprEvalOp step0 = state->steps[0].opcode;
+
+ if (step0 == EEOP_CONST)
+ {
+ state->evalfunc_private = (void *) ExecJustConst;
+ return;
+ }
+ else if (step0 == EEOP_INNER_VAR)
+ {
+ state->evalfunc_private = (void *) ExecJustInnerVarVirt;
+ return;
+ }
+ else if (step0 == EEOP_OUTER_VAR)
+ {
+ state->evalfunc_private = (void *) ExecJustOuterVarVirt;
+ return;
+ }
+ else if (step0 == EEOP_SCAN_VAR)
+ {
+ state->evalfunc_private = (void *) ExecJustScanVarVirt;
+ return;
+ }
+ else if (step0 == EEOP_ASSIGN_INNER_VAR)
+ {
+ state->evalfunc_private = (void *) ExecJustAssignInnerVarVirt;
+ return;
+ }
+ else if (step0 == EEOP_ASSIGN_OUTER_VAR)
+ {
+ state->evalfunc_private = (void *) ExecJustAssignOuterVarVirt;
+ return;
+ }
+ else if (step0 == EEOP_ASSIGN_SCAN_VAR)
+ {
+ state->evalfunc_private = (void *) ExecJustAssignScanVarVirt;
+ return;
+ }
}
#if defined(EEO_USE_COMPUTED_GOTO)
@@ -2096,6 +2136,91 @@ ExecJustConst(ExprState *state, ExprContext *econtext, bool *isnull)
return op->d.constval.value;
}
+/* implementation of ExecJust(Inner|Outer|Scan)VarVirt */
+static pg_attribute_always_inline Datum
+ExecJustVarVirtImpl(ExprState *state, TupleTableSlot *slot, bool *isnull)
+{
+ ExprEvalStep *op = &state->steps[0];
+ int attnum = op->d.var.attnum;
+
+ /*
+ * As it is guaranteed that a virtual slot is used, there never is a need
+ * to perform tuple deforming (nor would it be possible). Therefore
+ * execExpr.c has not emitted a EEOP_*_FETCHSOME step. Verify, as much as
+ * possible, that that determination was accurate.
+ */
+ Assert(slot->tts_ops == &TTSOpsVirtual);
+ Assert(TTS_FIXED(slot));
+ Assert(attnum >= 0 && attnum < slot->tts_nvalid);
+
+ *isnull = slot->tts_isnull[attnum];
+
+ return slot->tts_values[attnum];
+}
+
+/* Like ExecJustInnerVar, optimized for virtual slots */
+static Datum
+ExecJustInnerVarVirt(ExprState *state, ExprContext *econtext, bool *isnull)
+{
+ return ExecJustVarVirtImpl(state, econtext->ecxt_innertuple, isnull);
+}
+
+/* Like ExecJustOuterVar, optimized for virtual slots */
+static Datum
+ExecJustOuterVarVirt(ExprState *state, ExprContext *econtext, bool *isnull)
+{
+ return ExecJustVarVirtImpl(state, econtext->ecxt_outertuple, isnull);
+}
+
+/* Like ExecJustScanVar, optimized for virtual slots */
+static Datum
+ExecJustScanVarVirt(ExprState *state, ExprContext *econtext, bool *isnull)
+{
+ return ExecJustVarVirtImpl(state, econtext->ecxt_scantuple, isnull);
+}
+
+/* implementation of ExecJustAssign(Inner|Outer|Scan)VarVirt */
+static pg_attribute_always_inline Datum
+ExecJustAssignVarVirtImpl(ExprState *state, TupleTableSlot *inslot, bool *isnull)
+{
+ ExprEvalStep *op = &state->steps[0];
+ int attnum = op->d.assign_var.attnum;
+ int resultnum = op->d.assign_var.resultnum;
+ TupleTableSlot *outslot = state->resultslot;
+
+ /* see ExecJustVarVirtImpl for comments */
+
+ Assert(inslot->tts_ops == &TTSOpsVirtual);
+ Assert(TTS_FIXED(inslot));
+ Assert(attnum >= 0 && attnum < inslot->tts_nvalid);
+
+ outslot->tts_values[resultnum] = inslot->tts_values[attnum];
+ outslot->tts_isnull[resultnum] = inslot->tts_isnull[attnum];
+
+ return 0;
+}
+
+/* Like ExecJustAssignInnerVar, optimized for virtual slots */
+static Datum
+ExecJustAssignInnerVarVirt(ExprState *state, ExprContext *econtext, bool *isnull)
+{
+ return ExecJustAssignVarVirtImpl(state, econtext->ecxt_innertuple, isnull);
+}
+
+/* Like ExecJustAssignOuterVar, optimized for virtual slots */
+static Datum
+ExecJustAssignOuterVarVirt(ExprState *state, ExprContext *econtext, bool *isnull)
+{
+ return ExecJustAssignVarVirtImpl(state, econtext->ecxt_outertuple, isnull);
+}
+
+/* Like ExecJustAssignScanVar, optimized for virtual slots */
+static Datum
+ExecJustAssignScanVarVirt(ExprState *state, ExprContext *econtext, bool *isnull)
+{
+ return ExecJustAssignVarVirtImpl(state, econtext->ecxt_scantuple, isnull);
+}
+
#if defined(EEO_USE_COMPUTED_GOTO)
/*
* Comparator used when building address->opcode lookup table for
diff --git a/src/backend/jit/llvm/llvmjit_expr.c b/src/backend/jit/llvm/llvmjit_expr.c
index d1d07751698..be8d424c8d0 100644
--- a/src/backend/jit/llvm/llvmjit_expr.c
+++ b/src/backend/jit/llvm/llvmjit_expr.c
@@ -289,6 +289,9 @@ llvm_compile_expr(ExprState *state)
if (op->d.fetch.fixed)
tts_ops = op->d.fetch.kind;
+ /* step should not have been generated */
+ Assert(tts_ops != &TTSOpsVirtual);
+
if (opcode == EEOP_INNER_FETCHSOME)
v_slot = v_innerslot;
else if (opcode == EEOP_OUTER_FETCHSOME)
@@ -299,9 +302,6 @@ llvm_compile_expr(ExprState *state)
/*
* Check if all required attributes are available, or
* whether deforming is required.
- *
- * TODO: skip nvalid check if slot is fixed and known to
- * be a virtual slot.
*/
v_nvalid =
l_load_struct_gep(b, v_slot,
diff --git a/src/test/regress/expected/jit.out b/src/test/regress/expected/jit.out
index 151faaa2fde..e2e7483b60c 100644
--- a/src/test/regress/expected/jit.out
+++ b/src/test/regress/expected/jit.out
@@ -322,7 +322,7 @@ EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT a.data || b.data FROM jittest_s
Output: a.id, a.data
-> Hash
Output: b.data, b.id
- Hash Key: b.id; JIT-Expr: evalexpr_0_2, JIT-Deform-Outer: false
+ Hash Key: b.id; JIT-Expr: evalexpr_0_2
-> Seq Scan on public.jittest_simple b
Project: b.data, b.id; JIT-Expr: evalexpr_0_0, JIT-Deform-Scan: deform_0_1
JIT:
--
2.23.0.162.gf1d4a28250
v1-0006-WIP-explain-Show-per-phase-information-about-aggr.patch (text/x-diff; charset=us-ascii)
From 828d81c5619c407ef2bed5c6e05e5a23d65afc29 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Thu, 26 Sep 2019 14:58:10 -0700
Subject: [PATCH v1 06/12] WIP: explain: Show per-phase information about
aggregates in verbose mode.
Author:
Reviewed-By:
Discussion: https://postgr.es/m/
Backpatch:
---
src/backend/commands/explain.c | 520 +++++++++++++-----
src/backend/executor/execExpr.c | 7 +-
src/backend/executor/nodeAgg.c | 4 +-
src/include/executor/executor.h | 3 +-
src/include/executor/nodeAgg.h | 3 +
src/test/regress/expected/aggregates.out | 32 +-
src/test/regress/expected/groupingsets.out | 329 ++++++-----
src/test/regress/expected/inherit.out | 9 +-
src/test/regress/expected/join.out | 5 +-
src/test/regress/expected/limit.out | 6 +-
.../regress/expected/partition_aggregate.out | 102 ++--
src/test/regress/expected/select_distinct.out | 8 +-
src/test/regress/expected/select_parallel.out | 5 +-
src/test/regress/expected/subselect.out | 5 +-
14 files changed, 679 insertions(+), 359 deletions(-)
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 02455865d9f..2f3bd8a459a 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -84,14 +84,6 @@ static void show_sort_keys(SortState *sortstate, List *ancestors,
ExplainState *es);
static void show_merge_append_keys(MergeAppendState *mstate, List *ancestors,
ExplainState *es);
-static void show_agg_keys(AggState *astate, List *ancestors,
- ExplainState *es);
-static void show_grouping_sets(PlanState *planstate, Agg *agg,
- List *ancestors, ExplainState *es);
-static void show_grouping_set_keys(PlanState *planstate,
- Agg *aggnode, Sort *sortnode,
- List *context, bool useprefix,
- List *ancestors, ExplainState *es);
static void show_group_keys(GroupState *gstate, List *ancestors,
ExplainState *es);
static void show_sort_group_keys(PlanState *planstate, const char *qlabel,
@@ -103,6 +95,7 @@ static void show_sortorder_options(StringInfo buf, Node *sortexpr,
static void show_tablesample(TableSampleClause *tsc, PlanState *planstate,
List *ancestors, ExplainState *es);
static void show_sort_info(SortState *sortstate, ExplainState *es);
+static void show_agg_info(AggState *aggstate, List *ancestors, ExplainState *es);
static void show_hash_info(HashState *hashstate, ExplainState *es);
static void show_tidbitmap_info(BitmapHeapScanState *planstate,
ExplainState *es);
@@ -1872,12 +1865,12 @@ ExplainNode(PlanState *planstate, List *ancestors,
planstate, es);
break;
case T_Agg:
- show_agg_keys(castNode(AggState, planstate), ancestors, es);
show_upper_qual(plan->qual, planstate->qual, "Filter", planstate,
ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
+ show_agg_info((AggState *) planstate, ancestors, es);
break;
case T_Group:
show_group_keys(castNode(GroupState, planstate), ancestors, es);
@@ -2430,138 +2423,6 @@ show_merge_append_keys(MergeAppendState *mstate, List *ancestors,
ancestors, es);
}
-/*
- * Show the grouping keys for an Agg node.
- */
-static void
-show_agg_keys(AggState *astate, List *ancestors,
- ExplainState *es)
-{
- Agg *plan = (Agg *) astate->ss.ps.plan;
-
- if (plan->numCols > 0 || plan->groupingSets)
- {
- /* The key columns refer to the tlist of the child plan */
- ancestors = lcons(astate, ancestors);
-
- if (plan->groupingSets)
- show_grouping_sets(outerPlanState(astate), plan, ancestors, es);
- else
- show_sort_group_keys(outerPlanState(astate), "Group Key",
- plan->numCols, plan->grpColIdx,
- NULL, NULL, NULL,
- ancestors, es);
-
- ancestors = list_delete_first(ancestors);
- }
-}
-
-static void
-show_grouping_sets(PlanState *planstate, Agg *agg,
- List *ancestors, ExplainState *es)
-{
- List *context;
- bool useprefix;
- ListCell *lc;
-
- /* Set up deparsing context */
- context = set_deparse_context_planstate(es->deparse_cxt,
- (Node *) planstate,
- ancestors);
- useprefix = (list_length(es->rtable) > 1 || es->verbose);
-
- ExplainOpenGroup("Grouping Sets", "Grouping Sets", false, es);
-
- show_grouping_set_keys(planstate, agg, NULL,
- context, useprefix, ancestors, es);
-
- foreach(lc, agg->chain)
- {
- Agg *aggnode = lfirst(lc);
- Sort *sortnode = (Sort *) aggnode->plan.lefttree;
-
- show_grouping_set_keys(planstate, aggnode, sortnode,
- context, useprefix, ancestors, es);
- }
-
- ExplainCloseGroup("Grouping Sets", "Grouping Sets", false, es);
-}
-
-static void
-show_grouping_set_keys(PlanState *planstate,
- Agg *aggnode, Sort *sortnode,
- List *context, bool useprefix,
- List *ancestors, ExplainState *es)
-{
- Plan *plan = planstate->plan;
- char *exprstr;
- ListCell *lc;
- List *gsets = aggnode->groupingSets;
- AttrNumber *keycols = aggnode->grpColIdx;
- const char *keyname;
- const char *keysetname;
-
- if (aggnode->aggstrategy == AGG_HASHED || aggnode->aggstrategy == AGG_MIXED)
- {
- keyname = "Hash Key";
- keysetname = "Hash Keys";
- }
- else
- {
- keyname = "Group Key";
- keysetname = "Group Keys";
- }
-
- ExplainOpenGroup("Grouping Set", NULL, true, es);
-
- if (sortnode)
- {
- show_sort_group_keys(planstate, "Sort Key",
- sortnode->numCols, sortnode->sortColIdx,
- sortnode->sortOperators, sortnode->collations,
- sortnode->nullsFirst,
- ancestors, es);
- if (es->format == EXPLAIN_FORMAT_TEXT)
- es->indent++;
- }
-
- ExplainOpenGroup(keysetname, keysetname, false, es);
-
- foreach(lc, gsets)
- {
- List *result = NIL;
- ListCell *lc2;
-
- foreach(lc2, (List *) lfirst(lc))
- {
- Index i = lfirst_int(lc2);
- AttrNumber keyresno = keycols[i];
- TargetEntry *target = get_tle_by_resno(plan->targetlist,
- keyresno);
-
- if (!target)
- elog(ERROR, "no tlist entry for key %d", keyresno);
- /* Deparse the expression, showing any top-level cast */
- exprstr = deparse_expression((Node *) target->expr, context,
- useprefix, true);
-
- result = lappend(result, exprstr);
- }
-
- if (!result && es->format == EXPLAIN_FORMAT_TEXT)
- ExplainPropertyText(keyname, "()", es);
- else
- ExplainPropertyListNested(keyname, result, es);
- }
-
- ExplainCloseGroup(keysetname, keysetname, false, es);
-
- if (sortnode && es->format == EXPLAIN_FORMAT_TEXT)
- es->indent--;
-
- ExplainCloseGroup("Grouping Set", NULL, true, es);
-}
-
/*
* Show the grouping keys for a Group node.
*/
@@ -2845,6 +2706,383 @@ show_sort_info(SortState *sortstate, ExplainState *es)
}
}
+/*
+ * Generate an expression-like string describing the computations for a
+ * phase's transition / combiner function.
+ */
+static char *
+exprstr_for_agg_phase(AggState *aggstate, AggStatePerPhase perphase, List *ancestors, ExplainState *es)
+{
+ PlanState *planstate = &aggstate->ss.ps;
+ StringInfoData transbuf;
+ List *context;
+ bool useprefix;
+ bool isCombine = DO_AGGSPLIT_COMBINE(aggstate->aggsplit);
+ ListCell *lc;
+
+ initStringInfo(&transbuf);
+
+ /* Set up deparsing context */
+ context = set_deparse_context_planstate(es->deparse_cxt,
+ (Node *) planstate,
+ ancestors);
+ useprefix = list_length(es->rtable) > 1;
+
+ for (int transno = 0; transno < aggstate->numtrans; transno++)
+ {
+ AggStatePerTrans pertrans = &aggstate->pertrans[transno];
+ int count = 0;
+ bool first;
+
+ if (perphase->uses_sorting)
+ count += Max(perphase->numsets, 1);
+
+ if (perphase->uses_hashing)
+ count += aggstate->num_hashes;
+
+ if (transno != 0)
+ appendStringInfoString(&transbuf, ", ");
+
+ if (pertrans->aggref->aggfilter && !isCombine)
+ {
+ appendStringInfo(&transbuf, "FILTER (%s) && ",
+ deparse_expression((Node *) pertrans->aggref->aggfilter,
+ context, useprefix, false));
+ }
+
+ /*
+ * XXX: should we instead somehow encode this as separate elements in
+ * non-text mode?
+ */
+ /* simplify for text output */
+ if (count > 1 || es->format != EXPLAIN_FORMAT_TEXT)
+ appendStringInfo(&transbuf, "%d * ", count);
+
+ appendStringInfo(&transbuf, "%s(TRANS",
+ get_func_name(pertrans->transfn_oid));
+
+ if (isCombine && pertrans->deserialfn_oid)
+ {
+ first = true;
+ appendStringInfo(&transbuf, ", %s(",
+ get_func_name(pertrans->deserialfn_oid));
+ }
+ else
+ first = false;
+
+ foreach(lc, pertrans->aggref->args)
+ {
+ TargetEntry *tle = lfirst(lc);
+
+ if (!first)
+ appendStringInfoString(&transbuf, ", ");
+
+ first = false;
+ appendStringInfo(&transbuf, "%s",
+ deparse_expression((Node *) tle->expr,
+ context, useprefix, false));
+ }
+
+ if (isCombine && pertrans->deserialfn_oid)
+ appendStringInfoString(&transbuf, ")");
+ appendStringInfoString(&transbuf, ")");
+ }
+
+ return transbuf.data;
+}
+
+static void
+show_agg_group_info(AggState *aggstate, AttrNumber *keycols, int length,
+ ExprState *expr, const char *label,
+ List *context, List *ancestors, ExplainState *es)
+{
+ bool useprefix = (list_length(es->rtable) > 1 || es->verbose);
+ List *result = NIL;
+
+ for (int colno = 0; colno < length; colno++)
+ {
+ char *exprstr;
+ AttrNumber keyresno = keycols[colno];
+ TargetEntry *target = get_tle_by_resno(outerPlanState(aggstate)->plan->targetlist,
+ keyresno);
+
+ if (!target)
+ elog(ERROR, "no tlist entry for key %d", keyresno);
+ /* Deparse the expression, showing any top-level cast */
+ exprstr = deparse_expression((Node *) target->expr, context,
+ useprefix, true);
+
+ result = lappend(result, exprstr);
+ }
+
+ if (es->format == EXPLAIN_FORMAT_TEXT)
+ {
+ ListCell *lc;
+ bool first = true;
+
+ appendStringInfoSpaces(es->str, es->indent * 2);
+
+ if (result != NIL)
+ {
+ appendStringInfo(es->str, "%s: ", label);
+
+ foreach(lc, result)
+ {
+ if (!first)
+ appendStringInfoString(es->str, ", ");
+ appendStringInfoString(es->str, (const char *) lfirst(lc));
+ first = false;
+ }
+ }
+ else
+ appendStringInfo(es->str, "%s", label);
+
+ if (expr && es->jit_details)
+ {
+ appendStringInfoString(es->str, "; ");
+ show_jit_expr_details(expr, es);
+ }
+
+ appendStringInfoChar(es->str, '\n');
+ }
+ else
+ {
+ ExplainOpenGroup("Group", NULL, true, es);
+ ExplainPropertyText("Method", label, es);
+ ExplainPropertyList("Key", result, es);
+ ExplainCloseGroup("Group", NULL, true, es);
+ }
+
+}
+
+/*
+ * Show information about Agg nodes.
+ */
+static void
+show_agg_info(AggState *aggstate, List *ancestors, ExplainState *es)
+{
+ Agg *plan = (Agg *) aggstate->ss.ps.plan;
+
+ if (!plan->groupingSets &&
+ (!es->verbose && !es->jit_details && es->format == EXPLAIN_FORMAT_TEXT))
+ {
+ /* The key columns refer to the tlist of the child plan */
+ ancestors = lcons(aggstate, ancestors);
+ show_sort_group_keys(outerPlanState(aggstate), "Group Key",
+ plan->numCols, plan->grpColIdx,
+ NULL, NULL, NULL,
+ ancestors, es);
+ ancestors = list_delete_first(ancestors);
+
+ return;
+ }
+
+ ExplainOpenGroup("Phases", "Phases", false, es);
+
+ for (int phaseno = aggstate->numphases - 1; phaseno >= 0; phaseno--)
+ {
+ AggStatePerPhase perphase = &aggstate->phases[phaseno];
+ Sort *sortnode = perphase->sortnode;
+ char *exprstr;
+ bool has_zero_length = false;
+ List *context;
+ List *strategy = NIL;
+ char *plain_strategy;
+
+ if (!perphase->evaltrans)
+ continue;
+
+ for (int i = 0; i < perphase->numsets; i++)
+ {
+ if (perphase->gset_lengths[i] == 0)
+ has_zero_length = true;
+ }
+
+ switch (perphase->aggstrategy)
+ {
+ case AGG_PLAIN:
+ strategy = lappend(strategy, "All");
+
+ if (aggstate->aggstrategy == AGG_MIXED && phaseno == 1)
+ strategy = lappend(strategy, "Hash");
+ plain_strategy = "All Group";
+ break;
+ case AGG_SORTED:
+ if (!perphase->sortnode)
+ {
+ strategy = lappend(strategy, "Sorted Input");
+ plain_strategy = "Sorted Input Group";
+ }
+ else
+ {
+ strategy = lappend(strategy, "Sort");
+ plain_strategy = "Sort Group";
+ }
+
+ if (has_zero_length)
+ strategy = lappend(strategy, "All");
+
+ if (aggstate->aggstrategy == AGG_MIXED && phaseno == 1)
+ strategy = lappend(strategy, "Hash");
+
+ break;
+ case AGG_HASHED:
+ strategy = lappend(strategy, "Hash");
+ plain_strategy = "Hash Group";
+ break;
+ case AGG_MIXED:
+ if (has_zero_length)
+ strategy = lappend(strategy, "All");
+ strategy = lappend(strategy, "Hash");
+ plain_strategy = "???";
+ break;
+ }
+
+ exprstr = exprstr_for_agg_phase(aggstate, perphase, ancestors, es);
+
+ ExplainOpenGroup("Phase", NULL, true, es);
+
+ /* The key columns refer to the tlist of the child plan */
+ ancestors = lcons(aggstate, ancestors);
+ context = set_deparse_context_planstate(es->deparse_cxt,
+ (Node *) outerPlanState(aggstate),
+ ancestors);
+
+ if (es->format == EXPLAIN_FORMAT_TEXT)
+ {
+ ListCell *lc;
+ bool first = true;
+
+ /* output phase data */
+ appendStringInfoSpaces(es->str, es->indent * 2);
+ appendStringInfo(es->str, "Phase %d using strategy \"",
+ phaseno);
+
+ foreach(lc, strategy)
+ {
+ if (!first)
+ appendStringInfoString(es->str, " & ");
+ first = false;
+ appendStringInfoString(es->str, (const char *) lfirst(lc));
+ }
+ appendStringInfoString(es->str, "\":\n");
+ es->indent++;
+ }
+ else
+ {
+ ExplainPropertyInteger("Phase-Number", NULL, phaseno, es);
+ ExplainPropertyList("Strategy", strategy, es);
+ }
+
+ if (sortnode)
+ {
+ show_sort_group_keys(outerPlanState(aggstate), "Sort Key",
+ sortnode->numCols, sortnode->sortColIdx,
+ sortnode->sortOperators, sortnode->collations,
+ sortnode->nullsFirst,
+ ancestors, es);
+ }
+
+ if (es->format == EXPLAIN_FORMAT_TEXT)
+ {
+ if (aggstate->numtrans > 0)
+ {
+ appendStringInfoSpaces(es->str, es->indent * 2);
+ appendStringInfo(es->str, "Transition Function: %s",
+ exprstr);
+ if (es->jit_details)
+ {
+ appendStringInfoString(es->str, "; ");
+ show_jit_expr_details(perphase->evaltrans, es);
+ }
+ appendStringInfoString(es->str, "\n");
+ }
+ }
+ else
+ {
+ if (es->jit_details)
+ {
+ ExplainOpenGroup("Transition Function", "Transition Function", true, es);
+ ExplainPropertyText("Expr", exprstr, es);
+ if (es->jit_details && aggstate->numtrans > 0)
+ show_jit_expr_details(perphase->evaltrans, es);
+ ExplainCloseGroup("Transition Function", "Transition Function", true, es);
+ }
+ else
+ ExplainPropertyText("Transition Function", exprstr, es);
+ }
+
+ ExplainOpenGroup("Groups", "Groups", false, es);
+
+ /* output data about each group */
+
+ if (perphase->uses_sorting)
+ {
+ if (perphase->numsets == 0)
+ {
+ int length = perphase->aggnode->numCols;
+ ExprState *expr = NULL;
+
+ if (length > 0)
+ expr = perphase->eqfunctions[perphase->aggnode->numCols - 1];
+
+ show_agg_group_info(aggstate, perphase->aggnode->grpColIdx,
+ length, expr, plain_strategy, context,
+ ancestors, es);
+ }
+
+ for (int sortno = 0; sortno < perphase->numsets; sortno++)
+ {
+ int length = perphase->gset_lengths[sortno];
+ ExprState *expr = NULL;
+ char *sort_strat;
+
+ if (length == 0)
+ sort_strat = "All Group";
+ else if (sortnode)
+ {
+ sort_strat = "Sorted Group";
+ expr = perphase->eqfunctions[length - 1];
+ }
+ else
+ {
+ sort_strat = "Sorted Input Group";
+ expr = perphase->eqfunctions[length - 1];
+ }
+
+ show_agg_group_info(aggstate, perphase->aggnode->grpColIdx, length,
+ expr, sort_strat, context, ancestors, es);
+ }
+ }
+
+ if (perphase->uses_hashing)
+ {
+ for (int hashno = 0; hashno < aggstate->num_hashes; hashno++)
+ {
+ AggStatePerHash perhash = &aggstate->perhash[hashno];
+
+ show_agg_group_info(aggstate, perhash->hashGrpColIdxInput,
+ perhash->numCols,
+ perhash->hashtable->tab_eq_func,
+ "Hash Group", context, ancestors, es);
+ }
+		}
+
+		ancestors = list_delete_first(ancestors);
+
+ ExplainCloseGroup("Groups", "Groups", false, es);
+
+ if (es->format == EXPLAIN_FORMAT_TEXT)
+ es->indent--;
+
+ /* TODO: should really show memory usage here */
+
+ ExplainCloseGroup("Phase", NULL, true, es);
+ }
+
+ ExplainCloseGroup("Phases", "Phases", false, es);
+}
+
/*
* Show information on hash buckets/batches.
*/
diff --git a/src/backend/executor/execExpr.c b/src/backend/executor/execExpr.c
index 2c792d59b58..512ab4029ef 100644
--- a/src/backend/executor/execExpr.c
+++ b/src/backend/executor/execExpr.c
@@ -2919,8 +2919,7 @@ ExecInitCoerceToDomain(ExprEvalStep *scratch, CoerceToDomain *ctest,
* transition for each of the concurrently computed grouping sets.
*/
ExprState *
-ExecBuildAggTrans(AggState *aggstate, AggStatePerPhase phase,
- bool doSort, bool doHash)
+ExecBuildAggTrans(AggState *aggstate, AggStatePerPhase phase)
{
ExprState *state = makeNode(ExprState);
PlanState *parent = &aggstate->ss.ps;
@@ -3146,7 +3145,7 @@ ExecBuildAggTrans(AggState *aggstate, AggStatePerPhase phase,
* applicable.
*/
setoff = 0;
- if (doSort)
+ if (phase->uses_sorting)
{
int processGroupingSets = Max(phase->numsets, 1);
@@ -3158,7 +3157,7 @@ ExecBuildAggTrans(AggState *aggstate, AggStatePerPhase phase,
}
}
- if (doHash)
+ if (phase->uses_hashing)
{
int numHashes = aggstate->num_hashes;
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 58c376aeb74..d447009e002 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -2904,8 +2904,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
else
Assert(false);
- phase->evaltrans = ExecBuildAggTrans(aggstate, phase, dosort, dohash);
+ phase->uses_hashing = dohash;
+ phase->uses_sorting = dosort;
+ phase->evaltrans = ExecBuildAggTrans(aggstate, phase);
}
return aggstate;
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 6298c7c8cad..6e2e7e14bac 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -249,8 +249,7 @@ extern ExprState *ExecInitExprWithParams(Expr *node, ParamListInfo ext_params);
extern ExprState *ExecInitQual(List *qual, PlanState *parent);
extern ExprState *ExecInitCheck(List *qual, PlanState *parent);
extern List *ExecInitExprList(List *nodes, PlanState *parent);
-extern ExprState *ExecBuildAggTrans(AggState *aggstate, struct AggStatePerPhaseData *phase,
- bool doSort, bool doHash);
+extern ExprState *ExecBuildAggTrans(AggState *aggstate, struct AggStatePerPhaseData *phase);
extern ExprState *ExecBuildGroupingEqual(TupleDesc ldesc, TupleDesc rdesc,
const TupleTableSlotOps *lops, const TupleTableSlotOps *rops,
int numCols,
diff --git a/src/include/executor/nodeAgg.h b/src/include/executor/nodeAgg.h
index 68c9e5f5400..4f3e1377cdf 100644
--- a/src/include/executor/nodeAgg.h
+++ b/src/include/executor/nodeAgg.h
@@ -280,6 +280,9 @@ typedef struct AggStatePerPhaseData
Sort *sortnode; /* Sort node for input ordering for phase */
ExprState *evaltrans; /* evaluation of transition functions */
+
+ bool uses_hashing; /* phase uses hashing */
+ bool uses_sorting; /* phase uses sorting */
} AggStatePerPhaseData;
/*
diff --git a/src/test/regress/expected/aggregates.out b/src/test/regress/expected/aggregates.out
index 683bcaedf5f..b3732b68d77 100644
--- a/src/test/regress/expected/aggregates.out
+++ b/src/test/regress/expected/aggregates.out
@@ -504,8 +504,8 @@ from generate_series(1, 3) s1,
lateral (select s2, sum(s1 + s2) sm
from generate_series(1, 3) s2 group by s2) ss
order by 1, 2;
- QUERY PLAN
-------------------------------------------------------------------
+ QUERY PLAN
+-----------------------------------------------------------------------
Sort
Output: s1.s1, s2.s2, (sum((s1.s1 + s2.s2)))
Sort Key: s1.s1, s2.s2
@@ -516,11 +516,13 @@ order by 1, 2;
Function Call: generate_series(1, 3)
-> HashAggregate
Project: s2.s2, sum((s1.s1 + s2.s2))
- Group Key: s2.s2
+ Phase 0 using strategy "Hash":
+ Transition Function: int4_sum(TRANS, (s1.s1 + s2.s2))
+ Hash Group: s2.s2
-> Function Scan on pg_catalog.generate_series s2
Output: s2.s2
Function Call: generate_series(1, 3)
-(14 rows)
+(16 rows)
select s1, s2, sm
from generate_series(1, 3) s1,
@@ -544,8 +546,8 @@ explain (verbose, costs off)
select array(select sum(x+y) s
from generate_series(1,3) y group by y order by s)
from generate_series(1,3) x;
- QUERY PLAN
--------------------------------------------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------
Function Scan on pg_catalog.generate_series x
Project: (SubPlan 1)
Function Call: generate_series(1, 3)
@@ -555,11 +557,13 @@ select array(select sum(x+y) s
Sort Key: (sum((x.x + y.y)))
-> HashAggregate
Project: sum((x.x + y.y)), y.y
- Group Key: y.y
+ Phase 0 using strategy "Hash":
+ Transition Function: int4_sum(TRANS, (x.x + y.y))
+ Hash Group: y.y
-> Function Scan on pg_catalog.generate_series y
Output: y.y
Function Call: generate_series(1, 3)
-(13 rows)
+(15 rows)
select array(select sum(x+y) s
from generate_series(1,3) y group by y order by s)
@@ -2250,18 +2254,24 @@ SET enable_indexonlyscan = off;
-- regr_count(float8, float8) covers int8inc_float8_float8 and aggregates with > 1 arg
EXPLAIN (COSTS OFF, VERBOSE)
SELECT variance(unique1::int4), sum(unique1::int8), regr_count(unique1::float8, unique1::float8) FROM tenk1;
- QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Finalize Aggregate
Project: variance(unique1), sum((unique1)::bigint), regr_count((unique1)::double precision, (unique1)::double precision)
+ Phase 1 using strategy "All":
+ Transition Function: numeric_poly_combine(TRANS, numeric_poly_deserialize((PARTIAL variance(unique1)))), int8_avg_combine(TRANS, int8_avg_deserialize((PARTIAL sum((unique1)::bigint)))), int8pl(TRANS, (PARTIAL regr_count((unique1)::double precision, (unique1)::double precision)))
+ All Group
-> Gather
Output: (PARTIAL variance(unique1)), (PARTIAL sum((unique1)::bigint)), (PARTIAL regr_count((unique1)::double precision, (unique1)::double precision))
Workers Planned: 4
-> Partial Aggregate
Project: PARTIAL variance(unique1), PARTIAL sum((unique1)::bigint), PARTIAL regr_count((unique1)::double precision, (unique1)::double precision)
+ Phase 1 using strategy "All":
+ Transition Function: int4_accum(TRANS, unique1), int8_avg_accum(TRANS, (unique1)::bigint), int8inc_float8_float8(TRANS, (unique1)::double precision, (unique1)::double precision)
+ All Group
-> Parallel Seq Scan on public.tenk1
Output: unique1, unique2, two, four, ten, twenty, hundred, thousand, twothousand, fivethous, tenthous, odd, even, stringu1, stringu2, string4
-(9 rows)
+(15 rows)
SELECT variance(unique1::int4), sum(unique1::int8), regr_count(unique1::float8, unique1::float8) FROM tenk1;
variance | sum | regr_count
diff --git a/src/test/regress/expected/groupingsets.out b/src/test/regress/expected/groupingsets.out
index c1f802c88a7..7bb052c568b 100644
--- a/src/test/regress/expected/groupingsets.out
+++ b/src/test/regress/expected/groupingsets.out
@@ -369,12 +369,13 @@ select g as alias1, g as alias2
QUERY PLAN
------------------------------------------------
GroupAggregate
- Group Key: g, g
- Group Key: g
+ Phase 1 using strategy "Sorted Input":
+ Sorted Input Group: g, g
+ Sorted Input Group: g
-> Sort
Sort Key: g
-> Function Scan on generate_series g
-(6 rows)
+(7 rows)
select g as alias1, g as alias2
from generate_series(1,3) g
@@ -640,15 +641,16 @@ select a, b, sum(v.x)
-- Test reordering of grouping sets
explain (costs off)
select * from gstest1 group by grouping sets((a,b,v),(v)) order by v,b,a;
- QUERY PLAN
-------------------------------------------------------------------------------
+ QUERY PLAN
+------------------------------------------------------------------------------------
GroupAggregate
- Group Key: "*VALUES*".column3, "*VALUES*".column2, "*VALUES*".column1
- Group Key: "*VALUES*".column3
+ Phase 1 using strategy "Sorted Input":
+ Sorted Input Group: "*VALUES*".column3, "*VALUES*".column2, "*VALUES*".column1
+ Sorted Input Group: "*VALUES*".column3
-> Sort
Sort Key: "*VALUES*".column3, "*VALUES*".column2, "*VALUES*".column1
-> Values Scan on "*VALUES*"
-(6 rows)
+(7 rows)
-- Agg level check. This query should error out.
select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
@@ -720,16 +722,18 @@ select a,count(*) from gstest2 group by rollup(a) having a is distinct from 1 or
explain (costs off)
select a,count(*) from gstest2 group by rollup(a) having a is distinct from 1 order by a;
- QUERY PLAN
-----------------------------------
+ QUERY PLAN
+------------------------------------------------
GroupAggregate
- Group Key: a
- Group Key: ()
Filter: (a IS DISTINCT FROM 1)
+ Phase 1 using strategy "Sorted Input & All":
+ Transition Function: 2 * int8inc(TRANS)
+ Sorted Input Group: a
+ All Group
-> Sort
Sort Key: a
-> Seq Scan on gstest2
-(7 rows)
+(9 rows)
select v.c, (select count(*) from gstest2 group by () having v.c)
from (values (false),(true)) v(c) order by v.c;
@@ -749,12 +753,14 @@ explain (costs off)
-> Values Scan on "*VALUES*"
SubPlan 1
-> Aggregate
- Group Key: ()
Filter: "*VALUES*".column1
+ Phase 1 using strategy "All":
+ Transition Function: int8inc(TRANS)
+ All Group
-> Result
One-Time Filter: "*VALUES*".column1
-> Seq Scan on gstest2
-(10 rows)
+(12 rows)
-- HAVING with GROUPING queries
select ten, grouping(ten) from onek
@@ -968,15 +974,17 @@ select a, b, grouping(a,b), sum(v), count(*), max(v)
explain (costs off) select a, b, grouping(a,b), sum(v), count(*), max(v)
from gstest1 group by grouping sets ((a),(b)) order by 3,1,2;
- QUERY PLAN
---------------------------------------------------------------------------------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------
Sort
Sort Key: (GROUPING("*VALUES*".column1, "*VALUES*".column2)), "*VALUES*".column1, "*VALUES*".column2
-> HashAggregate
- Hash Key: "*VALUES*".column1
- Hash Key: "*VALUES*".column2
+ Phase 0 using strategy "Hash":
+ Transition Function: 2 * int4_sum(TRANS, "*VALUES*".column3), 2 * int8inc(TRANS), 2 * int4larger(TRANS, "*VALUES*".column3)
+ Hash Group: "*VALUES*".column1
+ Hash Group: "*VALUES*".column2
-> Values Scan on "*VALUES*"
-(6 rows)
+(8 rows)
select a, b, grouping(a,b), sum(v), count(*), max(v)
from gstest1 group by cube(a,b) order by 3,1,2;
@@ -1002,34 +1010,40 @@ select a, b, grouping(a,b), sum(v), count(*), max(v)
explain (costs off) select a, b, grouping(a,b), sum(v), count(*), max(v)
from gstest1 group by cube(a,b) order by 3,1,2;
- QUERY PLAN
---------------------------------------------------------------------------------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------
Sort
Sort Key: (GROUPING("*VALUES*".column1, "*VALUES*".column2)), "*VALUES*".column1, "*VALUES*".column2
-> MixedAggregate
- Hash Key: "*VALUES*".column1, "*VALUES*".column2
- Hash Key: "*VALUES*".column1
- Hash Key: "*VALUES*".column2
- Group Key: ()
+ Phase 1 using strategy "All & Hash":
+ Transition Function: 4 * int4_sum(TRANS, "*VALUES*".column3), 4 * int8inc(TRANS), 4 * int4larger(TRANS, "*VALUES*".column3)
+ All Group
+ Hash Group: "*VALUES*".column1, "*VALUES*".column2
+ Hash Group: "*VALUES*".column1
+ Hash Group: "*VALUES*".column2
-> Values Scan on "*VALUES*"
-(8 rows)
+(10 rows)
-- shouldn't try and hash
explain (costs off)
select a, b, grouping(a,b), array_agg(v order by v)
from gstest1 group by cube(a,b);
- QUERY PLAN
-----------------------------------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------------
GroupAggregate
- Group Key: "*VALUES*".column1, "*VALUES*".column2
- Group Key: "*VALUES*".column1
- Group Key: ()
- Sort Key: "*VALUES*".column2
- Group Key: "*VALUES*".column2
+ Phase 2 using strategy "Sort":
+ Sort Key: "*VALUES*".column2
+ Transition Function: array_agg_transfn(TRANS, "*VALUES*".column3)
+ Sorted Group: "*VALUES*".column2
+ Phase 1 using strategy "Sorted Input & All":
+ Transition Function: 3 * array_agg_transfn(TRANS, "*VALUES*".column3)
+ Sorted Input Group: "*VALUES*".column1, "*VALUES*".column2
+ Sorted Input Group: "*VALUES*".column1
+ All Group
-> Sort
Sort Key: "*VALUES*".column1, "*VALUES*".column2
-> Values Scan on "*VALUES*"
-(9 rows)
+(13 rows)
-- unsortable cases
select unsortable_col, count(*)
@@ -1065,17 +1079,19 @@ explain (costs off)
count(*), sum(v)
from gstest4 group by grouping sets ((unhashable_col),(unsortable_col))
order by 3,5;
- QUERY PLAN
-------------------------------------------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------------
Sort
Sort Key: (GROUPING(unhashable_col, unsortable_col)), (sum(v))
-> MixedAggregate
- Hash Key: unsortable_col
- Group Key: unhashable_col
+ Phase 1 using strategy "Sorted Input & Hash":
+ Transition Function: 2 * int8inc(TRANS), 2 * int4_sum(TRANS, v)
+ Sorted Input Group: unhashable_col
+ Hash Group: unsortable_col
-> Sort
Sort Key: unhashable_col
-> Seq Scan on gstest4
-(8 rows)
+(10 rows)
select unhashable_col, unsortable_col,
grouping(unhashable_col, unsortable_col),
@@ -1108,17 +1124,19 @@ explain (costs off)
count(*), sum(v)
from gstest4 group by grouping sets ((v,unhashable_col),(v,unsortable_col))
order by 3,5;
- QUERY PLAN
-------------------------------------------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------------
Sort
Sort Key: (GROUPING(unhashable_col, unsortable_col)), (sum(v))
-> MixedAggregate
- Hash Key: v, unsortable_col
- Group Key: v, unhashable_col
+ Phase 1 using strategy "Sorted Input & Hash":
+ Transition Function: 2 * int8inc(TRANS), 2 * int4_sum(TRANS, v)
+ Sorted Input Group: v, unhashable_col
+ Hash Group: v, unsortable_col
-> Sort
Sort Key: v, unhashable_col
-> Seq Scan on gstest4
-(8 rows)
+(10 rows)
-- empty input: first is 0 rows, second 1, third 3 etc.
select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a);
@@ -1128,13 +1146,15 @@ select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a)
explain (costs off)
select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a);
- QUERY PLAN
---------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------
HashAggregate
- Hash Key: a, b
- Hash Key: a
+ Phase 0 using strategy "Hash":
+ Transition Function: 2 * int4_sum(TRANS, v), 2 * int8inc(TRANS)
+ Hash Group: a, b
+ Hash Group: a
-> Seq Scan on gstest_empty
-(4 rows)
+(6 rows)
select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),());
a | b | sum | count
@@ -1152,15 +1172,17 @@ select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),()
explain (costs off)
select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),(),(),());
- QUERY PLAN
---------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------
MixedAggregate
- Hash Key: a, b
- Group Key: ()
- Group Key: ()
- Group Key: ()
+ Phase 1 using strategy "All & Hash":
+ Transition Function: 4 * int4_sum(TRANS, v), 4 * int8inc(TRANS)
+ All Group
+ All Group
+ All Group
+ Hash Group: a, b
-> Seq Scan on gstest_empty
-(6 rows)
+(8 rows)
select sum(v), count(*) from gstest_empty group by grouping sets ((),(),());
sum | count
@@ -1172,14 +1194,16 @@ select sum(v), count(*) from gstest_empty group by grouping sets ((),(),());
explain (costs off)
select sum(v), count(*) from gstest_empty group by grouping sets ((),(),());
- QUERY PLAN
---------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------
Aggregate
- Group Key: ()
- Group Key: ()
- Group Key: ()
+ Phase 1 using strategy "All":
+ Transition Function: 3 * int4_sum(TRANS, v), 3 * int8inc(TRANS)
+ All Group
+ All Group
+ All Group
-> Seq Scan on gstest_empty
-(5 rows)
+(7 rows)
-- check that functionally dependent cols are not nulled
select a, d, grouping(a,b,c)
@@ -1197,13 +1221,14 @@ explain (costs off)
select a, d, grouping(a,b,c)
from gstest3
group by grouping sets ((a,b), (a,c));
- QUERY PLAN
----------------------------
+ QUERY PLAN
+----------------------------------
HashAggregate
- Hash Key: a, b
- Hash Key: a, c
+ Phase 0 using strategy "Hash":
+ Hash Group: a, b
+ Hash Group: a, c
-> Seq Scan on gstest3
-(4 rows)
+(5 rows)
-- simple rescan tests
select a, b, sum(v.x)
@@ -1224,17 +1249,19 @@ explain (costs off)
from (values (1),(2)) v(x), gstest_data(v.x)
group by grouping sets (a,b)
order by 3, 1, 2;
- QUERY PLAN
----------------------------------------------------------------------
+ QUERY PLAN
+------------------------------------------------------------------------
Sort
Sort Key: (sum("*VALUES*".column1)), gstest_data.a, gstest_data.b
-> HashAggregate
- Hash Key: gstest_data.a
- Hash Key: gstest_data.b
+ Phase 0 using strategy "Hash":
+ Transition Function: 2 * int4_sum(TRANS, "*VALUES*".column1)
+ Hash Group: gstest_data.a
+ Hash Group: gstest_data.b
-> Nested Loop
-> Values Scan on "*VALUES*"
-> Function Scan on gstest_data
-(8 rows)
+(10 rows)
select *
from (values (1),(2)) v(x),
@@ -1280,16 +1307,18 @@ select a, b, grouping(a,b), sum(v), count(*), max(v)
explain (costs off)
select a, b, grouping(a,b), sum(v), count(*), max(v)
from gstest1 group by grouping sets ((a,b),(a+1,b+1),(a+2,b+2)) order by 3,6;
- QUERY PLAN
--------------------------------------------------------------------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------
Sort
Sort Key: (GROUPING("*VALUES*".column1, "*VALUES*".column2)), (max("*VALUES*".column3))
-> HashAggregate
- Hash Key: "*VALUES*".column1, "*VALUES*".column2
- Hash Key: ("*VALUES*".column1 + 1), ("*VALUES*".column2 + 1)
- Hash Key: ("*VALUES*".column1 + 2), ("*VALUES*".column2 + 2)
+ Phase 0 using strategy "Hash":
+ Transition Function: 3 * int4_sum(TRANS, "*VALUES*".column3), 3 * int8inc(TRANS), 3 * int4larger(TRANS, "*VALUES*".column3)
+ Hash Group: "*VALUES*".column1, "*VALUES*".column2
+ Hash Group: ("*VALUES*".column1 + 1), ("*VALUES*".column2 + 1)
+ Hash Group: ("*VALUES*".column1 + 2), ("*VALUES*".column2 + 2)
-> Values Scan on "*VALUES*"
-(7 rows)
+(9 rows)
select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
from gstest2 group by cube (a,b) order by rsum, a, b;
@@ -1308,20 +1337,22 @@ select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
explain (costs off)
select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
from gstest2 group by cube (a,b) order by rsum, a, b;
- QUERY PLAN
----------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------
Sort
Sort Key: (sum((sum(c))) OVER (?)), a, b
-> WindowAgg
-> Sort
Sort Key: a, b
-> MixedAggregate
- Hash Key: a, b
- Hash Key: a
- Hash Key: b
- Group Key: ()
+ Phase 1 using strategy "All & Hash":
+ Transition Function: 4 * int4_sum(TRANS, c)
+ All Group
+ Hash Group: a, b
+ Hash Group: a
+ Hash Group: b
-> Seq Scan on gstest2
-(11 rows)
+(13 rows)
select a, b, sum(v.x)
from (values (1),(2)) v(x), gstest_data(v.x)
@@ -1346,19 +1377,21 @@ explain (costs off)
select a, b, sum(v.x)
from (values (1),(2)) v(x), gstest_data(v.x)
group by cube (a,b) order by a,b;
- QUERY PLAN
-------------------------------------------------
+ QUERY PLAN
+------------------------------------------------------------------------
Sort
Sort Key: gstest_data.a, gstest_data.b
-> MixedAggregate
- Hash Key: gstest_data.a, gstest_data.b
- Hash Key: gstest_data.a
- Hash Key: gstest_data.b
- Group Key: ()
+ Phase 1 using strategy "All & Hash":
+ Transition Function: 4 * int4_sum(TRANS, "*VALUES*".column1)
+ All Group
+ Hash Group: gstest_data.a, gstest_data.b
+ Hash Group: gstest_data.a
+ Hash Group: gstest_data.b
-> Nested Loop
-> Values Scan on "*VALUES*"
-> Function Scan on gstest_data
-(10 rows)
+(12 rows)
-- Verify that we correctly handle the child node returning a
-- non-minimal slot, which happens if the input is pre-sorted,
@@ -1366,19 +1399,23 @@ explain (costs off)
BEGIN;
SET LOCAL enable_hashagg = false;
EXPLAIN (COSTS OFF) SELECT a, b, count(*), max(a), max(b) FROM gstest3 GROUP BY GROUPING SETS(a, b,()) ORDER BY a, b;
- QUERY PLAN
----------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------------------------------------------
Sort
Sort Key: a, b
-> GroupAggregate
- Group Key: a
- Group Key: ()
- Sort Key: b
- Group Key: b
+ Phase 2 using strategy "Sort":
+ Sort Key: b
+ Transition Function: int8inc(TRANS), int4larger(TRANS, a), int4larger(TRANS, b)
+ Sorted Group: b
+ Phase 1 using strategy "Sorted Input & All":
+ Transition Function: 2 * int8inc(TRANS), 2 * int4larger(TRANS, a), 2 * int4larger(TRANS, b)
+ Sorted Input Group: a
+ All Group
-> Sort
Sort Key: a
-> Seq Scan on gstest3
-(10 rows)
+(14 rows)
SELECT a, b, count(*), max(a), max(b) FROM gstest3 GROUP BY GROUPING SETS(a, b,()) ORDER BY a, b;
a | b | count | max | max
@@ -1392,17 +1429,21 @@ SELECT a, b, count(*), max(a), max(b) FROM gstest3 GROUP BY GROUPING SETS(a, b,(
SET LOCAL enable_seqscan = false;
EXPLAIN (COSTS OFF) SELECT a, b, count(*), max(a), max(b) FROM gstest3 GROUP BY GROUPING SETS(a, b,()) ORDER BY a, b;
- QUERY PLAN
-------------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------------------------------------------
Sort
Sort Key: a, b
-> GroupAggregate
- Group Key: a
- Group Key: ()
- Sort Key: b
- Group Key: b
+ Phase 2 using strategy "Sort":
+ Sort Key: b
+ Transition Function: int8inc(TRANS), int4larger(TRANS, a), int4larger(TRANS, b)
+ Sorted Group: b
+ Phase 1 using strategy "Sorted Input & All":
+ Transition Function: 2 * int8inc(TRANS), 2 * int4larger(TRANS, a), 2 * int4larger(TRANS, b)
+ Sorted Input Group: a
+ All Group
-> Index Scan using gstest3_pkey on gstest3
-(8 rows)
+(12 rows)
SELECT a, b, count(*), max(a), max(b) FROM gstest3 GROUP BY GROUPING SETS(a, b,()) ORDER BY a, b;
a | b | count | max | max
@@ -1549,22 +1590,28 @@ explain (costs off)
count(hundred), count(thousand), count(twothousand),
count(*)
from tenk1 group by grouping sets (unique1,twothousand,thousand,hundred,ten,four,two);
- QUERY PLAN
--------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
MixedAggregate
- Hash Key: two
- Hash Key: four
- Hash Key: ten
- Hash Key: hundred
- Group Key: unique1
- Sort Key: twothousand
- Group Key: twothousand
- Sort Key: thousand
- Group Key: thousand
+ Phase 3 using strategy "Sort":
+ Sort Key: thousand
+ Transition Function: int8inc_any(TRANS, two), int8inc_any(TRANS, four), int8inc_any(TRANS, ten), int8inc_any(TRANS, hundred), int8inc_any(TRANS, thousand), int8inc_any(TRANS, twothousand), int8inc(TRANS)
+ Sorted Group: thousand
+ Phase 2 using strategy "Sort":
+ Sort Key: twothousand
+ Transition Function: int8inc_any(TRANS, two), int8inc_any(TRANS, four), int8inc_any(TRANS, ten), int8inc_any(TRANS, hundred), int8inc_any(TRANS, thousand), int8inc_any(TRANS, twothousand), int8inc(TRANS)
+ Sorted Group: twothousand
+ Phase 1 using strategy "Sorted Input & Hash":
+ Transition Function: 5 * int8inc_any(TRANS, two), 5 * int8inc_any(TRANS, four), 5 * int8inc_any(TRANS, ten), 5 * int8inc_any(TRANS, hundred), 5 * int8inc_any(TRANS, thousand), 5 * int8inc_any(TRANS, twothousand), 5 * int8inc(TRANS)
+ Sorted Input Group: unique1
+ Hash Group: two
+ Hash Group: four
+ Hash Group: ten
+ Hash Group: hundred
-> Sort
Sort Key: unique1
-> Seq Scan on tenk1
-(13 rows)
+(19 rows)
explain (costs off)
select unique1,
@@ -1572,18 +1619,20 @@ explain (costs off)
count(hundred), count(thousand), count(twothousand),
count(*)
from tenk1 group by grouping sets (unique1,hundred,ten,four,two);
- QUERY PLAN
--------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
MixedAggregate
- Hash Key: two
- Hash Key: four
- Hash Key: ten
- Hash Key: hundred
- Group Key: unique1
+ Phase 1 using strategy "Sorted Input & Hash":
+ Transition Function: 5 * int8inc_any(TRANS, two), 5 * int8inc_any(TRANS, four), 5 * int8inc_any(TRANS, ten), 5 * int8inc_any(TRANS, hundred), 5 * int8inc_any(TRANS, thousand), 5 * int8inc_any(TRANS, twothousand), 5 * int8inc(TRANS)
+ Sorted Input Group: unique1
+ Hash Group: two
+ Hash Group: four
+ Hash Group: ten
+ Hash Group: hundred
-> Sort
Sort Key: unique1
-> Seq Scan on tenk1
-(9 rows)
+(11 rows)
set work_mem = '384kB';
explain (costs off)
@@ -1592,21 +1641,25 @@ explain (costs off)
count(hundred), count(thousand), count(twothousand),
count(*)
from tenk1 group by grouping sets (unique1,twothousand,thousand,hundred,ten,four,two);
- QUERY PLAN
--------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
MixedAggregate
- Hash Key: two
- Hash Key: four
- Hash Key: ten
- Hash Key: hundred
- Hash Key: thousand
- Group Key: unique1
- Sort Key: twothousand
- Group Key: twothousand
+ Phase 2 using strategy "Sort":
+ Sort Key: twothousand
+ Transition Function: int8inc_any(TRANS, two), int8inc_any(TRANS, four), int8inc_any(TRANS, ten), int8inc_any(TRANS, hundred), int8inc_any(TRANS, thousand), int8inc_any(TRANS, twothousand), int8inc(TRANS)
+ Sorted Group: twothousand
+ Phase 1 using strategy "Sorted Input & Hash":
+ Transition Function: 6 * int8inc_any(TRANS, two), 6 * int8inc_any(TRANS, four), 6 * int8inc_any(TRANS, ten), 6 * int8inc_any(TRANS, hundred), 6 * int8inc_any(TRANS, thousand), 6 * int8inc_any(TRANS, twothousand), 6 * int8inc(TRANS)
+ Sorted Input Group: unique1
+ Hash Group: two
+ Hash Group: four
+ Hash Group: ten
+ Hash Group: hundred
+ Hash Group: thousand
-> Sort
Sort Key: unique1
-> Seq Scan on tenk1
-(12 rows)
+(16 rows)
-- check collation-sensitive matching between grouping expressions
-- (similar to a check for aggregates, but there are additional code
diff --git a/src/test/regress/expected/inherit.out b/src/test/regress/expected/inherit.out
index 4b8351839a8..48d16bcee55 100644
--- a/src/test/regress/expected/inherit.out
+++ b/src/test/regress/expected/inherit.out
@@ -1435,10 +1435,13 @@ select * from matest0 order by 1-id;
(6 rows)
explain (verbose, costs off) select min(1-id) from matest0;
- QUERY PLAN
-----------------------------------------
+ QUERY PLAN
+---------------------------------------------------------------
Aggregate
Project: min((1 - matest0.id))
+ Phase 1 using strategy "All":
+ Transition Function: int4smaller(TRANS, (1 - matest0.id))
+ All Group
-> Append
-> Seq Scan on public.matest0
Project: matest0.id
@@ -1448,7 +1451,7 @@ explain (verbose, costs off) select min(1-id) from matest0;
Project: matest2.id
-> Seq Scan on public.matest3
Project: matest3.id
-(11 rows)
+(14 rows)
select min(1-id) from matest0;
min
diff --git a/src/test/regress/expected/join.out b/src/test/regress/expected/join.out
index 7f319a79938..1ddc4423888 100644
--- a/src/test/regress/expected/join.out
+++ b/src/test/regress/expected/join.out
@@ -6172,7 +6172,8 @@ where exists (select 1 from tenk1 t3
Hash Cond: (t3.thousand = t1.unique1)
-> HashAggregate
Project: t3.thousand, t3.tenthous
- Group Key: t3.thousand, t3.tenthous
+ Phase 0 using strategy "Hash":
+ Hash Group: t3.thousand, t3.tenthous
-> Index Only Scan using tenk1_thous_tenthous on public.tenk1 t3
Output: t3.thousand, t3.tenthous
-> Hash
@@ -6183,7 +6184,7 @@ where exists (select 1 from tenk1 t3
-> Index Only Scan using tenk1_hundred on public.tenk1 t2
Output: t2.hundred
Index Cond: (t2.hundred = t3.tenthous)
-(18 rows)
+(19 rows)
-- ... unless it actually is unique
create table j3 as select unique1, tenthous from onek;
diff --git a/src/test/regress/expected/limit.out b/src/test/regress/expected/limit.out
index 5b247e74b77..f9124feb866 100644
--- a/src/test/regress/expected/limit.out
+++ b/src/test/regress/expected/limit.out
@@ -489,10 +489,12 @@ select sum(tenthous) as s1, sum(tenthous) + random()*0 as s2
Output: (sum(tenthous)), (((sum(tenthous))::double precision + (random() * '0'::double precision))), thousand
-> GroupAggregate
Project: sum(tenthous), ((sum(tenthous))::double precision + (random() * '0'::double precision)), thousand
- Group Key: tenk1.thousand
+ Phase 1 using strategy "Sorted Input":
+ Transition Function: int4_sum(TRANS, tenthous)
+ Sorted Input Group: tenk1.thousand
-> Index Only Scan using tenk1_thous_tenthous on public.tenk1
Output: thousand, tenthous
-(7 rows)
+(9 rows)
select sum(tenthous) as s1, sum(tenthous) + random()*0 as s2
from tenk1 group by thousand order by thousand limit 3;
diff --git a/src/test/regress/expected/partition_aggregate.out b/src/test/regress/expected/partition_aggregate.out
index 10349ec29c4..ca2c92a406a 100644
--- a/src/test/regress/expected/partition_aggregate.out
+++ b/src/test/regress/expected/partition_aggregate.out
@@ -26,16 +26,16 @@ SELECT c, sum(a), avg(b), count(*), min(a), max(b) FROM pagg_tab GROUP BY c HAVI
Sort Key: pagg_tab_p1.c, (sum(pagg_tab_p1.a)), (avg(pagg_tab_p1.b))
-> Append
-> HashAggregate
- Group Key: pagg_tab_p1.c
Filter: (avg(pagg_tab_p1.d) < '15'::numeric)
+ Group Key: pagg_tab_p1.c
-> Seq Scan on pagg_tab_p1
-> HashAggregate
- Group Key: pagg_tab_p2.c
Filter: (avg(pagg_tab_p2.d) < '15'::numeric)
+ Group Key: pagg_tab_p2.c
-> Seq Scan on pagg_tab_p2
-> HashAggregate
- Group Key: pagg_tab_p3.c
Filter: (avg(pagg_tab_p3.d) < '15'::numeric)
+ Group Key: pagg_tab_p3.c
-> Seq Scan on pagg_tab_p3
(15 rows)
@@ -58,8 +58,8 @@ SELECT a, sum(b), avg(b), count(*), min(a), max(b) FROM pagg_tab GROUP BY a HAVI
Sort
Sort Key: pagg_tab_p1.a, (sum(pagg_tab_p1.b)), (avg(pagg_tab_p1.b))
-> Finalize HashAggregate
- Group Key: pagg_tab_p1.a
Filter: (avg(pagg_tab_p1.d) < '15'::numeric)
+ Group Key: pagg_tab_p1.a
-> Append
-> Partial HashAggregate
Group Key: pagg_tab_p1.a
@@ -180,20 +180,20 @@ SELECT c, sum(a), avg(b), count(*) FROM pagg_tab GROUP BY 1 HAVING avg(d) < 15 O
Sort Key: pagg_tab_p1.c, (sum(pagg_tab_p1.a)), (avg(pagg_tab_p1.b))
-> Append
-> GroupAggregate
- Group Key: pagg_tab_p1.c
Filter: (avg(pagg_tab_p1.d) < '15'::numeric)
+ Group Key: pagg_tab_p1.c
-> Sort
Sort Key: pagg_tab_p1.c
-> Seq Scan on pagg_tab_p1
-> GroupAggregate
- Group Key: pagg_tab_p2.c
Filter: (avg(pagg_tab_p2.d) < '15'::numeric)
+ Group Key: pagg_tab_p2.c
-> Sort
Sort Key: pagg_tab_p2.c
-> Seq Scan on pagg_tab_p2
-> GroupAggregate
- Group Key: pagg_tab_p3.c
Filter: (avg(pagg_tab_p3.d) < '15'::numeric)
+ Group Key: pagg_tab_p3.c
-> Sort
Sort Key: pagg_tab_p3.c
-> Seq Scan on pagg_tab_p3
@@ -218,8 +218,8 @@ SELECT a, sum(b), avg(b), count(*) FROM pagg_tab GROUP BY 1 HAVING avg(d) < 15 O
Sort
Sort Key: pagg_tab_p1.a, (sum(pagg_tab_p1.b)), (avg(pagg_tab_p1.b))
-> Finalize GroupAggregate
- Group Key: pagg_tab_p1.a
Filter: (avg(pagg_tab_p1.d) < '15'::numeric)
+ Group Key: pagg_tab_p1.a
-> Merge Append
Sort Key: pagg_tab_p1.a
-> Partial GroupAggregate
@@ -335,18 +335,20 @@ RESET enable_hashagg;
-- ROLLUP, partitionwise aggregation does not apply
EXPLAIN (COSTS OFF)
SELECT c, sum(a) FROM pagg_tab GROUP BY rollup(c) ORDER BY 1, 2;
- QUERY PLAN
--------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------
Sort
Sort Key: pagg_tab_p1.c, (sum(pagg_tab_p1.a))
-> MixedAggregate
- Hash Key: pagg_tab_p1.c
- Group Key: ()
+ Phase 1 using strategy "All & Hash":
+ Transition Function: 2 * int4_sum(TRANS, pagg_tab_p1.a)
+ All Group
+ Hash Group: pagg_tab_p1.c
-> Append
-> Seq Scan on pagg_tab_p1
-> Seq Scan on pagg_tab_p2
-> Seq Scan on pagg_tab_p3
-(9 rows)
+(11 rows)
-- ORDERED SET within the aggregate.
-- Full aggregation; since all the rows that belong to the same group come
@@ -522,8 +524,8 @@ SELECT t1.y, sum(t1.x), count(*) FROM pagg_tab1 t1, pagg_tab2 t2 WHERE t1.x = t2
Sort
Sort Key: t1.y, (sum(t1.x)), (count(*))
-> Finalize GroupAggregate
- Group Key: t1.y
Filter: (avg(t1.x) > '10'::numeric)
+ Group Key: t1.y
-> Merge Append
Sort Key: t1.y
-> Partial GroupAggregate
@@ -830,8 +832,8 @@ SELECT a, sum(b), avg(c), count(*) FROM pagg_tab_m GROUP BY a HAVING avg(c) < 22
Sort
Sort Key: pagg_tab_m_p1.a, (sum(pagg_tab_m_p1.b)), (avg(pagg_tab_m_p1.c))
-> Finalize HashAggregate
- Group Key: pagg_tab_m_p1.a
Filter: (avg(pagg_tab_m_p1.c) < '22'::numeric)
+ Group Key: pagg_tab_m_p1.a
-> Append
-> Partial HashAggregate
Group Key: pagg_tab_m_p1.a
@@ -864,16 +866,16 @@ SELECT a, sum(b), avg(c), count(*) FROM pagg_tab_m GROUP BY a, (a+b)/2 HAVING su
Sort Key: pagg_tab_m_p1.a, (sum(pagg_tab_m_p1.b)), (avg(pagg_tab_m_p1.c))
-> Append
-> HashAggregate
- Group Key: pagg_tab_m_p1.a, ((pagg_tab_m_p1.a + pagg_tab_m_p1.b) / 2)
Filter: (sum(pagg_tab_m_p1.b) < 50)
+ Group Key: pagg_tab_m_p1.a, ((pagg_tab_m_p1.a + pagg_tab_m_p1.b) / 2)
-> Seq Scan on pagg_tab_m_p1
-> HashAggregate
- Group Key: pagg_tab_m_p2.a, ((pagg_tab_m_p2.a + pagg_tab_m_p2.b) / 2)
Filter: (sum(pagg_tab_m_p2.b) < 50)
+ Group Key: pagg_tab_m_p2.a, ((pagg_tab_m_p2.a + pagg_tab_m_p2.b) / 2)
-> Seq Scan on pagg_tab_m_p2
-> HashAggregate
- Group Key: pagg_tab_m_p3.a, ((pagg_tab_m_p3.a + pagg_tab_m_p3.b) / 2)
Filter: (sum(pagg_tab_m_p3.b) < 50)
+ Group Key: pagg_tab_m_p3.a, ((pagg_tab_m_p3.a + pagg_tab_m_p3.b) / 2)
-> Seq Scan on pagg_tab_m_p3
(15 rows)
@@ -897,16 +899,16 @@ SELECT a, c, sum(b), avg(c), count(*) FROM pagg_tab_m GROUP BY (a+b)/2, 2, 1 HAV
Sort Key: pagg_tab_m_p1.a, pagg_tab_m_p1.c, (sum(pagg_tab_m_p1.b))
-> Append
-> HashAggregate
- Group Key: ((pagg_tab_m_p1.a + pagg_tab_m_p1.b) / 2), pagg_tab_m_p1.c, pagg_tab_m_p1.a
Filter: ((sum(pagg_tab_m_p1.b) = 50) AND (avg(pagg_tab_m_p1.c) > '25'::numeric))
+ Group Key: ((pagg_tab_m_p1.a + pagg_tab_m_p1.b) / 2), pagg_tab_m_p1.c, pagg_tab_m_p1.a
-> Seq Scan on pagg_tab_m_p1
-> HashAggregate
- Group Key: ((pagg_tab_m_p2.a + pagg_tab_m_p2.b) / 2), pagg_tab_m_p2.c, pagg_tab_m_p2.a
Filter: ((sum(pagg_tab_m_p2.b) = 50) AND (avg(pagg_tab_m_p2.c) > '25'::numeric))
+ Group Key: ((pagg_tab_m_p2.a + pagg_tab_m_p2.b) / 2), pagg_tab_m_p2.c, pagg_tab_m_p2.a
-> Seq Scan on pagg_tab_m_p2
-> HashAggregate
- Group Key: ((pagg_tab_m_p3.a + pagg_tab_m_p3.b) / 2), pagg_tab_m_p3.c, pagg_tab_m_p3.a
Filter: ((sum(pagg_tab_m_p3.b) = 50) AND (avg(pagg_tab_m_p3.c) > '25'::numeric))
+ Group Key: ((pagg_tab_m_p3.a + pagg_tab_m_p3.b) / 2), pagg_tab_m_p3.c, pagg_tab_m_p3.a
-> Seq Scan on pagg_tab_m_p3
(15 rows)
@@ -951,24 +953,24 @@ SELECT a, sum(b), array_agg(distinct c), count(*) FROM pagg_tab_ml GROUP BY a HA
Workers Planned: 2
-> Parallel Append
-> GroupAggregate
- Group Key: pagg_tab_ml_p2_s1.a
Filter: (avg(pagg_tab_ml_p2_s1.b) < '3'::numeric)
+ Group Key: pagg_tab_ml_p2_s1.a
-> Sort
Sort Key: pagg_tab_ml_p2_s1.a
-> Append
-> Seq Scan on pagg_tab_ml_p2_s1
-> Seq Scan on pagg_tab_ml_p2_s2
-> GroupAggregate
- Group Key: pagg_tab_ml_p3_s1.a
Filter: (avg(pagg_tab_ml_p3_s1.b) < '3'::numeric)
+ Group Key: pagg_tab_ml_p3_s1.a
-> Sort
Sort Key: pagg_tab_ml_p3_s1.a
-> Append
-> Seq Scan on pagg_tab_ml_p3_s1
-> Seq Scan on pagg_tab_ml_p3_s2
-> GroupAggregate
- Group Key: pagg_tab_ml_p1.a
Filter: (avg(pagg_tab_ml_p1.b) < '3'::numeric)
+ Group Key: pagg_tab_ml_p1.a
-> Sort
Sort Key: pagg_tab_ml_p1.a
-> Seq Scan on pagg_tab_ml_p1
@@ -997,24 +999,24 @@ SELECT a, sum(b), array_agg(distinct c), count(*) FROM pagg_tab_ml GROUP BY a HA
Workers Planned: 2
-> Parallel Append
-> GroupAggregate
- Group Key: pagg_tab_ml_p2_s1.a
Filter: (avg(pagg_tab_ml_p2_s1.b) < '3'::numeric)
+ Group Key: pagg_tab_ml_p2_s1.a
-> Sort
Sort Key: pagg_tab_ml_p2_s1.a
-> Append
-> Seq Scan on pagg_tab_ml_p2_s1
-> Seq Scan on pagg_tab_ml_p2_s2
-> GroupAggregate
- Group Key: pagg_tab_ml_p3_s1.a
Filter: (avg(pagg_tab_ml_p3_s1.b) < '3'::numeric)
+ Group Key: pagg_tab_ml_p3_s1.a
-> Sort
Sort Key: pagg_tab_ml_p3_s1.a
-> Append
-> Seq Scan on pagg_tab_ml_p3_s1
-> Seq Scan on pagg_tab_ml_p3_s2
-> GroupAggregate
- Group Key: pagg_tab_ml_p1.a
Filter: (avg(pagg_tab_ml_p1.b) < '3'::numeric)
+ Group Key: pagg_tab_ml_p1.a
-> Sort
Sort Key: pagg_tab_ml_p1.a
-> Seq Scan on pagg_tab_ml_p1
@@ -1031,12 +1033,12 @@ SELECT a, sum(b), count(*) FROM pagg_tab_ml GROUP BY a HAVING avg(b) < 3 ORDER B
Sort Key: pagg_tab_ml_p1.a, (sum(pagg_tab_ml_p1.b)), (count(*))
-> Append
-> HashAggregate
- Group Key: pagg_tab_ml_p1.a
Filter: (avg(pagg_tab_ml_p1.b) < '3'::numeric)
+ Group Key: pagg_tab_ml_p1.a
-> Seq Scan on pagg_tab_ml_p1
-> Finalize GroupAggregate
- Group Key: pagg_tab_ml_p2_s1.a
Filter: (avg(pagg_tab_ml_p2_s1.b) < '3'::numeric)
+ Group Key: pagg_tab_ml_p2_s1.a
-> Sort
Sort Key: pagg_tab_ml_p2_s1.a
-> Append
@@ -1047,8 +1049,8 @@ SELECT a, sum(b), count(*) FROM pagg_tab_ml GROUP BY a HAVING avg(b) < 3 ORDER B
Group Key: pagg_tab_ml_p2_s2.a
-> Seq Scan on pagg_tab_ml_p2_s2
-> Finalize GroupAggregate
- Group Key: pagg_tab_ml_p3_s1.a
Filter: (avg(pagg_tab_ml_p3_s1.b) < '3'::numeric)
+ Group Key: pagg_tab_ml_p3_s1.a
-> Sort
Sort Key: pagg_tab_ml_p3_s1.a
-> Append
@@ -1123,24 +1125,24 @@ SELECT a, sum(b), count(*) FROM pagg_tab_ml GROUP BY a, b, c HAVING avg(b) > 7 O
Sort Key: pagg_tab_ml_p1.a, (sum(pagg_tab_ml_p1.b)), (count(*))
-> Append
-> HashAggregate
- Group Key: pagg_tab_ml_p1.a, pagg_tab_ml_p1.b, pagg_tab_ml_p1.c
Filter: (avg(pagg_tab_ml_p1.b) > '7'::numeric)
+ Group Key: pagg_tab_ml_p1.a, pagg_tab_ml_p1.b, pagg_tab_ml_p1.c
-> Seq Scan on pagg_tab_ml_p1
-> HashAggregate
- Group Key: pagg_tab_ml_p2_s1.a, pagg_tab_ml_p2_s1.b, pagg_tab_ml_p2_s1.c
Filter: (avg(pagg_tab_ml_p2_s1.b) > '7'::numeric)
+ Group Key: pagg_tab_ml_p2_s1.a, pagg_tab_ml_p2_s1.b, pagg_tab_ml_p2_s1.c
-> Seq Scan on pagg_tab_ml_p2_s1
-> HashAggregate
- Group Key: pagg_tab_ml_p2_s2.a, pagg_tab_ml_p2_s2.b, pagg_tab_ml_p2_s2.c
Filter: (avg(pagg_tab_ml_p2_s2.b) > '7'::numeric)
+ Group Key: pagg_tab_ml_p2_s2.a, pagg_tab_ml_p2_s2.b, pagg_tab_ml_p2_s2.c
-> Seq Scan on pagg_tab_ml_p2_s2
-> HashAggregate
- Group Key: pagg_tab_ml_p3_s1.a, pagg_tab_ml_p3_s1.b, pagg_tab_ml_p3_s1.c
Filter: (avg(pagg_tab_ml_p3_s1.b) > '7'::numeric)
+ Group Key: pagg_tab_ml_p3_s1.a, pagg_tab_ml_p3_s1.b, pagg_tab_ml_p3_s1.c
-> Seq Scan on pagg_tab_ml_p3_s1
-> HashAggregate
- Group Key: pagg_tab_ml_p3_s2.a, pagg_tab_ml_p3_s2.b, pagg_tab_ml_p3_s2.c
Filter: (avg(pagg_tab_ml_p3_s2.b) > '7'::numeric)
+ Group Key: pagg_tab_ml_p3_s2.a, pagg_tab_ml_p3_s2.b, pagg_tab_ml_p3_s2.c
-> Seq Scan on pagg_tab_ml_p3_s2
(23 rows)
@@ -1175,8 +1177,8 @@ SELECT a, sum(b), count(*) FROM pagg_tab_ml GROUP BY a HAVING avg(b) < 3 ORDER B
Sort Key: pagg_tab_ml_p1.a, (sum(pagg_tab_ml_p1.b)), (count(*))
-> Append
-> Finalize GroupAggregate
- Group Key: pagg_tab_ml_p1.a
Filter: (avg(pagg_tab_ml_p1.b) < '3'::numeric)
+ Group Key: pagg_tab_ml_p1.a
-> Gather Merge
Workers Planned: 2
-> Sort
@@ -1185,8 +1187,8 @@ SELECT a, sum(b), count(*) FROM pagg_tab_ml GROUP BY a HAVING avg(b) < 3 ORDER B
Group Key: pagg_tab_ml_p1.a
-> Parallel Seq Scan on pagg_tab_ml_p1
-> Finalize GroupAggregate
- Group Key: pagg_tab_ml_p2_s1.a
Filter: (avg(pagg_tab_ml_p2_s1.b) < '3'::numeric)
+ Group Key: pagg_tab_ml_p2_s1.a
-> Gather Merge
Workers Planned: 2
-> Sort
@@ -1199,8 +1201,8 @@ SELECT a, sum(b), count(*) FROM pagg_tab_ml GROUP BY a HAVING avg(b) < 3 ORDER B
Group Key: pagg_tab_ml_p2_s2.a
-> Parallel Seq Scan on pagg_tab_ml_p2_s2
-> Finalize GroupAggregate
- Group Key: pagg_tab_ml_p3_s1.a
Filter: (avg(pagg_tab_ml_p3_s1.b) < '3'::numeric)
+ Group Key: pagg_tab_ml_p3_s1.a
-> Gather Merge
Workers Planned: 2
-> Sort
@@ -1281,24 +1283,24 @@ SELECT a, sum(b), count(*) FROM pagg_tab_ml GROUP BY a, b, c HAVING avg(b) > 7 O
Sort Key: pagg_tab_ml_p1.a, (sum(pagg_tab_ml_p1.b)), (count(*))
-> Parallel Append
-> HashAggregate
- Group Key: pagg_tab_ml_p1.a, pagg_tab_ml_p1.b, pagg_tab_ml_p1.c
Filter: (avg(pagg_tab_ml_p1.b) > '7'::numeric)
+ Group Key: pagg_tab_ml_p1.a, pagg_tab_ml_p1.b, pagg_tab_ml_p1.c
-> Seq Scan on pagg_tab_ml_p1
-> HashAggregate
- Group Key: pagg_tab_ml_p2_s1.a, pagg_tab_ml_p2_s1.b, pagg_tab_ml_p2_s1.c
Filter: (avg(pagg_tab_ml_p2_s1.b) > '7'::numeric)
+ Group Key: pagg_tab_ml_p2_s1.a, pagg_tab_ml_p2_s1.b, pagg_tab_ml_p2_s1.c
-> Seq Scan on pagg_tab_ml_p2_s1
-> HashAggregate
- Group Key: pagg_tab_ml_p2_s2.a, pagg_tab_ml_p2_s2.b, pagg_tab_ml_p2_s2.c
Filter: (avg(pagg_tab_ml_p2_s2.b) > '7'::numeric)
+ Group Key: pagg_tab_ml_p2_s2.a, pagg_tab_ml_p2_s2.b, pagg_tab_ml_p2_s2.c
-> Seq Scan on pagg_tab_ml_p2_s2
-> HashAggregate
- Group Key: pagg_tab_ml_p3_s1.a, pagg_tab_ml_p3_s1.b, pagg_tab_ml_p3_s1.c
Filter: (avg(pagg_tab_ml_p3_s1.b) > '7'::numeric)
+ Group Key: pagg_tab_ml_p3_s1.a, pagg_tab_ml_p3_s1.b, pagg_tab_ml_p3_s1.c
-> Seq Scan on pagg_tab_ml_p3_s1
-> HashAggregate
- Group Key: pagg_tab_ml_p3_s2.a, pagg_tab_ml_p3_s2.b, pagg_tab_ml_p3_s2.c
Filter: (avg(pagg_tab_ml_p3_s2.b) > '7'::numeric)
+ Group Key: pagg_tab_ml_p3_s2.a, pagg_tab_ml_p3_s2.b, pagg_tab_ml_p3_s2.c
-> Seq Scan on pagg_tab_ml_p3_s2
(25 rows)
@@ -1342,8 +1344,8 @@ SELECT x, sum(y), avg(y), count(*) FROM pagg_tab_para GROUP BY x HAVING avg(y) <
Sort
Sort Key: pagg_tab_para_p1.x, (sum(pagg_tab_para_p1.y)), (avg(pagg_tab_para_p1.y))
-> Finalize GroupAggregate
- Group Key: pagg_tab_para_p1.x
Filter: (avg(pagg_tab_para_p1.y) < '7'::numeric)
+ Group Key: pagg_tab_para_p1.x
-> Gather Merge
Workers Planned: 2
-> Sort
@@ -1379,8 +1381,8 @@ SELECT y, sum(x), avg(x), count(*) FROM pagg_tab_para GROUP BY y HAVING avg(x) <
Sort
Sort Key: pagg_tab_para_p1.y, (sum(pagg_tab_para_p1.x)), (avg(pagg_tab_para_p1.x))
-> Finalize GroupAggregate
- Group Key: pagg_tab_para_p1.y
Filter: (avg(pagg_tab_para_p1.x) < '12'::numeric)
+ Group Key: pagg_tab_para_p1.y
-> Gather Merge
Workers Planned: 2
-> Sort
@@ -1417,8 +1419,8 @@ SELECT x, sum(y), avg(y), count(*) FROM pagg_tab_para GROUP BY x HAVING avg(y) <
Sort
Sort Key: pagg_tab_para_p1.x, (sum(pagg_tab_para_p1.y)), (avg(pagg_tab_para_p1.y))
-> Finalize GroupAggregate
- Group Key: pagg_tab_para_p1.x
Filter: (avg(pagg_tab_para_p1.y) < '7'::numeric)
+ Group Key: pagg_tab_para_p1.x
-> Gather Merge
Workers Planned: 2
-> Sort
@@ -1451,8 +1453,8 @@ SELECT x, sum(y), avg(y), count(*) FROM pagg_tab_para GROUP BY x HAVING avg(y) <
Sort
Sort Key: pagg_tab_para_p1.x, (sum(pagg_tab_para_p1.y)), (avg(pagg_tab_para_p1.y))
-> Finalize GroupAggregate
- Group Key: pagg_tab_para_p1.x
Filter: (avg(pagg_tab_para_p1.y) < '7'::numeric)
+ Group Key: pagg_tab_para_p1.x
-> Gather Merge
Workers Planned: 2
-> Sort
@@ -1487,16 +1489,16 @@ SELECT x, sum(y), avg(y), count(*) FROM pagg_tab_para GROUP BY x HAVING avg(y) <
Sort Key: pagg_tab_para_p1.x, (sum(pagg_tab_para_p1.y)), (avg(pagg_tab_para_p1.y))
-> Append
-> HashAggregate
- Group Key: pagg_tab_para_p1.x
Filter: (avg(pagg_tab_para_p1.y) < '7'::numeric)
+ Group Key: pagg_tab_para_p1.x
-> Seq Scan on pagg_tab_para_p1
-> HashAggregate
- Group Key: pagg_tab_para_p2.x
Filter: (avg(pagg_tab_para_p2.y) < '7'::numeric)
+ Group Key: pagg_tab_para_p2.x
-> Seq Scan on pagg_tab_para_p2
-> HashAggregate
- Group Key: pagg_tab_para_p3.x
Filter: (avg(pagg_tab_para_p3.y) < '7'::numeric)
+ Group Key: pagg_tab_para_p3.x
-> Seq Scan on pagg_tab_para_p3
(15 rows)
diff --git a/src/test/regress/expected/select_distinct.out b/src/test/regress/expected/select_distinct.out
index fc93b33ee2b..e8e14292452 100644
--- a/src/test/regress/expected/select_distinct.out
+++ b/src/test/regress/expected/select_distinct.out
@@ -134,12 +134,16 @@ SELECT count(*) FROM
---------------------------------------------------------
Aggregate
Project: count(*)
+ Phase 1 using strategy "All":
+ Transition Function: int8inc(TRANS)
+ All Group
-> HashAggregate
Project: tenk1.two, tenk1.four, tenk1.two
- Group Key: tenk1.two, tenk1.four, tenk1.two
+ Phase 0 using strategy "Hash":
+ Hash Group: tenk1.two, tenk1.four, tenk1.two
-> Seq Scan on public.tenk1
Project: tenk1.two, tenk1.four, tenk1.two
-(7 rows)
+(11 rows)
SELECT count(*) FROM
(SELECT DISTINCT two, four, two FROM tenk1) ss;
diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out
index b5a7211fca0..1338857e5f6 100644
--- a/src/test/regress/expected/select_parallel.out
+++ b/src/test/regress/expected/select_parallel.out
@@ -980,6 +980,9 @@ explain (costs off, verbose)
----------------------------------------------------------------------------------------------
Aggregate
Project: count(*)
+ Phase 1 using strategy "All":
+ Transition Function: int8inc(TRANS)
+ All Group
-> Hash Semi Join
Hash Cond: ((a.unique1 = b.unique1) AND (a.two = (row_number() OVER (?))))
-> Gather
@@ -996,7 +999,7 @@ explain (costs off, verbose)
Workers Planned: 4
-> Parallel Index Only Scan using tenk1_unique1 on public.tenk1 b
Output: b.unique1
-(18 rows)
+(21 rows)
-- LIMIT/OFFSET within sub-selects can't be pushed to workers.
explain (costs off)
diff --git a/src/test/regress/expected/subselect.out b/src/test/regress/expected/subselect.out
index 90fe9fe9802..a51086a0254 100644
--- a/src/test/regress/expected/subselect.out
+++ b/src/test/regress/expected/subselect.out
@@ -979,10 +979,11 @@ select * from int4_tbl o where (f1, f1) in
Output: generate_series(1, 50), i.f1
-> HashAggregate
Project: i.f1
- Group Key: i.f1
+ Phase 0 using strategy "Hash":
+ Hash Group: i.f1
-> Seq Scan on public.int4_tbl i
Output: i.f1
-(19 rows)
+(20 rows)
select * from int4_tbl o where (f1, f1) in
(select f1, generate_series(1,50) / 10 g from int4_tbl i group by f1);
--
2.23.0.162.gf1d4a28250
Attachment: v1-0007-WIP-explain-Output-hash-keys-in-verbose-mode.patch (text/x-diff)
From 8fb91e4c830591ee1a663b49c4a263d916ebd390 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Thu, 26 Sep 2019 15:10:58 -0700
Subject: [PATCH v1 07/12] WIP: explain: Output hash keys in verbose mode.
Author:
Reviewed-By:
Discussion: https://postgr.es/m/
Backpatch:
---
src/backend/commands/explain.c | 29 ++++++-
src/test/regress/expected/join.out | 82 +++++++++++++++----
src/test/regress/expected/join_hash.out | 12 ++-
src/test/regress/expected/plpgsql.out | 2 +
src/test/regress/expected/select_parallel.out | 6 +-
5 files changed, 109 insertions(+), 22 deletions(-)
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 2f3bd8a459a..1f613d31376 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -96,7 +96,7 @@ static void show_tablesample(TableSampleClause *tsc, PlanState *planstate,
List *ancestors, ExplainState *es);
static void show_sort_info(SortState *sortstate, ExplainState *es);
static void show_agg_info(AggState *aggstate, List *ancestors, ExplainState *es);
-static void show_hash_info(HashState *hashstate, ExplainState *es);
+static void show_hash_info(HashState *hashstate, List *ancestors, ExplainState *es);
static void show_tidbitmap_info(BitmapHeapScanState *planstate,
ExplainState *es);
static void show_instrumentation_count(const char *qlabel, int which,
@@ -1863,6 +1863,17 @@ ExplainNode(PlanState *planstate, List *ancestors,
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 2,
planstate, es);
+ if (es->verbose)
+ {
+ ListCell *lc1, *lc2;
+
+ forboth(lc1, ((HashJoin *) plan)->hashkeys,
+ lc2, ((HashJoinState *) planstate)->hj_OuterHashKeys)
+ {
+ show_expression(lfirst(lc1), lfirst(lc2), "Outer Hash Key",
+ planstate, ancestors, true, es);
+ }
+ }
break;
case T_Agg:
show_upper_qual(plan->qual, planstate->qual, "Filter", planstate,
@@ -1903,7 +1914,7 @@ ExplainNode(PlanState *planstate, List *ancestors,
es);
break;
case T_Hash:
- show_hash_info(castNode(HashState, planstate), es);
+ show_hash_info(castNode(HashState, planstate), ancestors, es);
break;
default:
break;
@@ -3087,7 +3098,7 @@ show_agg_info(AggState *aggstate, List *ancestors, ExplainState *es)
* Show information on hash buckets/batches.
*/
static void
-show_hash_info(HashState *hashstate, ExplainState *es)
+show_hash_info(HashState *hashstate, List *ancestors, ExplainState *es)
{
HashInstrumentation hinstrument = {0};
@@ -3184,6 +3195,18 @@ show_hash_info(HashState *hashstate, ExplainState *es)
spacePeakKb);
}
}
+
+ if (es->verbose)
+ {
+ ListCell *lc1, *lc2;
+
+ forboth(lc1, ((Hash *) hashstate->ps.plan)->hashkeys,
+ lc2, hashstate->hashkeys)
+ {
+ show_expression(lfirst(lc1), (ExprState *) lfirst(lc2), "Hash Key",
+ &hashstate->ps, ancestors, true, es);
+ }
+ }
}
/*
diff --git a/src/test/regress/expected/join.out b/src/test/regress/expected/join.out
index 1ddc4423888..2ba48596622 100644
--- a/src/test/regress/expected/join.out
+++ b/src/test/regress/expected/join.out
@@ -3792,6 +3792,7 @@ select t1.* from
Hash Left Join
Project: t1.f1
Hash Cond: (i8.q2 = i4.f1)
+ Outer Hash Key: i8.q2
-> Nested Loop Left Join
Project: t1.f1, i8.q2
Join Filter: (t1.f1 = '***'::text)
@@ -3802,24 +3803,29 @@ select t1.* from
-> Hash Right Join
Project: i8.q2
Hash Cond: ((NULL::integer) = i8b1.q2)
+ Outer Hash Key: (NULL::integer)
-> Hash Join
Project: i8.q2, (NULL::integer)
Hash Cond: (i8.q1 = i8b2.q1)
+ Outer Hash Key: i8.q1
-> Seq Scan on public.int8_tbl i8
Output: i8.q1, i8.q2
-> Hash
Output: i8b2.q1, (NULL::integer)
+ Hash Key: i8b2.q1
-> Seq Scan on public.int8_tbl i8b2
Project: i8b2.q1, NULL::integer
-> Hash
Output: i8b1.q2
+ Hash Key: i8b1.q2
-> Seq Scan on public.int8_tbl i8b1
Project: i8b1.q2
-> Hash
Output: i4.f1
+ Hash Key: i4.f1
-> Seq Scan on public.int4_tbl i4
Output: i4.f1
-(30 rows)
+(36 rows)
select t1.* from
text_tbl t1
@@ -3853,6 +3859,7 @@ select t1.* from
Hash Left Join
Project: t1.f1
Hash Cond: (i8.q2 = i4.f1)
+ Outer Hash Key: i8.q2
-> Nested Loop Left Join
Project: t1.f1, i8.q2
Join Filter: (t1.f1 = '***'::text)
@@ -3863,9 +3870,11 @@ select t1.* from
-> Hash Right Join
Project: i8.q2
Hash Cond: ((NULL::integer) = i8b1.q2)
+ Outer Hash Key: (NULL::integer)
-> Hash Right Join
Project: i8.q2, (NULL::integer)
Hash Cond: (i8b2.q1 = i8.q1)
+ Outer Hash Key: i8b2.q1
-> Nested Loop
Project: i8b2.q1, NULL::integer
-> Seq Scan on public.int8_tbl i8b2
@@ -3874,17 +3883,20 @@ select t1.* from
-> Seq Scan on public.int4_tbl i4b2
-> Hash
Output: i8.q1, i8.q2
+ Hash Key: i8.q1
-> Seq Scan on public.int8_tbl i8
Output: i8.q1, i8.q2
-> Hash
Output: i8b1.q2
+ Hash Key: i8b1.q2
-> Seq Scan on public.int8_tbl i8b1
Project: i8b1.q2
-> Hash
Output: i4.f1
+ Hash Key: i4.f1
-> Seq Scan on public.int4_tbl i4
Output: i4.f1
-(34 rows)
+(40 rows)
select t1.* from
text_tbl t1
@@ -3919,6 +3931,7 @@ select t1.* from
Hash Left Join
Project: t1.f1
Hash Cond: (i8.q2 = i4.f1)
+ Outer Hash Key: i8.q2
-> Nested Loop Left Join
Project: t1.f1, i8.q2
Join Filter: (t1.f1 = '***'::text)
@@ -3929,31 +3942,38 @@ select t1.* from
-> Hash Right Join
Project: i8.q2
Hash Cond: ((NULL::integer) = i8b1.q2)
+ Outer Hash Key: (NULL::integer)
-> Hash Right Join
Project: i8.q2, (NULL::integer)
Hash Cond: (i8b2.q1 = i8.q1)
+ Outer Hash Key: i8b2.q1
-> Hash Join
Project: i8b2.q1, NULL::integer
Hash Cond: (i8b2.q1 = i4b2.f1)
+ Outer Hash Key: i8b2.q1
-> Seq Scan on public.int8_tbl i8b2
Output: i8b2.q1, i8b2.q2
-> Hash
Output: i4b2.f1
+ Hash Key: i4b2.f1
-> Seq Scan on public.int4_tbl i4b2
Output: i4b2.f1
-> Hash
Output: i8.q1, i8.q2
+ Hash Key: i8.q1
-> Seq Scan on public.int8_tbl i8
Output: i8.q1, i8.q2
-> Hash
Output: i8b1.q2
+ Hash Key: i8b1.q2
-> Seq Scan on public.int8_tbl i8b1
Project: i8b1.q2
-> Hash
Output: i4.f1
+ Hash Key: i4.f1
-> Seq Scan on public.int4_tbl i4
Output: i4.f1
-(37 rows)
+(45 rows)
select t1.* from
text_tbl t1
@@ -4177,6 +4197,7 @@ where ss1.c2 = 0;
-> Hash Join
Project: i41.f1, i42.f1, i8.q1, i8.q2, i43.f1, 42
Hash Cond: (i41.f1 = i42.f1)
+ Outer Hash Key: i41.f1
-> Nested Loop
Project: i8.q1, i8.q2, i43.f1, i41.f1
-> Nested Loop
@@ -4191,13 +4212,14 @@ where ss1.c2 = 0;
Output: i41.f1
-> Hash
Output: i42.f1
+ Hash Key: i42.f1
-> Seq Scan on public.int4_tbl i42
Output: i42.f1
-> Limit
Output: (i41.f1), (i8.q1), (i8.q2), (i42.f1), (i43.f1), ((42))
-> Seq Scan on public.text_tbl
Project: i41.f1, i8.q1, i8.q2, i42.f1, i43.f1, (42)
-(25 rows)
+(27 rows)
select ss2.* from
int4_tbl i41
@@ -5259,13 +5281,15 @@ select * from int4_tbl i left join
Hash Left Join
Project: i.f1, j.f1
Hash Cond: (i.f1 = j.f1)
+ Outer Hash Key: i.f1
-> Seq Scan on public.int4_tbl i
Output: i.f1
-> Hash
Output: j.f1
+ Hash Key: j.f1
-> Seq Scan on public.int2_tbl j
Output: j.f1
-(9 rows)
+(11 rows)
select * from int4_tbl i left join
lateral (select * from int2_tbl j where i.f1 = j.f1) k on true;
@@ -5317,14 +5341,16 @@ select * from int4_tbl a,
-> Hash Left Join
Project: b.f1, c.q1, c.q2
Hash Cond: (b.f1 = c.q1)
+ Outer Hash Key: b.f1
-> Seq Scan on public.int4_tbl b
Output: b.f1
-> Hash
Output: c.q1, c.q2
+ Hash Key: c.q1
-> Seq Scan on public.int8_tbl c
Output: c.q1, c.q2
Filter: (a.f1 = c.q2)
-(14 rows)
+(16 rows)
select * from int4_tbl a,
lateral (
@@ -5449,26 +5475,30 @@ select * from
-> Hash Right Join
Project: c.q1, c.q2, a.q1, a.q2, b.q1, d.q1, (COALESCE(b.q2, '42'::bigint)), (COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2))
Hash Cond: (d.q1 = c.q2)
+ Outer Hash Key: d.q1
-> Nested Loop
Project: a.q1, a.q2, b.q1, d.q1, (COALESCE(b.q2, '42'::bigint)), (COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2))
-> Hash Left Join
Project: a.q1, a.q2, b.q1, (COALESCE(b.q2, '42'::bigint))
Hash Cond: (a.q2 = b.q1)
+ Outer Hash Key: a.q2
-> Seq Scan on public.int8_tbl a
Output: a.q1, a.q2
-> Hash
Output: b.q1, (COALESCE(b.q2, '42'::bigint))
+ Hash Key: b.q1
-> Seq Scan on public.int8_tbl b
Project: b.q1, COALESCE(b.q2, '42'::bigint)
-> Seq Scan on public.int8_tbl d
Project: d.q1, COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2)
-> Hash
Output: c.q1, c.q2
+ Hash Key: c.q2
-> Seq Scan on public.int8_tbl c
Output: c.q1, c.q2
-> Result
Project: (COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2))
-(24 rows)
+(28 rows)
-- case that breaks the old ph_may_need optimization
explain (verbose, costs off)
@@ -5490,11 +5520,13 @@ select c.*,a.*,ss1.q1,ss2.q1,ss3.* from
-> Hash Right Join
Project: c.q1, c.q2, a.q1, a.q2, b.q1, d.q1, (COALESCE((COALESCE(b.q2, (b2.f1)::bigint)), d.q2))
Hash Cond: (d.q1 = c.q2)
+ Outer Hash Key: d.q1
-> Nested Loop
Project: a.q1, a.q2, b.q1, d.q1, (COALESCE((COALESCE(b.q2, (b2.f1)::bigint)), d.q2))
-> Hash Right Join
Project: a.q1, a.q2, b.q1, (COALESCE(b.q2, (b2.f1)::bigint))
Hash Cond: (b.q1 = a.q2)
+ Outer Hash Key: b.q1
-> Nested Loop
Project: b.q1, COALESCE(b.q2, (b2.f1)::bigint)
Join Filter: (b.q1 < b2.f1)
@@ -5506,19 +5538,21 @@ select c.*,a.*,ss1.q1,ss2.q1,ss3.* from
Output: b2.f1
-> Hash
Output: a.q1, a.q2
+ Hash Key: a.q2
-> Seq Scan on public.int8_tbl a
Output: a.q1, a.q2
-> Seq Scan on public.int8_tbl d
Project: d.q1, COALESCE((COALESCE(b.q2, (b2.f1)::bigint)), d.q2)
-> Hash
Output: c.q1, c.q2
+ Hash Key: c.q2
-> Seq Scan on public.int8_tbl c
Output: c.q1, c.q2
-> Materialize
Output: i.f1
-> Seq Scan on public.int4_tbl i
Output: i.f1
-(34 rows)
+(38 rows)
-- check processing of postponed quals (bug #9041)
explain (verbose, costs off)
@@ -5791,10 +5825,12 @@ select t1.b, ss.phv from join_ut1 t1 left join lateral
-> Hash Join
Project: t2.a, LEAST(t1.a, t2.a, t3.a)
Hash Cond: (t3.b = t2.a)
+ Outer Hash Key: t3.b
-> Seq Scan on public.join_ut1 t3
Output: t3.a, t3.b, t3.c
-> Hash
Output: t2.a
+ Hash Key: t2.a
-> Append
-> Seq Scan on public.join_pt1p1p1 t2
Project: t2.a
@@ -5802,7 +5838,7 @@ select t1.b, ss.phv from join_ut1 t1 left join lateral
-> Seq Scan on public.join_pt1p2 t2_1
Project: t2_1.a
Filter: (t1.a = t2_1.a)
-(21 rows)
+(23 rows)
select t1.b, ss.phv from join_ut1 t1 left join lateral
(select t2.a as t2a, t3.a t3a, least(t1.a, t2.a, t3.a) phv
@@ -5872,13 +5908,15 @@ select * from j1 inner join j2 on j1.id = j2.id;
Project: j1.id, j2.id
Inner Unique: true
Hash Cond: (j1.id = j2.id)
+ Outer Hash Key: j1.id
-> Seq Scan on public.j1
Output: j1.id
-> Hash
Output: j2.id
+ Hash Key: j2.id
-> Seq Scan on public.j2
Output: j2.id
-(10 rows)
+(12 rows)
-- ensure join is not unique when not an equi-join
explain (verbose, costs off)
@@ -5905,13 +5943,15 @@ select * from j1 inner join j3 on j1.id = j3.id;
Project: j1.id, j3.id
Inner Unique: true
Hash Cond: (j3.id = j1.id)
+ Outer Hash Key: j3.id
-> Seq Scan on public.j3
Output: j3.id
-> Hash
Output: j1.id
+ Hash Key: j1.id
-> Seq Scan on public.j1
Output: j1.id
-(10 rows)
+(12 rows)
-- ensure left join is marked as unique
explain (verbose, costs off)
@@ -5922,13 +5962,15 @@ select * from j1 left join j2 on j1.id = j2.id;
Project: j1.id, j2.id
Inner Unique: true
Hash Cond: (j1.id = j2.id)
+ Outer Hash Key: j1.id
-> Seq Scan on public.j1
Output: j1.id
-> Hash
Output: j2.id
+ Hash Key: j2.id
-> Seq Scan on public.j2
Output: j2.id
-(10 rows)
+(12 rows)
-- ensure right join is marked as unique
explain (verbose, costs off)
@@ -5939,13 +5981,15 @@ select * from j1 right join j2 on j1.id = j2.id;
Project: j1.id, j2.id
Inner Unique: true
Hash Cond: (j2.id = j1.id)
+ Outer Hash Key: j2.id
-> Seq Scan on public.j2
Output: j2.id
-> Hash
Output: j1.id
+ Hash Key: j1.id
-> Seq Scan on public.j1
Output: j1.id
-(10 rows)
+(12 rows)
-- ensure full join is marked as unique
explain (verbose, costs off)
@@ -5956,13 +6000,15 @@ select * from j1 full join j2 on j1.id = j2.id;
Project: j1.id, j2.id
Inner Unique: true
Hash Cond: (j1.id = j2.id)
+ Outer Hash Key: j1.id
-> Seq Scan on public.j1
Output: j1.id
-> Hash
Output: j2.id
+ Hash Key: j2.id
-> Seq Scan on public.j2
Output: j2.id
-(10 rows)
+(12 rows)
-- a clauseless (cross) join can't be unique
explain (verbose, costs off)
@@ -5988,13 +6034,15 @@ select * from j1 natural join j2;
Project: j1.id
Inner Unique: true
Hash Cond: (j1.id = j2.id)
+ Outer Hash Key: j1.id
-> Seq Scan on public.j1
Output: j1.id
-> Hash
Output: j2.id
+ Hash Key: j2.id
-> Seq Scan on public.j2
Output: j2.id
-(10 rows)
+(12 rows)
-- ensure a distinct clause allows the inner to become unique
explain (verbose, costs off)
@@ -6170,6 +6218,7 @@ where exists (select 1 from tenk1 t3
-> Hash Join
Project: t1.unique1, t3.tenthous
Hash Cond: (t3.thousand = t1.unique1)
+ Outer Hash Key: t3.thousand
-> HashAggregate
Project: t3.thousand, t3.tenthous
Phase 0 using strategy "Hash":
@@ -6178,13 +6227,14 @@ where exists (select 1 from tenk1 t3
Output: t3.thousand, t3.tenthous
-> Hash
Output: t1.unique1
+ Hash Key: t1.unique1
-> Index Only Scan using onek_unique1 on public.onek t1
Output: t1.unique1
Index Cond: (t1.unique1 < 1)
-> Index Only Scan using tenk1_hundred on public.tenk1 t2
Output: t2.hundred
Index Cond: (t2.hundred = t3.tenthous)
-(19 rows)
+(21 rows)
-- ... unless it actually is unique
create table j3 as select unique1, tenthous from onek;
diff --git a/src/test/regress/expected/join_hash.out b/src/test/regress/expected/join_hash.out
index 4e405ebbd76..379b3b1566e 100644
--- a/src/test/regress/expected/join_hash.out
+++ b/src/test/regress/expected/join_hash.out
@@ -919,6 +919,8 @@ WHERE
Project: hjtest_1.a, hjtest_2.a, (hjtest_1.tableoid)::regclass, (hjtest_2.tableoid)::regclass
Hash Cond: ((hjtest_1.id = (SubPlan 1)) AND ((SubPlan 2) = (SubPlan 3)))
Join Filter: (hjtest_1.a <> hjtest_2.b)
+ Outer Hash Key: hjtest_1.id
+ Outer Hash Key: (SubPlan 2)
-> Seq Scan on public.hjtest_1
Project: hjtest_1.a, hjtest_1.tableoid, hjtest_1.id, hjtest_1.b
Filter: ((SubPlan 4) < 50)
@@ -927,6 +929,8 @@ WHERE
Project: (hjtest_1.b * 5)
-> Hash
Output: hjtest_2.a, hjtest_2.tableoid, hjtest_2.id, hjtest_2.c, hjtest_2.b
+ Hash Key: (SubPlan 1)
+ Hash Key: (SubPlan 3)
-> Seq Scan on public.hjtest_2
Project: hjtest_2.a, hjtest_2.tableoid, hjtest_2.id, hjtest_2.c, hjtest_2.b
Filter: ((SubPlan 5) < 55)
@@ -943,7 +947,7 @@ WHERE
SubPlan 2
-> Result
Project: (hjtest_1.b * 5)
-(28 rows)
+(32 rows)
SELECT hjtest_1.a a1, hjtest_2.a a2,hjtest_1.tableoid::regclass t1, hjtest_2.tableoid::regclass t2
FROM hjtest_1, hjtest_2
@@ -973,6 +977,8 @@ WHERE
Project: hjtest_1.a, hjtest_2.a, (hjtest_1.tableoid)::regclass, (hjtest_2.tableoid)::regclass
Hash Cond: (((SubPlan 1) = hjtest_1.id) AND ((SubPlan 3) = (SubPlan 2)))
Join Filter: (hjtest_1.a <> hjtest_2.b)
+ Outer Hash Key: (SubPlan 1)
+ Outer Hash Key: (SubPlan 3)
-> Seq Scan on public.hjtest_2
Project: hjtest_2.a, hjtest_2.tableoid, hjtest_2.id, hjtest_2.c, hjtest_2.b
Filter: ((SubPlan 5) < 55)
@@ -981,6 +987,8 @@ WHERE
Project: (hjtest_2.c * 5)
-> Hash
Output: hjtest_1.a, hjtest_1.tableoid, hjtest_1.id, hjtest_1.b
+ Hash Key: hjtest_1.id
+ Hash Key: (SubPlan 2)
-> Seq Scan on public.hjtest_1
Project: hjtest_1.a, hjtest_1.tableoid, hjtest_1.id, hjtest_1.b
Filter: ((SubPlan 4) < 50)
@@ -997,7 +1005,7 @@ WHERE
SubPlan 3
-> Result
Project: (hjtest_2.c * 5)
-(28 rows)
+(32 rows)
SELECT hjtest_1.a a1, hjtest_2.a a2,hjtest_1.tableoid::regclass t1, hjtest_2.tableoid::regclass t2
FROM hjtest_2, hjtest_1
diff --git a/src/test/regress/expected/plpgsql.out b/src/test/regress/expected/plpgsql.out
index 92421090755..9d06f8467b2 100644
--- a/src/test/regress/expected/plpgsql.out
+++ b/src/test/regress/expected/plpgsql.out
@@ -5209,10 +5209,12 @@ UPDATE transition_table_base
INFO: Hash Full Join
Project: COALESCE(ot.id, nt.id), ot.val, nt.val
Hash Cond: (ot.id = nt.id)
+ Outer Hash Key: ot.id
-> Named Tuplestore Scan
Output: ot.id, ot.val
-> Hash
Output: nt.id, nt.val
+ Hash Key: nt.id
-> Named Tuplestore Scan
Output: nt.id, nt.val
diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out
index 1338857e5f6..ce35a137ea5 100644
--- a/src/test/regress/expected/select_parallel.out
+++ b/src/test/regress/expected/select_parallel.out
@@ -985,6 +985,8 @@ explain (costs off, verbose)
All Group
-> Hash Semi Join
Hash Cond: ((a.unique1 = b.unique1) AND (a.two = (row_number() OVER (?))))
+ Outer Hash Key: a.unique1
+ Outer Hash Key: a.two
-> Gather
Output: a.unique1, a.two
Workers Planned: 4
@@ -992,6 +994,8 @@ explain (costs off, verbose)
Project: a.unique1, a.two
-> Hash
Output: b.unique1, (row_number() OVER (?))
+ Hash Key: b.unique1
+ Hash Key: (row_number() OVER (?))
-> WindowAgg
Project: b.unique1, row_number() OVER (?)
-> Gather
@@ -999,7 +1003,7 @@ explain (costs off, verbose)
Workers Planned: 4
-> Parallel Index Only Scan using tenk1_unique1 on public.tenk1 b
Output: b.unique1
-(21 rows)
+(25 rows)
-- LIMIT/OFFSET within sub-selects can't be pushed to workers.
explain (costs off)
--
2.23.0.162.gf1d4a28250
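To illustrate what the tests below exercise: with these patches applied, the proposed JIT_DETAILS EXPLAIN option annotates individual expressions with whether they were JIT compiled, and whether scan/outer/inner tuple deforming was. A minimal session sketch (assumes a server built with JIT support and these patches; the option name and annotation format are taken from the attached patches):

```sql
-- force JIT regardless of plan cost, so the output is stable
SET jit = on;
SET jit_above_cost = 0;

EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS)
SELECT data FROM jittest_simple;
-- per-expression annotations then appear in the plan, e.g.:
--   Project: data; JIT-Expr: evalexpr_0_0, JIT-Deform-Scan: deform_0_1
```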
v1-0008-jit-Add-tests.patch
From 91b6df2f8c64436ac341d9a070f85a4b38416bf0 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Thu, 26 Sep 2019 22:31:11 -0700
Subject: [PATCH v1 08/12] jit: Add tests.
Author:
Reviewed-By:
Discussion: https://postgr.es/m/
Backpatch:
---
src/test/regress/expected/jit.out | 491 ++++++++++++++++++++++++++++
src/test/regress/expected/jit_0.out | 5 +
src/test/regress/parallel_schedule | 2 +-
src/test/regress/sql/jit.sql | 168 ++++++++++
4 files changed, 665 insertions(+), 1 deletion(-)
create mode 100644 src/test/regress/expected/jit.out
create mode 100644 src/test/regress/expected/jit_0.out
create mode 100644 src/test/regress/sql/jit.sql
diff --git a/src/test/regress/expected/jit.out b/src/test/regress/expected/jit.out
new file mode 100644
index 00000000000..64690415a4b
--- /dev/null
+++ b/src/test/regress/expected/jit.out
@@ -0,0 +1,491 @@
+/* skip test if JIT is not available */
+SELECT NOT (pg_jit_available() AND current_setting('jit')::bool)
+ AS skip_test \gset
+\if :skip_test
+\quit
+\endif
+-- start with a known baseline
+set jit_expressions = true;
+set jit_tuple_deforming = true;
+-- to reliably test, despite costs varying between platforms
+set jit_above_cost = 0;
+-- to make the bulk of the test cheaper
+set jit_optimize_above_cost = -1;
+set jit_inline_above_cost = -1;
+CREATE TABLE jittest_simple(id serial primary key, data text);
+INSERT INTO jittest_simple(data) VALUES('row1');
+INSERT INTO jittest_simple(data) VALUES('row2');
+-- verify that a simple relation-less query can be JITed
+BEGIN;
+SET LOCAL jit = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT txid_current() = txid_current();
+ QUERY PLAN
+---------------------------------------------------------------
+ Result
+ Project: (txid_current() = txid_current()); JIT-Expr: false
+(2 rows)
+
+SELECT txid_current() = txid_current();
+ ?column?
+----------
+ t
+(1 row)
+
+COMMIT;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT txid_current() = txid_current();
+ QUERY PLAN
+---------------------------------------------------------------------------------
+ Result
+ Project: (txid_current() = txid_current()); JIT-Expr: evalexpr_0_0
+ JIT:
+ Functions: 1 (1 for expression evaluation)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(5 rows)
+
+SELECT txid_current() = txid_current();
+ ?column?
+----------
+ t
+(1 row)
+
+-- check that tuple deforming for a plain seqscan is JITed when projecting
+BEGIN;
+SET LOCAL jit = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT data FROM jittest_simple;
+ QUERY PLAN
+-----------------------------------
+ Seq Scan on public.jittest_simple
+ Project: data; JIT-Expr: false
+(2 rows)
+
+SELECT data FROM jittest_simple;
+ data
+------
+ row1
+ row2
+(2 rows)
+
+COMMIT;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT data FROM jittest_simple;
+ QUERY PLAN
+---------------------------------------------------------------------------------
+ Seq Scan on public.jittest_simple
+ Project: data; JIT-Expr: evalexpr_0_0, JIT-Deform-Scan: deform_0_1
+ JIT:
+ Functions: 2 (1 for expression evaluation, 1 for tuple deforming)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(5 rows)
+
+SELECT data FROM jittest_simple;
+ data
+------
+ row1
+ row2
+(2 rows)
+
+-- unfortunately, the physical tlist optimization may currently prevent
+-- JITed tuple deforming from taking effect
+BEGIN;
+SET LOCAL jit = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT * FROM jittest_simple;
+ QUERY PLAN
+-----------------------------------
+ Seq Scan on public.jittest_simple
+ Output: id, data
+(2 rows)
+
+SELECT * FROM jittest_simple;
+ id | data
+----+------
+ 1 | row1
+ 2 | row2
+(2 rows)
+
+COMMIT;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT * FROM jittest_simple;
+ QUERY PLAN
+-----------------------------------
+ Seq Scan on public.jittest_simple
+ Output: id, data
+(2 rows)
+
+SELECT * FROM jittest_simple;
+ id | data
+----+------
+ 1 | row1
+ 2 | row2
+(2 rows)
+
+-- check that tuple deforming on wide tables works
+BEGIN;
+SET LOCAL jit_tuple_deforming = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT firstc, lastc FROM extra_wide_table;
+ QUERY PLAN
+----------------------------------------------------------------------------------
+ Seq Scan on public.extra_wide_table
+ Project: firstc, lastc; JIT-Expr: evalexpr_0_0, JIT-Deform-Scan: false
+ JIT:
+ Functions: 1 (1 for expression evaluation)
+ Options: Inlining false, Optimization false, Expressions true, Deforming false
+(5 rows)
+
+SELECT firstc, lastc FROM extra_wide_table;
+ firstc | lastc
+-----------+----------
+ first col | last col
+(1 row)
+
+COMMIT;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT firstc, lastc FROM extra_wide_table;
+ QUERY PLAN
+---------------------------------------------------------------------------------
+ Seq Scan on public.extra_wide_table
+ Project: firstc, lastc; JIT-Expr: evalexpr_0_0, JIT-Deform-Scan: deform_0_1
+ JIT:
+ Functions: 2 (1 for expression evaluation, 1 for tuple deforming)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(5 rows)
+
+SELECT firstc, lastc FROM extra_wide_table;
+ firstc | lastc
+-----------+----------
+ first col | last col
+(1 row)
+
+-----
+-- test costing
+-----
+-- don't perform JIT compilation unless worthwhile
+BEGIN;
+SET LOCAL jit_above_cost = 8000000000;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+ QUERY PLAN
+--------------------------------------
+ Seq Scan on public.jittest_simple
+ Project: tableoid; JIT-Expr: false
+(2 rows)
+
+SET LOCAL enable_seqscan = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+ QUERY PLAN
+---------------------------------------------------------------------------------
+ Seq Scan on public.jittest_simple
+ Project: tableoid; JIT-Expr: evalexpr_0_0
+ JIT:
+ Functions: 1 (1 for expression evaluation)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(5 rows)
+
+COMMIT;
+-- optimize once expensive enough
+BEGIN;
+SET LOCAL jit_above_cost = 0;
+SET LOCAL jit_optimize_above_cost = 8000000000;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+ QUERY PLAN
+---------------------------------------------------------------------------------
+ Seq Scan on public.jittest_simple
+ Project: tableoid; JIT-Expr: evalexpr_0_0
+ JIT:
+ Functions: 1 (1 for expression evaluation)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(5 rows)
+
+SET LOCAL enable_seqscan = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+ QUERY PLAN
+--------------------------------------------------------------------------------
+ Seq Scan on public.jittest_simple
+ Project: tableoid; JIT-Expr: evalexpr_0_0
+ JIT:
+ Functions: 1 (1 for expression evaluation)
+ Options: Inlining false, Optimization true, Expressions true, Deforming true
+(5 rows)
+
+COMMIT;
+-- behave sanely if optimization cost is below general JIT costs
+BEGIN;
+SET LOCAL jit_above_cost = 8000000000;
+SET LOCAL jit_optimize_above_cost = 0;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+ QUERY PLAN
+--------------------------------------
+ Seq Scan on public.jittest_simple
+ Project: tableoid; JIT-Expr: false
+(2 rows)
+
+SET LOCAL enable_seqscan = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+ QUERY PLAN
+--------------------------------------------------------------------------------
+ Seq Scan on public.jittest_simple
+ Project: tableoid; JIT-Expr: evalexpr_0_0
+ JIT:
+ Functions: 1 (1 for expression evaluation)
+ Options: Inlining false, Optimization true, Expressions true, Deforming true
+(5 rows)
+
+COMMIT;
+-- perform inlining once expensive enough
+BEGIN;
+SET LOCAL jit_above_cost = 0;
+SET LOCAL jit_inline_above_cost = 8000000000;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+ QUERY PLAN
+---------------------------------------------------------------------------------
+ Seq Scan on public.jittest_simple
+ Project: tableoid; JIT-Expr: evalexpr_0_0
+ JIT:
+ Functions: 1 (1 for expression evaluation)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(5 rows)
+
+SET LOCAL enable_seqscan = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+ QUERY PLAN
+--------------------------------------------------------------------------------
+ Seq Scan on public.jittest_simple
+ Project: tableoid; JIT-Expr: evalexpr_0_0
+ JIT:
+ Functions: 1 (1 for expression evaluation)
+ Options: Inlining true, Optimization false, Expressions true, Deforming true
+(5 rows)
+
+COMMIT;
+-- perform inlining and optimization once expensive enough
+BEGIN;
+SET LOCAL jit_above_cost = 0;
+SET LOCAL jit_inline_above_cost = 8000000000;
+SET LOCAL jit_optimize_above_cost = 8000000000;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+ QUERY PLAN
+---------------------------------------------------------------------------------
+ Seq Scan on public.jittest_simple
+ Project: tableoid; JIT-Expr: evalexpr_0_0
+ JIT:
+ Functions: 1 (1 for expression evaluation)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(5 rows)
+
+SET LOCAL enable_seqscan = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+ QUERY PLAN
+-------------------------------------------------------------------------------
+ Seq Scan on public.jittest_simple
+ Project: tableoid; JIT-Expr: evalexpr_0_0
+ JIT:
+ Functions: 1 (1 for expression evaluation)
+ Options: Inlining true, Optimization true, Expressions true, Deforming true
+(5 rows)
+
+COMMIT;
+-- check that inner/outer tuple deforming can be inferred for upper nodes, join case
+BEGIN;
+SET LOCAL enable_hashjoin = true;
+SET LOCAL enable_mergejoin = false;
+SET LOCAL enable_nestloop = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT a.data || b.data FROM jittest_simple a JOIN jittest_simple b USING(id);
+ QUERY PLAN
+-------------------------------------------------------------------------------------------------------------------
+ Hash Join
+ Project: (a.data || b.data); JIT-Expr: evalexpr_0_3, JIT-Deform-Outer: deform_0_5, JIT-Deform-Inner: deform_0_4
+ Inner Unique: true
+ Hash Cond: (a.id = b.id); JIT-Expr: evalexpr_0_6, JIT-Deform-Outer: deform_0_8, JIT-Deform-Inner: deform_0_7
+ Outer Hash Key: a.id; JIT-Expr: evalexpr_0_9, JIT-Deform-Outer: deform_0_10
+ -> Seq Scan on public.jittest_simple a
+ Output: a.id, a.data
+ -> Hash
+ Output: b.data, b.id
+ Hash Key: b.id; JIT-Expr: evalexpr_0_2, JIT-Deform-Outer: false
+ -> Seq Scan on public.jittest_simple b
+ Project: b.data, b.id; JIT-Expr: evalexpr_0_0, JIT-Deform-Scan: deform_0_1
+ JIT:
+ Functions: 11 (5 for expression evaluation, 6 for tuple deforming)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(15 rows)
+
+SELECT a.data || b.data FROM jittest_simple a JOIN jittest_simple b USING(id);
+ ?column?
+----------
+ row1row1
+ row2row2
+(2 rows)
+
+COMMIT;
+BEGIN;
+SET LOCAL enable_hashjoin = false;
+SET LOCAL enable_mergejoin = true;
+SET LOCAL enable_nestloop = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT a.data || b.data FROM jittest_simple a JOIN jittest_simple b USING(id);
+ QUERY PLAN
+-------------------------------------------------------------------------------------------------------------------
+ Merge Join
+ Project: (a.data || b.data); JIT-Expr: evalexpr_0_0, JIT-Deform-Outer: deform_0_2, JIT-Deform-Inner: deform_0_1
+ Inner Unique: true
+ Merge Cond: (a.id = b.id)
+ -> Index Scan using jittest_simple_pkey on public.jittest_simple a
+ Output: a.id, a.data
+ -> Index Scan using jittest_simple_pkey on public.jittest_simple b
+ Output: b.id, b.data
+ JIT:
+ Functions: 7 (3 for expression evaluation, 4 for tuple deforming)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(11 rows)
+
+SELECT a.data || b.data FROM jittest_simple a JOIN jittest_simple b USING(id);
+ ?column?
+----------
+ row1row1
+ row2row2
+(2 rows)
+
+COMMIT;
+BEGIN;
+SET LOCAL enable_hashjoin = false;
+SET LOCAL enable_mergejoin = false;
+SET LOCAL enable_nestloop = true;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT a.data || b.data FROM jittest_simple a JOIN jittest_simple b USING(id);
+ QUERY PLAN
+-------------------------------------------------------------------------------------------------------------------
+ Nested Loop
+ Project: (a.data || b.data); JIT-Expr: evalexpr_0_2, JIT-Deform-Outer: deform_0_4, JIT-Deform-Inner: deform_0_3
+ Inner Unique: true
+ -> Seq Scan on public.jittest_simple a
+ Output: a.id, a.data
+ -> Index Scan using jittest_simple_pkey on public.jittest_simple b
+ Output: b.id, b.data
+ Index Cond: (b.id = a.id); JIT-Expr: evalexpr_0_0, JIT-Deform-Scan: deform_0_1
+ JIT:
+ Functions: 5 (2 for expression evaluation, 3 for tuple deforming)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(11 rows)
+
+SELECT a.data || b.data FROM jittest_simple a JOIN jittest_simple b USING(id);
+ ?column?
+----------
+ row1row1
+ row2row2
+(2 rows)
+
+COMMIT;
+-- check that inner/outer tuple deforming can be inferred for upper nodes, agg case
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT count(*), count(data), string_agg(data, ':') FROM jittest_simple;
+ QUERY PLAN
+----------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Aggregate
+ Project: count(*), count(data), string_agg(data, ':'::text); JIT-Expr: evalexpr_0_0
+ Phase 1 using strategy "All":
+ Transition Function: int8inc(TRANS), int8inc_any(TRANS, data), string_agg_transfn(TRANS, data, ':'::text); JIT-Expr: evalexpr_0_1, JIT-Deform-Outer: false
+ All Group
+ -> Seq Scan on public.jittest_simple
+ Output: id, data
+ JIT:
+ Functions: 2 (2 for expression evaluation)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(10 rows)
+
+SELECT count(*), count(data), string_agg(data, ':') FROM jittest_simple;
+ count | count | string_agg
+-------+-------+------------
+ 2 | 2 | row1:row2
+(1 row)
+
+-- Check that the equality hash-table function in a hash-aggregate can
+-- be accelerated.
+--
+-- XXX: Unfortunately this is currently broken
+BEGIN;
+SET LOCAL enable_hashagg = true;
+SET LOCAL enable_sort = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT data, string_agg(id::text, ', ') FROM jittest_simple GROUP BY data;
+ QUERY PLAN
+-----------------------------------------------------------------------------------------------------------------------------
+ HashAggregate
+ Project: data, string_agg((id)::text, ', '::text); JIT-Expr: evalexpr_0_0, JIT-Deform-Outer: false
+ Phase 0 using strategy "Hash":
+ Transition Function: string_agg_transfn(TRANS, (id)::text, ', '::text); JIT-Expr: evalexpr_0_1, JIT-Deform-Outer: false
+ Hash Group: jittest_simple.data; JIT-Expr: false
+ -> Seq Scan on public.jittest_simple
+ Output: id, data
+ JIT:
+ Functions: 2 (2 for expression evaluation)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(10 rows)
+
+SELECT data, string_agg(id::text, ', ') FROM jittest_simple GROUP BY data;
+ data | string_agg
+------+------------
+ row1 | 1
+ row2 | 2
+(2 rows)
+
+END;
+-- Unfortunately, for sort-based aggregates the group comparison
+-- function currently cannot be JITed
+BEGIN;
+SET LOCAL enable_hashagg = false;
+SET LOCAL enable_sort = true;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT data, string_agg(id::text, ', ') FROM jittest_simple GROUP BY data;
+ QUERY PLAN
+-----------------------------------------------------------------------------------------------------------------------------
+ GroupAggregate
+ Project: data, string_agg((id)::text, ', '::text); JIT-Expr: evalexpr_0_2, JIT-Deform-Outer: false
+ Phase 1 using strategy "Sorted Input":
+ Transition Function: string_agg_transfn(TRANS, (id)::text, ', '::text); JIT-Expr: evalexpr_0_4, JIT-Deform-Outer: false
+ Sorted Input Group: jittest_simple.data; JIT-Expr: evalexpr_0_3, JIT-Deform-Outer: false, JIT-Deform-Inner: false
+ -> Sort
+ Output: data, id
+ Sort Key: jittest_simple.data
+ -> Seq Scan on public.jittest_simple
+ Project: data, id; JIT-Expr: evalexpr_0_0, JIT-Deform-Scan: deform_0_1
+ JIT:
+ Functions: 5 (4 for expression evaluation, 1 for tuple deforming)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(13 rows)
+
+SELECT data, string_agg(id::text, ', ') FROM jittest_simple GROUP BY data;
+ data | string_agg
+------+------------
+ row1 | 1
+ row2 | 2
+(2 rows)
+
+END;
+-- check that EXPLAIN ANALYZE output is reproducible with the right options
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS, ANALYZE, TIMING OFF, SUMMARY OFF) SELECT tableoid FROM jittest_simple;
+ QUERY PLAN
+---------------------------------------------------------------------------------
+ Seq Scan on public.jittest_simple (actual rows=2 loops=1)
+ Project: tableoid; JIT-Expr: evalexpr_0_0
+ JIT:
+ Functions: 1 (1 for expression evaluation)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(5 rows)
+
+DROP TABLE jittest_simple;
diff --git a/src/test/regress/expected/jit_0.out b/src/test/regress/expected/jit_0.out
new file mode 100644
index 00000000000..9812cb33752
--- /dev/null
+++ b/src/test/regress/expected/jit_0.out
@@ -0,0 +1,5 @@
+/* skip test if JIT is not available */
+SELECT NOT (pg_jit_available() AND current_setting('jit')::bool)
+ AS skip_test \gset
+\if :skip_test
+\quit
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index fc0f14122bb..c1c3dd3af8b 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -78,7 +78,7 @@ test: brin gin gist spgist privileges init_privs security_label collate matview
# ----------
# Another group of parallel tests
# ----------
-test: create_table_like alter_generic alter_operator misc async dbsize misc_functions sysviews tsrf tidscan collate.icu.utf8
+test: create_table_like alter_generic alter_operator misc async dbsize misc_functions sysviews tsrf tidscan collate.icu.utf8 jit
# rules cannot run concurrently with any test that creates
# a view or rule in the public schema
diff --git a/src/test/regress/sql/jit.sql b/src/test/regress/sql/jit.sql
new file mode 100644
index 00000000000..f3b9a352cf1
--- /dev/null
+++ b/src/test/regress/sql/jit.sql
@@ -0,0 +1,168 @@
+/* skip test if JIT is not available */
+SELECT NOT (pg_jit_available() AND current_setting('jit')::bool)
+ AS skip_test \gset
+\if :skip_test
+\quit
+\endif
+
+-- start with a known baseline
+set jit_expressions = true;
+set jit_tuple_deforming = true;
+-- to reliably test, despite costs varying between platforms
+set jit_above_cost = 0;
+-- to make the bulk of the test cheaper
+set jit_optimize_above_cost = -1;
+set jit_inline_above_cost = -1;
+
+CREATE TABLE jittest_simple(id serial primary key, data text);
+INSERT INTO jittest_simple(data) VALUES('row1');
+INSERT INTO jittest_simple(data) VALUES('row2');
+
+-- verify that a simple relation-less query can be JITed
+BEGIN;
+SET LOCAL jit = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT txid_current() = txid_current();
+SELECT txid_current() = txid_current();
+COMMIT;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT txid_current() = txid_current();
+SELECT txid_current() = txid_current();
+
+
+-- verify that tuple deforming for a plain seqscan is JITed when projecting
+BEGIN;
+SET LOCAL jit = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT data FROM jittest_simple;
+SELECT data FROM jittest_simple;
+COMMIT;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT data FROM jittest_simple;
+SELECT data FROM jittest_simple;
+
+-- unfortunately currently the physical tlist optimization may prevent
+-- JITed tuple deforming from taking effect
+BEGIN;
+SET LOCAL jit = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT * FROM jittest_simple;
+SELECT * FROM jittest_simple;
+COMMIT;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT * FROM jittest_simple;
+SELECT * FROM jittest_simple;
+
+-- check that tuple deforming on wide tables works
+BEGIN;
+SET LOCAL jit_tuple_deforming = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT firstc, lastc FROM extra_wide_table;
+SELECT firstc, lastc FROM extra_wide_table;
+COMMIT;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT firstc, lastc FROM extra_wide_table;
+SELECT firstc, lastc FROM extra_wide_table;
+
+-----
+-- test costing
+-----
+
+-- don't perform JIT compilation unless worthwhile
+BEGIN;
+SET LOCAL jit_above_cost = 8000000000;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+SET LOCAL enable_seqscan = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+COMMIT;
+
+-- optimize once expensive enough
+BEGIN;
+SET LOCAL jit_above_cost = 0;
+SET LOCAL jit_optimize_above_cost = 8000000000;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+SET LOCAL enable_seqscan = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+COMMIT;
+
+-- behave sanely if optimization cost is below general JIT costs
+BEGIN;
+SET LOCAL jit_above_cost = 8000000000;
+SET LOCAL jit_optimize_above_cost = 0;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+SET LOCAL enable_seqscan = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+COMMIT;
+
+-- perform inlining once expensive enough
+BEGIN;
+SET LOCAL jit_above_cost = 0;
+SET LOCAL jit_inline_above_cost = 8000000000;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+SET LOCAL enable_seqscan = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+COMMIT;
+
+
+-- perform inlining and optimization once expensive enough
+BEGIN;
+SET LOCAL jit_above_cost = 0;
+SET LOCAL jit_inline_above_cost = 8000000000;
+SET LOCAL jit_optimize_above_cost = 8000000000;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+SET LOCAL enable_seqscan = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+COMMIT;
+
+-- check that inner/outer tuple deforming can be inferred for upper nodes, join case
+BEGIN;
+SET LOCAL enable_hashjoin = true;
+SET LOCAL enable_mergejoin = false;
+SET LOCAL enable_nestloop = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT a.data || b.data FROM jittest_simple a JOIN jittest_simple b USING(id);
+SELECT a.data || b.data FROM jittest_simple a JOIN jittest_simple b USING(id);
+COMMIT;
+BEGIN;
+SET LOCAL enable_hashjoin = false;
+SET LOCAL enable_mergejoin = true;
+SET LOCAL enable_nestloop = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT a.data || b.data FROM jittest_simple a JOIN jittest_simple b USING(id);
+SELECT a.data || b.data FROM jittest_simple a JOIN jittest_simple b USING(id);
+COMMIT;
+BEGIN;
+SET LOCAL enable_hashjoin = false;
+SET LOCAL enable_mergejoin = false;
+SET LOCAL enable_nestloop = true;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT a.data || b.data FROM jittest_simple a JOIN jittest_simple b USING(id);
+SELECT a.data || b.data FROM jittest_simple a JOIN jittest_simple b USING(id);
+COMMIT;
+
+-- check that inner/outer tuple deforming can be inferred for upper nodes, agg case
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT count(*), count(data), string_agg(data, ':') FROM jittest_simple;
+SELECT count(*), count(data), string_agg(data, ':') FROM jittest_simple;
+
+-- Check that the equality hash-table function in a hash-aggregate can
+-- be accelerated.
+--
+-- XXX: Unfortunately this is currently broken
+BEGIN;
+SET LOCAL enable_hashagg = true;
+SET LOCAL enable_sort = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT data, string_agg(id::text, ', ') FROM jittest_simple GROUP BY data;
+SELECT data, string_agg(id::text, ', ') FROM jittest_simple GROUP BY data;
+END;
+
+-- Unfortunately, for sort-based aggregates the group comparison
+-- function cannot currently be JITed
+BEGIN;
+SET LOCAL enable_hashagg = false;
+SET LOCAL enable_sort = true;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT data, string_agg(id::text, ', ') FROM jittest_simple GROUP BY data;
+SELECT data, string_agg(id::text, ', ') FROM jittest_simple GROUP BY data;
+END;
+
+-- check that EXPLAIN ANALYZE output is reproducible with the right options
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS, ANALYZE, TIMING OFF, SUMMARY OFF) SELECT tableoid FROM jittest_simple;
+
+DROP TABLE jittest_simple;
--
2.23.0.162.gf1d4a28250
v1-0009-Fix-determination-when-tuple-deforming-can-be-JIT.patch
From 9a502185f8b49088f95656b5c826b3b0258fb9b2 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Sun, 22 Sep 2019 08:38:59 -0700
Subject: [PATCH v1 09/12] Fix determination when tuple deforming can be JITed.
This broke in 675af5c01e297, and can lead to JITed tuple deforming
not being performed for inner and outer slots where it previously was.
Author:
Reviewed-By:
Discussion: https://postgr.es/m/
Backpatch:
---
src/backend/executor/execExpr.c | 2 ++
src/test/regress/expected/jit.out | 30 +++++++++++++++---------------
2 files changed, 17 insertions(+), 15 deletions(-)
diff --git a/src/backend/executor/execExpr.c b/src/backend/executor/execExpr.c
index 512ab4029ef..ecaa3ed98f9 100644
--- a/src/backend/executor/execExpr.c
+++ b/src/backend/executor/execExpr.c
@@ -2396,6 +2396,7 @@ ExecComputeSlotInfo(ExprState *state, ExprEvalStep *op)
{
isfixed = true;
tts_ops = parent->innerops;
+ desc = ExecGetResultType(is);
}
else if (is)
{
@@ -2415,6 +2416,7 @@ ExecComputeSlotInfo(ExprState *state, ExprEvalStep *op)
{
isfixed = true;
tts_ops = parent->outerops;
+ desc = ExecGetResultType(os);
}
else if (os)
{
diff --git a/src/test/regress/expected/jit.out b/src/test/regress/expected/jit.out
index 64690415a4b..4db4ae6d352 100644
--- a/src/test/regress/expected/jit.out
+++ b/src/test/regress/expected/jit.out
@@ -396,17 +396,17 @@ SELECT a.data || b.data FROM jittest_simple a JOIN jittest_simple b USING(id);
COMMIT;
-- check that inner/outer tuple deforming can be inferred for upper nodes, agg case
EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT count(*), count(data), string_agg(data, ':') FROM jittest_simple;
- QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
Aggregate
Project: count(*), count(data), string_agg(data, ':'::text); JIT-Expr: evalexpr_0_0
Phase 1 using strategy "All":
- Transition Function: int8inc(TRANS), int8inc_any(TRANS, data), string_agg_transfn(TRANS, data, ':'::text); JIT-Expr: evalexpr_0_1, JIT-Deform-Outer: false
+ Transition Function: int8inc(TRANS), int8inc_any(TRANS, data), string_agg_transfn(TRANS, data, ':'::text); JIT-Expr: evalexpr_0_1, JIT-Deform-Outer: deform_0_2
All Group
-> Seq Scan on public.jittest_simple
Output: id, data
JIT:
- Functions: 2 (2 for expression evaluation)
+ Functions: 3 (2 for expression evaluation, 1 for tuple deforming)
Options: Inlining false, Optimization false, Expressions true, Deforming true
(10 rows)
@@ -424,17 +424,17 @@ BEGIN;
SET LOCAL enable_hashagg = true;
SET LOCAL enable_sort = false;
EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT data, string_agg(id::text, ', ') FROM jittest_simple GROUP BY data;
- QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
+ QUERY PLAN
+----------------------------------------------------------------------------------------------------------------------------------
HashAggregate
- Project: data, string_agg((id)::text, ', '::text); JIT-Expr: evalexpr_0_0, JIT-Deform-Outer: false
+ Project: data, string_agg((id)::text, ', '::text); JIT-Expr: evalexpr_0_0, JIT-Deform-Outer: deform_0_1
Phase 0 using strategy "Hash":
- Transition Function: string_agg_transfn(TRANS, (id)::text, ', '::text); JIT-Expr: evalexpr_0_1, JIT-Deform-Outer: false
+ Transition Function: string_agg_transfn(TRANS, (id)::text, ', '::text); JIT-Expr: evalexpr_0_2, JIT-Deform-Outer: deform_0_3
Hash Group: jittest_simple.data; JIT-Expr: false
-> Seq Scan on public.jittest_simple
Output: id, data
JIT:
- Functions: 2 (2 for expression evaluation)
+ Functions: 4 (2 for expression evaluation, 2 for tuple deforming)
Options: Inlining false, Optimization false, Expressions true, Deforming true
(10 rows)
@@ -452,20 +452,20 @@ BEGIN;
SET LOCAL enable_hashagg = false;
SET LOCAL enable_sort = true;
EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT data, string_agg(id::text, ', ') FROM jittest_simple GROUP BY data;
- QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
+ QUERY PLAN
+----------------------------------------------------------------------------------------------------------------------------------
GroupAggregate
- Project: data, string_agg((id)::text, ', '::text); JIT-Expr: evalexpr_0_2, JIT-Deform-Outer: false
+ Project: data, string_agg((id)::text, ', '::text); JIT-Expr: evalexpr_0_2, JIT-Deform-Outer: deform_0_3
Phase 1 using strategy "Sorted Input":
- Transition Function: string_agg_transfn(TRANS, (id)::text, ', '::text); JIT-Expr: evalexpr_0_4, JIT-Deform-Outer: false
- Sorted Input Group: jittest_simple.data; JIT-Expr: evalexpr_0_3, JIT-Deform-Outer: false, JIT-Deform-Inner: false
+ Transition Function: string_agg_transfn(TRANS, (id)::text, ', '::text); JIT-Expr: evalexpr_0_5, JIT-Deform-Outer: deform_0_6
+ Sorted Input Group: jittest_simple.data; JIT-Expr: evalexpr_0_4, JIT-Deform-Outer: false, JIT-Deform-Inner: false
-> Sort
Output: data, id
Sort Key: jittest_simple.data
-> Seq Scan on public.jittest_simple
Project: data, id; JIT-Expr: evalexpr_0_0, JIT-Deform-Scan: deform_0_1
JIT:
- Functions: 5 (4 for expression evaluation, 1 for tuple deforming)
+ Functions: 7 (4 for expression evaluation, 3 for tuple deforming)
Options: Inlining false, Optimization false, Expressions true, Deforming true
(13 rows)
--
2.23.0.162.gf1d4a28250
v1-0010-jit-Fix-pessimization-of-execGrouping.c-compariso.patch
From 5fc7b472ca6e335a83fab81ca767d074279ff969 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Fri, 27 Sep 2019 00:17:38 -0700
Subject: [PATCH v1 10/12] jit: Fix pessimization of execGrouping.c
comparisons.
Author:
Reviewed-By:
Discussion: https://postgr.es/m/
Backpatch:
---
src/backend/executor/execGrouping.c | 13 ++++++++++++-
src/test/regress/expected/jit.out | 8 +++-----
src/test/regress/sql/jit.sql | 2 --
3 files changed, 15 insertions(+), 8 deletions(-)
diff --git a/src/backend/executor/execGrouping.c b/src/backend/executor/execGrouping.c
index 14ee8db3f98..6349c11e1d5 100644
--- a/src/backend/executor/execGrouping.c
+++ b/src/backend/executor/execGrouping.c
@@ -166,6 +166,7 @@ BuildTupleHashTableExt(PlanState *parent,
TupleHashTable hashtable;
Size entrysize = sizeof(TupleHashEntryData) + additionalsize;
MemoryContext oldcontext;
+ bool allow_jit;
Assert(nbuckets > 0);
@@ -210,13 +211,23 @@ BuildTupleHashTableExt(PlanState *parent,
hashtable->tableslot = MakeSingleTupleTableSlot(CreateTupleDescCopy(inputDesc),
&TTSOpsMinimalTuple);
+ /*
+ * If the old reset interface is used (i.e. BuildTupleHashTable, rather
+ * than BuildTupleHashTableExt), allowing JIT would cause the generated
+ * functions to a) live longer than the query, and b) be re-generated each
+ * time the table is reset. Therefore prevent JIT from being used in that
+ * case, by not providing a parent node (which prevents accessing the
+ * JitContext in the EState).
+ */
+ allow_jit = metacxt != tablecxt;
+
/* build comparator for all columns */
/* XXX: should we support non-minimal tuples for the inputslot? */
hashtable->tab_eq_func = ExecBuildGroupingEqual(inputDesc, inputDesc,
&TTSOpsMinimalTuple, &TTSOpsMinimalTuple,
numCols,
keyColIdx, eqfuncoids, collations,
- NULL);
+ allow_jit ? parent : NULL);
/*
* While not pretty, it's ok to not shut down this context, but instead
diff --git a/src/test/regress/expected/jit.out b/src/test/regress/expected/jit.out
index 4db4ae6d352..151faaa2fde 100644
--- a/src/test/regress/expected/jit.out
+++ b/src/test/regress/expected/jit.out
@@ -418,8 +418,6 @@ SELECT count(*), count(data), string_agg(data, ':') FROM jittest_simple;
-- Check that the equality hash-table function in a hash-aggregate can
-- be accelerated.
---
--- XXX: Unfortunately this is currently broken
BEGIN;
SET LOCAL enable_hashagg = true;
SET LOCAL enable_sort = false;
@@ -429,12 +427,12 @@ EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT data, string_agg(id::text, ', '
HashAggregate
Project: data, string_agg((id)::text, ', '::text); JIT-Expr: evalexpr_0_0, JIT-Deform-Outer: deform_0_1
Phase 0 using strategy "Hash":
- Transition Function: string_agg_transfn(TRANS, (id)::text, ', '::text); JIT-Expr: evalexpr_0_2, JIT-Deform-Outer: deform_0_3
- Hash Group: jittest_simple.data; JIT-Expr: false
+ Transition Function: string_agg_transfn(TRANS, (id)::text, ', '::text); JIT-Expr: evalexpr_0_5, JIT-Deform-Outer: deform_0_6
+ Hash Group: jittest_simple.data; JIT-Expr: evalexpr_0_2, JIT-Deform-Outer: deform_0_4, JIT-Deform-Inner: deform_0_3
-> Seq Scan on public.jittest_simple
Output: id, data
JIT:
- Functions: 4 (2 for expression evaluation, 2 for tuple deforming)
+ Functions: 7 (3 for expression evaluation, 4 for tuple deforming)
Options: Inlining false, Optimization false, Expressions true, Deforming true
(10 rows)
diff --git a/src/test/regress/sql/jit.sql b/src/test/regress/sql/jit.sql
index f3b9a352cf1..eb617c0ca58 100644
--- a/src/test/regress/sql/jit.sql
+++ b/src/test/regress/sql/jit.sql
@@ -144,8 +144,6 @@ SELECT count(*), count(data), string_agg(data, ':') FROM jittest_simple;
-- Check that the equality hash-table function in a hash-aggregate can
-- be accelerated.
---
--- XXX: Unfortunately this is currently broken
BEGIN;
SET LOCAL enable_hashagg = true;
SET LOCAL enable_sort = false;
--
2.23.0.162.gf1d4a28250
v1-0001-jit-Instrument-function-purpose-separately-add-tr.patch
From c46b81d8944721450599a665502f47e46d586715 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Thu, 26 Sep 2019 11:44:53 -0700
Subject: [PATCH v1 01/12] jit: Instrument function purpose separately, add
tracking of modules.
Author:
Reviewed-By:
Discussion: https://postgr.es/m/
Backpatch:
---
src/backend/commands/explain.c | 24 +++++++++++++++++++++++-
src/backend/jit/jit.c | 3 +++
src/backend/jit/llvm/llvmjit.c | 2 ++
src/backend/jit/llvm/llvmjit_deform.c | 1 +
src/backend/jit/llvm/llvmjit_expr.c | 1 +
src/include/jit/jit.h | 11 ++++++++++-
6 files changed, 40 insertions(+), 2 deletions(-)
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 62fb3434a32..ef65035bfba 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -825,7 +825,26 @@ ExplainPrintJIT(ExplainState *es, int jit_flags,
appendStringInfoString(es->str, "JIT:\n");
es->indent += 1;
- ExplainPropertyInteger("Functions", NULL, ji->created_functions, es);
+ /* having to emit code more than once has performance consequences */
+ if (ji->created_modules > 1)
+ ExplainPropertyInteger("Modules", NULL, ji->created_modules, es);
+
+ appendStringInfoSpaces(es->str, es->indent * 2);
+ appendStringInfo(es->str, "Functions: %zu", ji->created_functions);
+ if (ji->created_expr_functions > 0 || ji->created_deform_functions)
+ {
+ appendStringInfoString(es->str, " (");
+ if (ji->created_expr_functions)
+ {
+ appendStringInfo(es->str, "%zu for expression evaluation", ji->created_expr_functions);
+ if (ji->created_deform_functions)
+ appendStringInfoString(es->str, ", ");
+ }
+ if (ji->created_deform_functions)
+ appendStringInfo(es->str, "%zu for tuple deforming", ji->created_deform_functions);
+ appendStringInfoChar(es->str, ')');
+ }
+ appendStringInfoChar(es->str, '\n');
appendStringInfoSpaces(es->str, es->indent * 2);
appendStringInfo(es->str, "Options: %s %s, %s %s, %s %s, %s %s\n",
@@ -851,7 +870,10 @@ ExplainPrintJIT(ExplainState *es, int jit_flags,
else
{
ExplainPropertyInteger("Worker Number", NULL, worker_num, es);
+ ExplainPropertyInteger("Modules", NULL, ji->created_modules, es);
ExplainPropertyInteger("Functions", NULL, ji->created_functions, es);
+ ExplainPropertyInteger("Expression Functions", NULL, ji->created_expr_functions, es);
+ ExplainPropertyInteger("Deforming Functions", NULL, ji->created_deform_functions, es);
ExplainOpenGroup("Options", "Options", true, es);
ExplainPropertyBool("Inlining", jit_flags & PGJIT_INLINE, es);
diff --git a/src/backend/jit/jit.c b/src/backend/jit/jit.c
index 43e65b1a543..63c709002d8 100644
--- a/src/backend/jit/jit.c
+++ b/src/backend/jit/jit.c
@@ -186,7 +186,10 @@ jit_compile_expr(struct ExprState *state)
void
InstrJitAgg(JitInstrumentation *dst, JitInstrumentation *add)
{
+ dst->created_modules += add->created_modules;
dst->created_functions += add->created_functions;
+ dst->created_expr_functions += add->created_expr_functions;
+ dst->created_deform_functions += add->created_deform_functions;
INSTR_TIME_ADD(dst->generation_counter, add->generation_counter);
INSTR_TIME_ADD(dst->inlining_counter, add->inlining_counter);
INSTR_TIME_ADD(dst->optimization_counter, add->optimization_counter);
diff --git a/src/backend/jit/llvm/llvmjit.c b/src/backend/jit/llvm/llvmjit.c
index 82c4afb7011..5489e118041 100644
--- a/src/backend/jit/llvm/llvmjit.c
+++ b/src/backend/jit/llvm/llvmjit.c
@@ -212,6 +212,8 @@ llvm_mutable_module(LLVMJitContext *context)
context->module = LLVMModuleCreateWithName("pg");
LLVMSetTarget(context->module, llvm_triple);
LLVMSetDataLayout(context->module, llvm_layout);
+
+ context->base.instr.created_modules++;
}
return context->module;
diff --git a/src/backend/jit/llvm/llvmjit_deform.c b/src/backend/jit/llvm/llvmjit_deform.c
index 835aea83e97..80a85858524 100644
--- a/src/backend/jit/llvm/llvmjit_deform.c
+++ b/src/backend/jit/llvm/llvmjit_deform.c
@@ -101,6 +101,7 @@ slot_compile_deform(LLVMJitContext *context, TupleDesc desc,
mod = llvm_mutable_module(context);
funcname = llvm_expand_funcname(context, "deform");
+ context->base.instr.created_deform_functions++;
/*
* Check which columns have to exist, so we don't have to check the row's
diff --git a/src/backend/jit/llvm/llvmjit_expr.c b/src/backend/jit/llvm/llvmjit_expr.c
index 30133634c70..7efc8f23ee3 100644
--- a/src/backend/jit/llvm/llvmjit_expr.c
+++ b/src/backend/jit/llvm/llvmjit_expr.c
@@ -144,6 +144,7 @@ llvm_compile_expr(ExprState *state)
b = LLVMCreateBuilder();
funcname = llvm_expand_funcname(context, "evalexpr");
+ context->base.instr.created_expr_functions++;
/* Create the signature and function */
{
diff --git a/src/include/jit/jit.h b/src/include/jit/jit.h
index d879cef20f3..668f965cb0a 100644
--- a/src/include/jit/jit.h
+++ b/src/include/jit/jit.h
@@ -26,9 +26,18 @@
typedef struct JitInstrumentation
{
- /* number of emitted functions */
+ /* number of modules (i.e. separate optimize / link cycles) created */
+ size_t created_modules;
+
+ /* number of functions generated */
size_t created_functions;
+ /* number of expression evaluation functions generated */
+ size_t created_expr_functions;
+
+ /* number of tuple deforming functions generated */
+ size_t created_deform_functions;
+
/* accumulated time to generate code */
instr_time generation_counter;
--
2.23.0.162.gf1d4a28250
v1-0002-Refactor-explain.c-to-pass-ExprState-down-to-show.patch
From 7f8173ac12e88d9efc4659d40e42f3573eb4fa47 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Thu, 26 Sep 2019 11:56:53 -0700
Subject: [PATCH v1 02/12] Refactor explain.c to pass ExprState down to
show_expression() where available.
This will, in a later patch, allow displaying per-expression
information about JIT compilation.
Author:
Reviewed-By:
Discussion: https://postgr.es/m/
Backpatch:
---
src/backend/commands/explain.c | 105 ++++++++++++++++++++++-----------
1 file changed, 69 insertions(+), 36 deletions(-)
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index ef65035bfba..48283ba82a6 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -66,16 +66,16 @@ static void ExplainNode(PlanState *planstate, List *ancestors,
ExplainState *es);
static void show_plan_tlist(PlanState *planstate, List *ancestors,
ExplainState *es);
-static void show_expression(Node *node, const char *qlabel,
+static void show_expression(Node *node, ExprState *expr, const char *qlabel,
PlanState *planstate, List *ancestors,
bool useprefix, ExplainState *es);
-static void show_qual(List *qual, const char *qlabel,
+static void show_qual(List *qual, ExprState *expr, const char *qlabel,
PlanState *planstate, List *ancestors,
bool useprefix, ExplainState *es);
-static void show_scan_qual(List *qual, const char *qlabel,
+static void show_scan_qual(List *qual, ExprState *expr, const char *qlabel,
PlanState *planstate, List *ancestors,
ExplainState *es);
-static void show_upper_qual(List *qual, const char *qlabel,
+static void show_upper_qual(List *qual, ExprState *expr, const char *qlabel,
PlanState *planstate, List *ancestors,
ExplainState *es);
static void show_sort_keys(SortState *sortstate, List *ancestors,
@@ -1605,26 +1605,31 @@ ExplainNode(PlanState *planstate, List *ancestors,
{
case T_IndexScan:
show_scan_qual(((IndexScan *) plan)->indexqualorig,
+ ((IndexScanState *) planstate)->indexqualorig,
"Index Cond", planstate, ancestors, es);
if (((IndexScan *) plan)->indexqualorig)
show_instrumentation_count("Rows Removed by Index Recheck", 2,
planstate, es);
show_scan_qual(((IndexScan *) plan)->indexorderbyorig,
+ NULL,
"Order By", planstate, ancestors, es);
- show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_scan_qual(plan->qual, planstate->qual, "Filter", planstate,
+ ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
break;
case T_IndexOnlyScan:
show_scan_qual(((IndexOnlyScan *) plan)->indexqual,
+ ((IndexOnlyScanState *) planstate)->indexqual,
"Index Cond", planstate, ancestors, es);
if (((IndexOnlyScan *) plan)->indexqual)
show_instrumentation_count("Rows Removed by Index Recheck", 2,
planstate, es);
- show_scan_qual(((IndexOnlyScan *) plan)->indexorderby,
+ show_scan_qual(((IndexOnlyScan *) plan)->indexorderby, NULL,
"Order By", planstate, ancestors, es);
- show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_scan_qual(plan->qual, planstate->qual, "Filter", planstate,
+ ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
@@ -1633,16 +1638,18 @@ ExplainNode(PlanState *planstate, List *ancestors,
planstate->instrument->ntuples2, 0, es);
break;
case T_BitmapIndexScan:
- show_scan_qual(((BitmapIndexScan *) plan)->indexqualorig,
+ show_scan_qual(((BitmapIndexScan *) plan)->indexqualorig, NULL,
"Index Cond", planstate, ancestors, es);
break;
case T_BitmapHeapScan:
show_scan_qual(((BitmapHeapScan *) plan)->bitmapqualorig,
+ ((BitmapHeapScanState *) planstate)->bitmapqualorig,
"Recheck Cond", planstate, ancestors, es);
if (((BitmapHeapScan *) plan)->bitmapqualorig)
show_instrumentation_count("Rows Removed by Index Recheck", 2,
planstate, es);
- show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_scan_qual(plan->qual, planstate->qual, "Filter", planstate,
+ ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
@@ -1660,7 +1667,8 @@ ExplainNode(PlanState *planstate, List *ancestors,
case T_NamedTuplestoreScan:
case T_WorkTableScan:
case T_SubqueryScan:
- show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_scan_qual(plan->qual, planstate->qual, "Filter", planstate,
+ ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
@@ -1669,7 +1677,8 @@ ExplainNode(PlanState *planstate, List *ancestors,
{
Gather *gather = (Gather *) plan;
- show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_scan_qual(plan->qual, planstate->qual, "Filter",
+ planstate, ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
@@ -1715,7 +1724,8 @@ ExplainNode(PlanState *planstate, List *ancestors,
{
GatherMerge *gm = (GatherMerge *) plan;
- show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_scan_qual(plan->qual, planstate->qual, "Filter",
+ planstate, ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
@@ -1749,11 +1759,12 @@ ExplainNode(PlanState *planstate, List *ancestors,
fexprs = lappend(fexprs, rtfunc->funcexpr);
}
/* We rely on show_expression to insert commas as needed */
- show_expression((Node *) fexprs,
+ show_expression((Node *) fexprs, NULL,
"Function Call", planstate, ancestors,
es->verbose, es);
}
- show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_scan_qual(plan->qual, planstate->qual, "Filter", planstate,
+ ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
@@ -1763,11 +1774,12 @@ ExplainNode(PlanState *planstate, List *ancestors,
{
TableFunc *tablefunc = ((TableFuncScan *) plan)->tablefunc;
- show_expression((Node *) tablefunc,
+ show_expression((Node *) tablefunc, NULL,
"Table Function Call", planstate, ancestors,
es->verbose, es);
}
- show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_scan_qual(plan->qual, planstate->qual, "Filter", planstate,
+ ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
@@ -1782,15 +1794,18 @@ ExplainNode(PlanState *planstate, List *ancestors,
if (list_length(tidquals) > 1)
tidquals = list_make1(make_orclause(tidquals));
- show_scan_qual(tidquals, "TID Cond", planstate, ancestors, es);
- show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_scan_qual(tidquals, NULL, "TID Cond", planstate,
+ ancestors, es);
+ show_scan_qual(plan->qual, planstate->qual, "Filter",
+ planstate, ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
}
break;
case T_ForeignScan:
- show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_scan_qual(plan->qual, planstate->qual, "Filter", planstate,
+ ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
@@ -1800,7 +1815,8 @@ ExplainNode(PlanState *planstate, List *ancestors,
{
CustomScanState *css = (CustomScanState *) planstate;
- show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_scan_qual(plan->qual, planstate->qual, "Filter",
+ planstate, ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
@@ -1810,51 +1826,60 @@ ExplainNode(PlanState *planstate, List *ancestors,
break;
case T_NestLoop:
show_upper_qual(((NestLoop *) plan)->join.joinqual,
+ ((NestLoopState *) planstate)->js.joinqual,
"Join Filter", planstate, ancestors, es);
if (((NestLoop *) plan)->join.joinqual)
show_instrumentation_count("Rows Removed by Join Filter", 1,
planstate, es);
- show_upper_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_upper_qual(plan->qual, planstate->qual, "Filter", planstate,
+ ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 2,
planstate, es);
break;
case T_MergeJoin:
- show_upper_qual(((MergeJoin *) plan)->mergeclauses,
+ show_upper_qual(((MergeJoin *) plan)->mergeclauses, NULL,
"Merge Cond", planstate, ancestors, es);
show_upper_qual(((MergeJoin *) plan)->join.joinqual,
+ ((MergeJoinState *) planstate)->js.joinqual,
"Join Filter", planstate, ancestors, es);
if (((MergeJoin *) plan)->join.joinqual)
show_instrumentation_count("Rows Removed by Join Filter", 1,
planstate, es);
- show_upper_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_upper_qual(plan->qual, planstate->qual, "Filter", planstate,
+ ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 2,
planstate, es);
break;
case T_HashJoin:
show_upper_qual(((HashJoin *) plan)->hashclauses,
+ ((HashJoinState *) planstate)->hashclauses,
"Hash Cond", planstate, ancestors, es);
show_upper_qual(((HashJoin *) plan)->join.joinqual,
+ ((HashJoinState *) planstate)->js.joinqual,
"Join Filter", planstate, ancestors, es);
if (((HashJoin *) plan)->join.joinqual)
show_instrumentation_count("Rows Removed by Join Filter", 1,
planstate, es);
- show_upper_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_upper_qual(plan->qual, planstate->qual, "Filter", planstate,
+ ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 2,
planstate, es);
break;
case T_Agg:
show_agg_keys(castNode(AggState, planstate), ancestors, es);
- show_upper_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_upper_qual(plan->qual, planstate->qual, "Filter", planstate,
+ ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
break;
case T_Group:
show_group_keys(castNode(GroupState, planstate), ancestors, es);
- show_upper_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_upper_qual(plan->qual, planstate->qual, "Filter", planstate,
+ ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
@@ -1869,8 +1894,10 @@ ExplainNode(PlanState *planstate, List *ancestors,
break;
case T_Result:
show_upper_qual((List *) ((Result *) plan)->resconstantqual,
+ ((ResultState *) planstate)->resconstantqual,
"One-Time Filter", planstate, ancestors, es);
- show_upper_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_upper_qual(plan->qual, planstate->qual, "Filter", planstate,
+ ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
@@ -2120,13 +2147,15 @@ show_plan_tlist(PlanState *planstate, List *ancestors, ExplainState *es)
* Show a generic expression
*/
static void
-show_expression(Node *node, const char *qlabel,
+show_expression(Node *node, ExprState *expr, const char *qlabel,
PlanState *planstate, List *ancestors,
bool useprefix, ExplainState *es)
{
List *context;
char *exprstr;
+ Assert(expr == NULL || IsA(expr, ExprState));
+
/* Set up deparsing context */
context = set_deparse_context_planstate(es->deparse_cxt,
(Node *) planstate,
@@ -2143,7 +2172,7 @@ show_expression(Node *node, const char *qlabel,
* Show a qualifier expression (which is a List with implicit AND semantics)
*/
static void
-show_qual(List *qual, const char *qlabel,
+show_qual(List *qual, ExprState *expr, const char *qlabel,
PlanState *planstate, List *ancestors,
bool useprefix, ExplainState *es)
{
@@ -2153,39 +2182,43 @@ show_qual(List *qual, const char *qlabel,
if (qual == NIL)
return;
+ Assert(expr == NULL ||
+ (IsA(expr, ExprState) &&
+ (expr->flags & EEO_FLAG_IS_QUAL)));
+
/* Convert AND list to explicit AND */
node = (Node *) make_ands_explicit(qual);
/* And show it */
- show_expression(node, qlabel, planstate, ancestors, useprefix, es);
+ show_expression(node, expr, qlabel, planstate, ancestors, useprefix, es);
}
/*
* Show a qualifier expression for a scan plan node
*/
static void
-show_scan_qual(List *qual, const char *qlabel,
+show_scan_qual(List *qual, ExprState *expr, const char *qlabel,
PlanState *planstate, List *ancestors,
ExplainState *es)
{
bool useprefix;
useprefix = (IsA(planstate->plan, SubqueryScan) ||es->verbose);
- show_qual(qual, qlabel, planstate, ancestors, useprefix, es);
+ show_qual(qual, expr, qlabel, planstate, ancestors, useprefix, es);
}
/*
* Show a qualifier expression for an upper-level plan node
*/
static void
-show_upper_qual(List *qual, const char *qlabel,
+show_upper_qual(List *qual, ExprState *expr, const char *qlabel,
PlanState *planstate, List *ancestors,
ExplainState *es)
{
bool useprefix;
useprefix = (list_length(es->rtable) > 1 || es->verbose);
- show_qual(qual, qlabel, planstate, ancestors, useprefix, es);
+ show_qual(qual, expr, qlabel, planstate, ancestors, useprefix, es);
}
/*
@@ -3300,8 +3333,8 @@ show_modifytable_info(ModifyTableState *mtstate, List *ancestors,
/* ON CONFLICT DO UPDATE WHERE qual is specially displayed */
if (node->onConflictWhere)
{
- show_upper_qual((List *) node->onConflictWhere, "Conflict Filter",
- &mtstate->ps, ancestors, es);
+ show_upper_qual((List *) node->onConflictWhere, NULL,
+ "Conflict Filter", &mtstate->ps, ancestors, es);
show_instrumentation_count("Rows Removed by Conflict Filter", 1, &mtstate->ps, es);
}
--
2.23.0.162.gf1d4a28250
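The signature changes above all follow one pattern: each `show_*` display helper now receives the runtime `ExprState` alongside the planned expression, and must tolerate `NULL` for callers that have no single state at hand (e.g. the ON CONFLICT WHERE case). Below is a minimal standalone sketch of that pattern; the structs and labels are illustrative stand-ins, not PostgreSQL's actual types.

```c
#include <stdio.h>
#include <string.h>
#include <stddef.h>

/* Illustrative stand-ins: a planned expression and its compiled runtime
 * state.  The point of the patch's refactoring is that the display helper
 * sees both, so it can later annotate the output with runtime-only facts
 * such as whether the expression was JIT compiled. */
typedef struct PlanExpr  { const char *deparsed; }    PlanExpr;
typedef struct ExprState { int         jit_compiled; } ExprState;

/* The helper must accept state == NULL, mirroring the call sites in the
 * patch that pass NULL (merge clauses, ON CONFLICT ... WHERE). */
static void
show_expression(const PlanExpr *expr, const ExprState *state,
                const char *label, char *buf, size_t buflen)
{
    if (state != NULL && state->jit_compiled)
        snprintf(buf, buflen, "%s: %s (JIT)", label, expr->deparsed);
    else
        snprintf(buf, buflen, "%s: %s", label, expr->deparsed);
}
```

The design choice this mirrors: threading the optional state through every caller up front keeps the later EXPLAIN-output additions local to `show_expression`, instead of requiring a second round of signature churn.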
v1-0003-Explain-Differentiate-between-a-node-projecting-o.patch (text/x-diff; charset=us-ascii)
From 295cbb73faec719210d44b361bad3042a65617bc Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Thu, 26 Sep 2019 12:02:11 -0700
Subject: [PATCH v1 03/12] Explain: Differentiate between a node projecting or
not.
Author:
Reviewed-By:
Discussion: https://postgr.es/m/
Backpatch:
---
src/backend/commands/explain.c | 5 +-
src/test/regress/expected/aggregates.out | 12 +-
src/test/regress/expected/alter_table.out | 12 +-
.../regress/expected/create_function_3.out | 6 +-
src/test/regress/expected/domain.out | 12 +-
src/test/regress/expected/fast_default.out | 10 +-
src/test/regress/expected/inherit.out | 64 ++--
src/test/regress/expected/join.out | 280 +++++++++---------
src/test/regress/expected/join_hash.out | 40 +--
src/test/regress/expected/limit.out | 22 +-
src/test/regress/expected/plpgsql.out | 14 +-
src/test/regress/expected/rangefuncs.out | 10 +-
src/test/regress/expected/rowsecurity.out | 4 +-
src/test/regress/expected/rowtypes.out | 8 +-
src/test/regress/expected/select_distinct.out | 10 +-
src/test/regress/expected/select_parallel.out | 14 +-
src/test/regress/expected/subselect.out | 118 ++++----
src/test/regress/expected/tsrf.out | 24 +-
src/test/regress/expected/updatable_views.out | 30 +-
src/test/regress/expected/update.out | 10 +-
src/test/regress/expected/with.out | 30 +-
src/test/regress/expected/xml.out | 8 +-
src/test/regress/expected/xml_2.out | 8 +-
23 files changed, 377 insertions(+), 374 deletions(-)
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 48283ba82a6..ea6b39d5abb 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -2140,7 +2140,10 @@ show_plan_tlist(PlanState *planstate, List *ancestors, ExplainState *es)
}
/* Print results */
- ExplainPropertyList("Output", result, es);
+ if (planstate->ps_ProjInfo)
+ ExplainPropertyList("Project", result, es);
+ else
+ ExplainPropertyList("Output", result, es);
}
/*
diff --git a/src/test/regress/expected/aggregates.out b/src/test/regress/expected/aggregates.out
index be4ddf86a43..683bcaedf5f 100644
--- a/src/test/regress/expected/aggregates.out
+++ b/src/test/regress/expected/aggregates.out
@@ -510,12 +510,12 @@ order by 1, 2;
Output: s1.s1, s2.s2, (sum((s1.s1 + s2.s2)))
Sort Key: s1.s1, s2.s2
-> Nested Loop
- Output: s1.s1, s2.s2, (sum((s1.s1 + s2.s2)))
+ Project: s1.s1, s2.s2, (sum((s1.s1 + s2.s2)))
-> Function Scan on pg_catalog.generate_series s1
Output: s1.s1
Function Call: generate_series(1, 3)
-> HashAggregate
- Output: s2.s2, sum((s1.s1 + s2.s2))
+ Project: s2.s2, sum((s1.s1 + s2.s2))
Group Key: s2.s2
-> Function Scan on pg_catalog.generate_series s2
Output: s2.s2
@@ -547,14 +547,14 @@ select array(select sum(x+y) s
QUERY PLAN
-------------------------------------------------------------------
Function Scan on pg_catalog.generate_series x
- Output: (SubPlan 1)
+ Project: (SubPlan 1)
Function Call: generate_series(1, 3)
SubPlan 1
-> Sort
Output: (sum((x.x + y.y))), y.y
Sort Key: (sum((x.x + y.y)))
-> HashAggregate
- Output: sum((x.x + y.y)), y.y
+ Project: sum((x.x + y.y)), y.y
Group Key: y.y
-> Function Scan on pg_catalog.generate_series y
Output: y.y
@@ -2253,12 +2253,12 @@ EXPLAIN (COSTS OFF, VERBOSE)
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
Finalize Aggregate
- Output: variance(unique1), sum((unique1)::bigint), regr_count((unique1)::double precision, (unique1)::double precision)
+ Project: variance(unique1), sum((unique1)::bigint), regr_count((unique1)::double precision, (unique1)::double precision)
-> Gather
Output: (PARTIAL variance(unique1)), (PARTIAL sum((unique1)::bigint)), (PARTIAL regr_count((unique1)::double precision, (unique1)::double precision))
Workers Planned: 4
-> Partial Aggregate
- Output: PARTIAL variance(unique1), PARTIAL sum((unique1)::bigint), PARTIAL regr_count((unique1)::double precision, (unique1)::double precision)
+ Project: PARTIAL variance(unique1), PARTIAL sum((unique1)::bigint), PARTIAL regr_count((unique1)::double precision, (unique1)::double precision)
-> Parallel Seq Scan on public.tenk1
Output: unique1, unique2, two, four, ten, twenty, hundred, thousand, twothousand, fivethous, tenthous, odd, even, stringu1, stringu2, string4
(9 rows)
diff --git a/src/test/regress/expected/alter_table.out b/src/test/regress/expected/alter_table.out
index 23d4265555c..4b197166041 100644
--- a/src/test/regress/expected/alter_table.out
+++ b/src/test/regress/expected/alter_table.out
@@ -2381,10 +2381,10 @@ View definition:
FROM at_view_1 v1;
explain (verbose, costs off) select * from at_view_2;
- QUERY PLAN
-----------------------------------------------------------
+ QUERY PLAN
+-----------------------------------------------------------
Seq Scan on public.at_base_table bt
- Output: bt.id, bt.stuff, to_json(ROW(bt.id, bt.stuff))
+ Project: bt.id, bt.stuff, to_json(ROW(bt.id, bt.stuff))
(2 rows)
select * from at_view_2;
@@ -2421,10 +2421,10 @@ View definition:
FROM at_view_1 v1;
explain (verbose, costs off) select * from at_view_2;
- QUERY PLAN
-----------------------------------------------------------------
+ QUERY PLAN
+-----------------------------------------------------------------
Seq Scan on public.at_base_table bt
- Output: bt.id, bt.stuff, to_json(ROW(bt.id, bt.stuff, NULL))
+ Project: bt.id, bt.stuff, to_json(ROW(bt.id, bt.stuff, NULL))
(2 rows)
select * from at_view_2;
diff --git a/src/test/regress/expected/create_function_3.out b/src/test/regress/expected/create_function_3.out
index ba260df9960..4def18f0e0b 100644
--- a/src/test/regress/expected/create_function_3.out
+++ b/src/test/regress/expected/create_function_3.out
@@ -303,10 +303,10 @@ SELECT voidtest2(11,22);
-- currently, we can inline voidtest2 but not voidtest1
EXPLAIN (verbose, costs off) SELECT voidtest2(11,22);
- QUERY PLAN
--------------------------
+ QUERY PLAN
+--------------------------
Result
- Output: voidtest1(33)
+ Project: voidtest1(33)
(2 rows)
CREATE TEMP TABLE sometable(f1 int);
diff --git a/src/test/regress/expected/domain.out b/src/test/regress/expected/domain.out
index 4ff1b4af418..346ccac9279 100644
--- a/src/test/regress/expected/domain.out
+++ b/src/test/regress/expected/domain.out
@@ -261,11 +261,11 @@ select * from dcomptable;
explain (verbose, costs off)
update dcomptable set d1.r = (d1).r - 1, d1.i = (d1).i + 1 where (d1).i > 0;
- QUERY PLAN
------------------------------------------------------------------------------------------------
+ QUERY PLAN
+------------------------------------------------------------------------------------------------
Update on public.dcomptable
-> Seq Scan on public.dcomptable
- Output: ROW(((d1).r - '1'::double precision), ((d1).i + '1'::double precision)), ctid
+ Project: ROW(((d1).r - '1'::double precision), ((d1).i + '1'::double precision)), ctid
Filter: ((dcomptable.d1).i > '0'::double precision)
(4 rows)
@@ -397,11 +397,11 @@ select * from dcomptable;
explain (verbose, costs off)
update dcomptable set d1[1].r = d1[1].r - 1, d1[1].i = d1[1].i + 1
where d1[1].i > 0;
- QUERY PLAN
-----------------------------------------------------------------------------------------------------------------
+ QUERY PLAN
+-----------------------------------------------------------------------------------------------------------------
Update on public.dcomptable
-> Seq Scan on public.dcomptable
- Output: (d1[1].r := (d1[1].r - '1'::double precision))[1].i := (d1[1].i + '1'::double precision), ctid
+ Project: (d1[1].r := (d1[1].r - '1'::double precision))[1].i := (d1[1].i + '1'::double precision), ctid
Filter: (dcomptable.d1[1].i > '0'::double precision)
(4 rows)
diff --git a/src/test/regress/expected/fast_default.out b/src/test/regress/expected/fast_default.out
index 10bc5ff757c..177f8911a94 100644
--- a/src/test/regress/expected/fast_default.out
+++ b/src/test/regress/expected/fast_default.out
@@ -300,7 +300,7 @@ SELECT c_bigint, c_text FROM T WHERE c_bigint = -1 LIMIT 1;
Limit
Output: c_bigint, c_text
-> Seq Scan on fast_default.t
- Output: c_bigint, c_text
+ Project: c_bigint, c_text
Filter: (t.c_bigint = '-1'::integer)
(5 rows)
@@ -316,7 +316,7 @@ EXPLAIN (VERBOSE TRUE, COSTS FALSE) SELECT c_bigint, c_text FROM T WHERE c_text
Limit
Output: c_bigint, c_text
-> Seq Scan on fast_default.t
- Output: c_bigint, c_text
+ Project: c_bigint, c_text
Filter: (t.c_text = 'hello'::text)
(5 rows)
@@ -371,7 +371,7 @@ SELECT * FROM T ORDER BY c_bigint, c_text, pk LIMIT 10;
Output: pk, c_bigint, c_text
Sort Key: t.c_bigint, t.c_text, t.pk
-> Seq Scan on fast_default.t
- Output: pk, c_bigint, c_text
+ Project: pk, c_bigint, c_text
(7 rows)
-- LIMIT
@@ -400,7 +400,7 @@ SELECT * FROM T WHERE c_bigint > -1 ORDER BY c_bigint, c_text, pk LIMIT 10;
Output: pk, c_bigint, c_text
Sort Key: t.c_bigint, t.c_text, t.pk
-> Seq Scan on fast_default.t
- Output: pk, c_bigint, c_text
+ Project: pk, c_bigint, c_text
Filter: (t.c_bigint > '-1'::integer)
(8 rows)
@@ -428,7 +428,7 @@ DELETE FROM T WHERE pk BETWEEN 10 AND 20 RETURNING *;
Delete on fast_default.t
Output: pk, c_bigint, c_text
-> Bitmap Heap Scan on fast_default.t
- Output: ctid
+ Project: ctid
Recheck Cond: ((t.pk >= 10) AND (t.pk <= 20))
-> Bitmap Index Scan on t_pkey
Index Cond: ((t.pk >= 10) AND (t.pk <= 20))
diff --git a/src/test/regress/expected/inherit.out b/src/test/regress/expected/inherit.out
index 44d51ed7110..4b8351839a8 100644
--- a/src/test/regress/expected/inherit.out
+++ b/src/test/regress/expected/inherit.out
@@ -545,25 +545,25 @@ create table some_tab_child () inherits (some_tab);
insert into some_tab_child values(1,2);
explain (verbose, costs off)
update some_tab set a = a + 1 where false;
- QUERY PLAN
-----------------------------------
+ QUERY PLAN
+-----------------------------------
Update on public.some_tab
Update on public.some_tab
-> Result
- Output: (a + 1), b, ctid
+ Project: (a + 1), b, ctid
One-Time Filter: false
(5 rows)
update some_tab set a = a + 1 where false;
explain (verbose, costs off)
update some_tab set a = a + 1 where false returning b, a;
- QUERY PLAN
-----------------------------------
+ QUERY PLAN
+-----------------------------------
Update on public.some_tab
Output: b, a
Update on public.some_tab
-> Result
- Output: (a + 1), b, ctid
+ Project: (a + 1), b, ctid
One-Time Filter: false
(6 rows)
@@ -792,17 +792,17 @@ select NULL::derived::base;
-- remove redundant conversions.
explain (verbose on, costs off) select row(i, b)::more_derived::derived::base from more_derived;
- QUERY PLAN
--------------------------------------------
+ QUERY PLAN
+--------------------------------------------
Seq Scan on public.more_derived
- Output: (ROW(i, b)::more_derived)::base
+ Project: (ROW(i, b)::more_derived)::base
(2 rows)
explain (verbose on, costs off) select (1, 2)::more_derived::derived::base;
- QUERY PLAN
------------------------
+ QUERY PLAN
+------------------------
Result
- Output: '(1)'::base
+ Project: '(1)'::base
(2 rows)
drop table more_derived;
@@ -1405,13 +1405,13 @@ insert into matest3 (name) values ('Test 5');
insert into matest3 (name) values ('Test 6');
set enable_indexscan = off; -- force use of seqscan/sort, so no merge
explain (verbose, costs off) select * from matest0 order by 1-id;
- QUERY PLAN
-------------------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------
Sort
Output: matest0.id, matest0.name, ((1 - matest0.id))
Sort Key: ((1 - matest0.id))
-> Result
- Output: matest0.id, matest0.name, (1 - matest0.id)
+ Project: matest0.id, matest0.name, (1 - matest0.id)
-> Append
-> Seq Scan on public.matest0
Output: matest0.id, matest0.name
@@ -1438,16 +1438,16 @@ explain (verbose, costs off) select min(1-id) from matest0;
QUERY PLAN
----------------------------------------
Aggregate
- Output: min((1 - matest0.id))
+ Project: min((1 - matest0.id))
-> Append
-> Seq Scan on public.matest0
- Output: matest0.id
+ Project: matest0.id
-> Seq Scan on public.matest1
- Output: matest1.id
+ Project: matest1.id
-> Seq Scan on public.matest2
- Output: matest2.id
+ Project: matest2.id
-> Seq Scan on public.matest3
- Output: matest3.id
+ Project: matest3.id
(11 rows)
select min(1-id) from matest0;
@@ -1460,21 +1460,21 @@ reset enable_indexscan;
set enable_seqscan = off; -- plan with fewest seqscans should be merge
set enable_parallel_append = off; -- Don't let parallel-append interfere
explain (verbose, costs off) select * from matest0 order by 1-id;
- QUERY PLAN
-------------------------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------
Merge Append
Sort Key: ((1 - matest0.id))
-> Index Scan using matest0i on public.matest0
- Output: matest0.id, matest0.name, (1 - matest0.id)
+ Project: matest0.id, matest0.name, (1 - matest0.id)
-> Index Scan using matest1i on public.matest1
- Output: matest1.id, matest1.name, (1 - matest1.id)
+ Project: matest1.id, matest1.name, (1 - matest1.id)
-> Sort
Output: matest2.id, matest2.name, ((1 - matest2.id))
Sort Key: ((1 - matest2.id))
-> Seq Scan on public.matest2
- Output: matest2.id, matest2.name, (1 - matest2.id)
+ Project: matest2.id, matest2.name, (1 - matest2.id)
-> Index Scan using matest3i on public.matest3
- Output: matest3.id, matest3.name, (1 - matest3.id)
+ Project: matest3.id, matest3.name, (1 - matest3.id)
(13 rows)
select * from matest0 order by 1-id;
@@ -1492,29 +1492,29 @@ explain (verbose, costs off) select min(1-id) from matest0;
QUERY PLAN
--------------------------------------------------------------------------
Result
- Output: $0
+ Project: $0
InitPlan 1 (returns $0)
-> Limit
Output: ((1 - matest0.id))
-> Result
- Output: ((1 - matest0.id))
+ Project: ((1 - matest0.id))
-> Merge Append
Sort Key: ((1 - matest0.id))
-> Index Scan using matest0i on public.matest0
- Output: matest0.id, (1 - matest0.id)
+ Project: matest0.id, (1 - matest0.id)
Index Cond: ((1 - matest0.id) IS NOT NULL)
-> Index Scan using matest1i on public.matest1
- Output: matest1.id, (1 - matest1.id)
+ Project: matest1.id, (1 - matest1.id)
Index Cond: ((1 - matest1.id) IS NOT NULL)
-> Sort
Output: matest2.id, ((1 - matest2.id))
Sort Key: ((1 - matest2.id))
-> Bitmap Heap Scan on public.matest2
- Output: matest2.id, (1 - matest2.id)
+ Project: matest2.id, (1 - matest2.id)
Filter: ((1 - matest2.id) IS NOT NULL)
-> Bitmap Index Scan on matest2_pkey
-> Index Scan using matest3i on public.matest3
- Output: matest3.id, (1 - matest3.id)
+ Project: matest3.id, (1 - matest3.id)
Index Cond: ((1 - matest3.id) IS NOT NULL)
(25 rows)
diff --git a/src/test/regress/expected/join.out b/src/test/regress/expected/join.out
index b58d560163b..7f319a79938 100644
--- a/src/test/regress/expected/join.out
+++ b/src/test/regress/expected/join.out
@@ -3264,9 +3264,9 @@ where x = unique1;
QUERY PLAN
-----------------------------------------------------------
Nested Loop
- Output: tenk1.unique1, (1), (random())
+ Project: tenk1.unique1, (1), (random())
-> Result
- Output: 1, random()
+ Project: 1, random()
-> Index Only Scan using tenk1_unique1 on public.tenk1
Output: tenk1.unique1
Index Cond: (tenk1.unique1 = (1))
@@ -3740,14 +3740,14 @@ using (join_key);
QUERY PLAN
--------------------------------------------------------------------------
Nested Loop Left Join
- Output: "*VALUES*".column1, i1.f1, (666)
+ Project: "*VALUES*".column1, i1.f1, (666)
Join Filter: ("*VALUES*".column1 = i1.f1)
-> Values Scan on "*VALUES*"
Output: "*VALUES*".column1
-> Materialize
Output: i1.f1, (666)
-> Nested Loop Left Join
- Output: i1.f1, 666
+ Project: i1.f1, 666
-> Seq Scan on public.int4_tbl i1
Output: i1.f1
-> Index Only Scan using tenk1_unique2 on public.tenk1 i2
@@ -3787,34 +3787,34 @@ select t1.* from
on (t1.f1 = b1.d1)
left join int4_tbl i4
on (i8.q2 = i4.f1);
- QUERY PLAN
-----------------------------------------------------------------------
+ QUERY PLAN
+-----------------------------------------------------------------------
Hash Left Join
- Output: t1.f1
+ Project: t1.f1
Hash Cond: (i8.q2 = i4.f1)
-> Nested Loop Left Join
- Output: t1.f1, i8.q2
+ Project: t1.f1, i8.q2
Join Filter: (t1.f1 = '***'::text)
-> Seq Scan on public.text_tbl t1
Output: t1.f1
-> Materialize
Output: i8.q2
-> Hash Right Join
- Output: i8.q2
+ Project: i8.q2
Hash Cond: ((NULL::integer) = i8b1.q2)
-> Hash Join
- Output: i8.q2, (NULL::integer)
+ Project: i8.q2, (NULL::integer)
Hash Cond: (i8.q1 = i8b2.q1)
-> Seq Scan on public.int8_tbl i8
Output: i8.q1, i8.q2
-> Hash
Output: i8b2.q1, (NULL::integer)
-> Seq Scan on public.int8_tbl i8b2
- Output: i8b2.q1, NULL::integer
+ Project: i8b2.q1, NULL::integer
-> Hash
Output: i8b1.q2
-> Seq Scan on public.int8_tbl i8b1
- Output: i8b1.q2
+ Project: i8b1.q2
-> Hash
Output: i4.f1
-> Seq Scan on public.int4_tbl i4
@@ -3851,23 +3851,23 @@ select t1.* from
QUERY PLAN
----------------------------------------------------------------------------
Hash Left Join
- Output: t1.f1
+ Project: t1.f1
Hash Cond: (i8.q2 = i4.f1)
-> Nested Loop Left Join
- Output: t1.f1, i8.q2
+ Project: t1.f1, i8.q2
Join Filter: (t1.f1 = '***'::text)
-> Seq Scan on public.text_tbl t1
Output: t1.f1
-> Materialize
Output: i8.q2
-> Hash Right Join
- Output: i8.q2
+ Project: i8.q2
Hash Cond: ((NULL::integer) = i8b1.q2)
-> Hash Right Join
- Output: i8.q2, (NULL::integer)
+ Project: i8.q2, (NULL::integer)
Hash Cond: (i8b2.q1 = i8.q1)
-> Nested Loop
- Output: i8b2.q1, NULL::integer
+ Project: i8b2.q1, NULL::integer
-> Seq Scan on public.int8_tbl i8b2
Output: i8b2.q1, i8b2.q2
-> Materialize
@@ -3879,7 +3879,7 @@ select t1.* from
-> Hash
Output: i8b1.q2
-> Seq Scan on public.int8_tbl i8b1
- Output: i8b1.q2
+ Project: i8b1.q2
-> Hash
Output: i4.f1
-> Seq Scan on public.int4_tbl i4
@@ -3917,23 +3917,23 @@ select t1.* from
QUERY PLAN
----------------------------------------------------------------------------
Hash Left Join
- Output: t1.f1
+ Project: t1.f1
Hash Cond: (i8.q2 = i4.f1)
-> Nested Loop Left Join
- Output: t1.f1, i8.q2
+ Project: t1.f1, i8.q2
Join Filter: (t1.f1 = '***'::text)
-> Seq Scan on public.text_tbl t1
Output: t1.f1
-> Materialize
Output: i8.q2
-> Hash Right Join
- Output: i8.q2
+ Project: i8.q2
Hash Cond: ((NULL::integer) = i8b1.q2)
-> Hash Right Join
- Output: i8.q2, (NULL::integer)
+ Project: i8.q2, (NULL::integer)
Hash Cond: (i8b2.q1 = i8.q1)
-> Hash Join
- Output: i8b2.q1, NULL::integer
+ Project: i8b2.q1, NULL::integer
Hash Cond: (i8b2.q1 = i4b2.f1)
-> Seq Scan on public.int8_tbl i8b2
Output: i8b2.q1, i8b2.q2
@@ -3948,7 +3948,7 @@ select t1.* from
-> Hash
Output: i8b1.q2
-> Seq Scan on public.int8_tbl i8b1
- Output: i8b1.q2
+ Project: i8b1.q2
-> Hash
Output: i4.f1
-> Seq Scan on public.int4_tbl i4
@@ -3984,15 +3984,15 @@ select * from
QUERY PLAN
--------------------------------------------------------
Nested Loop Left Join
- Output: t1.f1, i8.q1, i8.q2, t2.f1, i4.f1
+ Project: t1.f1, i8.q1, i8.q2, t2.f1, i4.f1
-> Seq Scan on public.text_tbl t2
Output: t2.f1
-> Materialize
Output: i8.q1, i8.q2, i4.f1, t1.f1
-> Nested Loop
- Output: i8.q1, i8.q2, i4.f1, t1.f1
+ Project: i8.q1, i8.q2, i4.f1, t1.f1
-> Nested Loop Left Join
- Output: i8.q1, i8.q2, i4.f1
+ Project: i8.q1, i8.q2, i4.f1
Join Filter: (i8.q1 = i4.f1)
-> Seq Scan on public.int8_tbl i8
Output: i8.q1, i8.q2
@@ -4031,10 +4031,10 @@ where t1.f1 = ss.f1;
QUERY PLAN
--------------------------------------------------
Nested Loop
- Output: t1.f1, i8.q1, i8.q2, (i8.q1), t2.f1
+ Project: t1.f1, i8.q1, i8.q2, (i8.q1), t2.f1
Join Filter: (t1.f1 = t2.f1)
-> Nested Loop Left Join
- Output: t1.f1, i8.q1, i8.q2
+ Project: t1.f1, i8.q1, i8.q2
-> Seq Scan on public.text_tbl t1
Output: t1.f1
-> Materialize
@@ -4045,7 +4045,7 @@ where t1.f1 = ss.f1;
-> Limit
Output: (i8.q1), t2.f1
-> Seq Scan on public.text_tbl t2
- Output: i8.q1, t2.f1
+ Project: i8.q1, t2.f1
(16 rows)
select * from
@@ -4067,15 +4067,15 @@ select * from
lateral (select i8.q1, t2.f1 from text_tbl t2 limit 1) as ss1,
lateral (select ss1.* from text_tbl t3 limit 1) as ss2
where t1.f1 = ss2.f1;
- QUERY PLAN
--------------------------------------------------------------------
+ QUERY PLAN
+--------------------------------------------------------------------
Nested Loop
- Output: t1.f1, i8.q1, i8.q2, (i8.q1), t2.f1, ((i8.q1)), (t2.f1)
+ Project: t1.f1, i8.q1, i8.q2, (i8.q1), t2.f1, ((i8.q1)), (t2.f1)
Join Filter: (t1.f1 = (t2.f1))
-> Nested Loop
- Output: t1.f1, i8.q1, i8.q2, (i8.q1), t2.f1
+ Project: t1.f1, i8.q1, i8.q2, (i8.q1), t2.f1
-> Nested Loop Left Join
- Output: t1.f1, i8.q1, i8.q2
+ Project: t1.f1, i8.q1, i8.q2
-> Seq Scan on public.text_tbl t1
Output: t1.f1
-> Materialize
@@ -4086,11 +4086,11 @@ where t1.f1 = ss2.f1;
-> Limit
Output: (i8.q1), t2.f1
-> Seq Scan on public.text_tbl t2
- Output: i8.q1, t2.f1
+ Project: i8.q1, t2.f1
-> Limit
Output: ((i8.q1)), (t2.f1)
-> Seq Scan on public.text_tbl t3
- Output: (i8.q1), t2.f1
+ Project: (i8.q1), t2.f1
(22 rows)
select * from
@@ -4116,11 +4116,11 @@ where tt1.f1 = ss1.c0;
QUERY PLAN
----------------------------------------------------------
Nested Loop
- Output: 1
+ Project: 1
-> Nested Loop Left Join
- Output: tt1.f1, tt4.f1
+ Project: tt1.f1, tt4.f1
-> Nested Loop
- Output: tt1.f1
+ Project: tt1.f1
-> Seq Scan on public.text_tbl tt1
Output: tt1.f1
Filter: (tt1.f1 = 'foo'::text)
@@ -4129,7 +4129,7 @@ where tt1.f1 = ss1.c0;
-> Materialize
Output: tt4.f1
-> Nested Loop Left Join
- Output: tt4.f1
+ Project: tt4.f1
Join Filter: (tt3.f1 = tt4.f1)
-> Seq Scan on public.text_tbl tt3
Output: tt3.f1
@@ -4143,7 +4143,7 @@ where tt1.f1 = ss1.c0;
-> Limit
Output: (tt4.f1)
-> Seq Scan on public.text_tbl tt5
- Output: tt4.f1
+ Project: tt4.f1
(29 rows)
select 1 from
@@ -4173,14 +4173,14 @@ where ss1.c2 = 0;
QUERY PLAN
------------------------------------------------------------------------
Nested Loop
- Output: (i41.f1), (i8.q1), (i8.q2), (i42.f1), (i43.f1), ((42))
+ Project: (i41.f1), (i8.q1), (i8.q2), (i42.f1), (i43.f1), ((42))
-> Hash Join
- Output: i41.f1, i42.f1, i8.q1, i8.q2, i43.f1, 42
+ Project: i41.f1, i42.f1, i8.q1, i8.q2, i43.f1, 42
Hash Cond: (i41.f1 = i42.f1)
-> Nested Loop
- Output: i8.q1, i8.q2, i43.f1, i41.f1
+ Project: i8.q1, i8.q2, i43.f1, i41.f1
-> Nested Loop
- Output: i8.q1, i8.q2, i43.f1
+ Project: i8.q1, i8.q2, i43.f1
-> Seq Scan on public.int8_tbl i8
Output: i8.q1, i8.q2
Filter: (i8.q1 = 0)
@@ -4196,7 +4196,7 @@ where ss1.c2 = 0;
-> Limit
Output: (i41.f1), (i8.q1), (i8.q2), (i42.f1), (i43.f1), ((42))
-> Seq Scan on public.text_tbl
- Output: i41.f1, i8.q1, i8.q2, i42.f1, i43.f1, (42)
+ Project: i41.f1, i8.q1, i8.q2, i42.f1, i43.f1, (42)
(25 rows)
select ss2.* from
@@ -4281,22 +4281,22 @@ explain (verbose, costs off)
select a.q2, b.q1
from int8_tbl a left join int8_tbl b on a.q2 = coalesce(b.q1, 1)
where coalesce(b.q1, 1) > 0;
- QUERY PLAN
----------------------------------------------------------
+ QUERY PLAN
+----------------------------------------------------------
Merge Left Join
- Output: a.q2, b.q1
+ Project: a.q2, b.q1
Merge Cond: (a.q2 = (COALESCE(b.q1, '1'::bigint)))
Filter: (COALESCE(b.q1, '1'::bigint) > 0)
-> Sort
Output: a.q2
Sort Key: a.q2
-> Seq Scan on public.int8_tbl a
- Output: a.q2
+ Project: a.q2
-> Sort
Output: b.q1, (COALESCE(b.q1, '1'::bigint))
Sort Key: (COALESCE(b.q1, '1'::bigint))
-> Seq Scan on public.int8_tbl b
- Output: b.q1, COALESCE(b.q1, '1'::bigint)
+ Project: b.q1, COALESCE(b.q1, '1'::bigint)
(14 rows)
select a.q2, b.q1
@@ -5189,14 +5189,14 @@ explain (verbose, costs off)
select * from
int8_tbl a left join
lateral (select *, a.q2 as x from int8_tbl b) ss on a.q2 = ss.q1;
- QUERY PLAN
-------------------------------------------
+ QUERY PLAN
+-------------------------------------------
Nested Loop Left Join
- Output: a.q1, a.q2, b.q1, b.q2, (a.q2)
+ Project: a.q1, a.q2, b.q1, b.q2, (a.q2)
-> Seq Scan on public.int8_tbl a
Output: a.q1, a.q2
-> Seq Scan on public.int8_tbl b
- Output: b.q1, b.q2, a.q2
+ Project: b.q1, b.q2, a.q2
Filter: (a.q2 = b.q1)
(7 rows)
@@ -5221,14 +5221,14 @@ explain (verbose, costs off)
select * from
int8_tbl a left join
lateral (select *, coalesce(a.q2, 42) as x from int8_tbl b) ss on a.q2 = ss.q1;
- QUERY PLAN
-------------------------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------
Nested Loop Left Join
- Output: a.q1, a.q2, b.q1, b.q2, (COALESCE(a.q2, '42'::bigint))
+ Project: a.q1, a.q2, b.q1, b.q2, (COALESCE(a.q2, '42'::bigint))
-> Seq Scan on public.int8_tbl a
Output: a.q1, a.q2
-> Seq Scan on public.int8_tbl b
- Output: b.q1, b.q2, COALESCE(a.q2, '42'::bigint)
+ Project: b.q1, b.q2, COALESCE(a.q2, '42'::bigint)
Filter: (a.q2 = b.q1)
(7 rows)
@@ -5257,7 +5257,7 @@ select * from int4_tbl i left join
QUERY PLAN
-------------------------------------------
Hash Left Join
- Output: i.f1, j.f1
+ Project: i.f1, j.f1
Hash Cond: (i.f1 = j.f1)
-> Seq Scan on public.int4_tbl i
Output: i.f1
@@ -5281,14 +5281,14 @@ select * from int4_tbl i left join
explain (verbose, costs off)
select * from int4_tbl i left join
lateral (select coalesce(i) from int2_tbl j where i.f1 = j.f1) k on true;
- QUERY PLAN
--------------------------------------
+ QUERY PLAN
+--------------------------------------
Nested Loop Left Join
- Output: i.f1, (COALESCE(i.*))
+ Project: i.f1, (COALESCE(i.*))
-> Seq Scan on public.int4_tbl i
- Output: i.f1, i.*
+ Project: i.f1, i.*
-> Seq Scan on public.int2_tbl j
- Output: j.f1, COALESCE(i.*)
+ Project: j.f1, COALESCE(i.*)
Filter: (i.f1 = j.f1)
(7 rows)
@@ -5311,11 +5311,11 @@ select * from int4_tbl a,
QUERY PLAN
-------------------------------------------------
Nested Loop
- Output: a.f1, b.f1, c.q1, c.q2
+ Project: a.f1, b.f1, c.q1, c.q2
-> Seq Scan on public.int4_tbl a
Output: a.f1
-> Hash Left Join
- Output: b.f1, c.q1, c.q2
+ Project: b.f1, c.q1, c.q2
Hash Cond: (b.f1 = c.q1)
-> Seq Scan on public.int4_tbl b
Output: b.f1
@@ -5366,14 +5366,14 @@ select * from
(select b.q1 as bq1, c.q1 as cq1, least(a.q1,b.q1,c.q1) from
int8_tbl b cross join int8_tbl c) ss
on a.q2 = ss.bq1;
- QUERY PLAN
--------------------------------------------------------------
+ QUERY PLAN
+--------------------------------------------------------------
Nested Loop Left Join
- Output: a.q1, a.q2, b.q1, c.q1, (LEAST(a.q1, b.q1, c.q1))
+ Project: a.q1, a.q2, b.q1, c.q1, (LEAST(a.q1, b.q1, c.q1))
-> Seq Scan on public.int8_tbl a
Output: a.q1, a.q2
-> Nested Loop
- Output: b.q1, c.q1, LEAST(a.q1, b.q1, c.q1)
+ Project: b.q1, c.q1, LEAST(a.q1, b.q1, c.q1)
-> Seq Scan on public.int8_tbl b
Output: b.q1, b.q2
Filter: (a.q2 = b.q1)
@@ -5442,32 +5442,32 @@ select * from
lateral (select q1, coalesce(ss1.x,q2) as y from int8_tbl d) ss2
) on c.q2 = ss2.q1,
lateral (select ss2.y offset 0) ss3;
- QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop
- Output: c.q1, c.q2, a.q1, a.q2, b.q1, (COALESCE(b.q2, '42'::bigint)), d.q1, (COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2)), ((COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2)))
+ Project: c.q1, c.q2, a.q1, a.q2, b.q1, (COALESCE(b.q2, '42'::bigint)), d.q1, (COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2)), ((COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2)))
-> Hash Right Join
- Output: c.q1, c.q2, a.q1, a.q2, b.q1, d.q1, (COALESCE(b.q2, '42'::bigint)), (COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2))
+ Project: c.q1, c.q2, a.q1, a.q2, b.q1, d.q1, (COALESCE(b.q2, '42'::bigint)), (COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2))
Hash Cond: (d.q1 = c.q2)
-> Nested Loop
- Output: a.q1, a.q2, b.q1, d.q1, (COALESCE(b.q2, '42'::bigint)), (COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2))
+ Project: a.q1, a.q2, b.q1, d.q1, (COALESCE(b.q2, '42'::bigint)), (COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2))
-> Hash Left Join
- Output: a.q1, a.q2, b.q1, (COALESCE(b.q2, '42'::bigint))
+ Project: a.q1, a.q2, b.q1, (COALESCE(b.q2, '42'::bigint))
Hash Cond: (a.q2 = b.q1)
-> Seq Scan on public.int8_tbl a
Output: a.q1, a.q2
-> Hash
Output: b.q1, (COALESCE(b.q2, '42'::bigint))
-> Seq Scan on public.int8_tbl b
- Output: b.q1, COALESCE(b.q2, '42'::bigint)
+ Project: b.q1, COALESCE(b.q2, '42'::bigint)
-> Seq Scan on public.int8_tbl d
- Output: d.q1, COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2)
+ Project: d.q1, COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2)
-> Hash
Output: c.q1, c.q2
-> Seq Scan on public.int8_tbl c
Output: c.q1, c.q2
-> Result
- Output: (COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2))
+ Project: (COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2))
(24 rows)
-- case that breaks the old ph_may_need optimization
@@ -5482,21 +5482,21 @@ select c.*,a.*,ss1.q1,ss2.q1,ss3.* from
lateral (select q1, coalesce(ss1.x,q2) as y from int8_tbl d) ss2
) on c.q2 = ss2.q1,
lateral (select * from int4_tbl i where ss2.y > f1) ss3;
- QUERY PLAN
----------------------------------------------------------------------------------------------------------
+ QUERY PLAN
+----------------------------------------------------------------------------------------------------------
Nested Loop
- Output: c.q1, c.q2, a.q1, a.q2, b.q1, d.q1, i.f1
+ Project: c.q1, c.q2, a.q1, a.q2, b.q1, d.q1, i.f1
Join Filter: ((COALESCE((COALESCE(b.q2, (b2.f1)::bigint)), d.q2)) > i.f1)
-> Hash Right Join
- Output: c.q1, c.q2, a.q1, a.q2, b.q1, d.q1, (COALESCE((COALESCE(b.q2, (b2.f1)::bigint)), d.q2))
+ Project: c.q1, c.q2, a.q1, a.q2, b.q1, d.q1, (COALESCE((COALESCE(b.q2, (b2.f1)::bigint)), d.q2))
Hash Cond: (d.q1 = c.q2)
-> Nested Loop
- Output: a.q1, a.q2, b.q1, d.q1, (COALESCE((COALESCE(b.q2, (b2.f1)::bigint)), d.q2))
+ Project: a.q1, a.q2, b.q1, d.q1, (COALESCE((COALESCE(b.q2, (b2.f1)::bigint)), d.q2))
-> Hash Right Join
- Output: a.q1, a.q2, b.q1, (COALESCE(b.q2, (b2.f1)::bigint))
+ Project: a.q1, a.q2, b.q1, (COALESCE(b.q2, (b2.f1)::bigint))
Hash Cond: (b.q1 = a.q2)
-> Nested Loop
- Output: b.q1, COALESCE(b.q2, (b2.f1)::bigint)
+ Project: b.q1, COALESCE(b.q2, (b2.f1)::bigint)
Join Filter: (b.q1 < b2.f1)
-> Seq Scan on public.int8_tbl b
Output: b.q1, b.q2
@@ -5509,7 +5509,7 @@ select c.*,a.*,ss1.q1,ss2.q1,ss3.* from
-> Seq Scan on public.int8_tbl a
Output: a.q1, a.q2
-> Seq Scan on public.int8_tbl d
- Output: d.q1, COALESCE((COALESCE(b.q2, (b2.f1)::bigint)), d.q2)
+ Project: d.q1, COALESCE((COALESCE(b.q2, (b2.f1)::bigint)), d.q2)
-> Hash
Output: c.q1, c.q2
-> Seq Scan on public.int8_tbl c
@@ -5530,16 +5530,16 @@ select * from
QUERY PLAN
----------------------------------------------
Nested Loop Left Join
- Output: (1), (2), (3)
+ Project: (1), (2), (3)
Join Filter: (((3) = (1)) AND ((3) = (2)))
-> Nested Loop
- Output: (1), (2)
+ Project: (1), (2)
-> Result
- Output: 1
+ Project: 1
-> Result
- Output: 2
+ Project: 2
-> Result
- Output: 3
+ Project: 3
(11 rows)
-- check dummy rels with lateral references (bug #15694)
@@ -5549,25 +5549,25 @@ select * from int8_tbl i8 left join lateral
QUERY PLAN
--------------------------------------
Nested Loop Left Join
- Output: i8.q1, i8.q2, f1, (i8.q2)
+ Project: i8.q1, i8.q2, f1, (i8.q2)
-> Seq Scan on public.int8_tbl i8
Output: i8.q1, i8.q2
-> Result
- Output: f1, i8.q2
+ Project: f1, i8.q2
One-Time Filter: false
(7 rows)
explain (verbose, costs off)
select * from int8_tbl i8 left join lateral
(select *, i8.q2 from int4_tbl i1, int4_tbl i2 where false) ss on true;
- QUERY PLAN
------------------------------------------
+ QUERY PLAN
+------------------------------------------
Nested Loop Left Join
- Output: i8.q1, i8.q2, f1, f1, (i8.q2)
+ Project: i8.q1, i8.q2, f1, f1, (i8.q2)
-> Seq Scan on public.int8_tbl i8
Output: i8.q1, i8.q2
-> Result
- Output: f1, f1, i8.q2
+ Project: f1, f1, i8.q2
One-Time Filter: false
(7 rows)
@@ -5600,18 +5600,18 @@ select * from
QUERY PLAN
----------------------------------------------------------------------
Nested Loop
- Output: "*VALUES*".column1, "*VALUES*".column2, int4_tbl.f1
+ Project: "*VALUES*".column1, "*VALUES*".column2, int4_tbl.f1
-> Values Scan on "*VALUES*"
Output: "*VALUES*".column1, "*VALUES*".column2
-> Nested Loop Semi Join
- Output: int4_tbl.f1
+ Project: int4_tbl.f1
Join Filter: (int4_tbl.f1 = tenk1.unique1)
-> Seq Scan on public.int4_tbl
Output: int4_tbl.f1
-> Materialize
Output: tenk1.unique1
-> Index Scan using tenk1_unique2 on public.tenk1
- Output: tenk1.unique1
+ Project: tenk1.unique1
Index Cond: (tenk1.unique2 = "*VALUES*".column2)
(14 rows)
@@ -5636,14 +5636,14 @@ lateral (select * from int8_tbl t1,
where q2 = (select greatest(t1.q1,t2.q2))
and (select v.id=0)) offset 0) ss2) ss
where t1.q1 = ss.q2) ss0;
- QUERY PLAN
------------------------------------------------------------------
+ QUERY PLAN
+------------------------------------------------------------------
Nested Loop
- Output: "*VALUES*".column1, t1.q1, t1.q2, ss2.q1, ss2.q2
+ Project: "*VALUES*".column1, t1.q1, t1.q2, ss2.q1, ss2.q2
-> Seq Scan on public.int8_tbl t1
Output: t1.q1, t1.q2
-> Nested Loop
- Output: "*VALUES*".column1, ss2.q1, ss2.q2
+ Project: "*VALUES*".column1, ss2.q1, ss2.q2
-> Values Scan on "*VALUES*"
Output: "*VALUES*".column1
-> Subquery Scan on ss2
@@ -5654,14 +5654,14 @@ lateral (select * from int8_tbl t1,
Filter: (SubPlan 3)
SubPlan 3
-> Result
- Output: t3.q2
+ Project: t3.q2
One-Time Filter: $4
InitPlan 1 (returns $2)
-> Result
- Output: GREATEST($0, t2.q2)
+ Project: GREATEST($0, t2.q2)
InitPlan 2 (returns $4)
-> Result
- Output: ($3 = 0)
+ Project: ($3 = 0)
-> Seq Scan on public.int8_tbl t3
Output: t3.q1, t3.q2
Filter: (t3.q2 = $2)
@@ -5785,11 +5785,11 @@ select t1.b, ss.phv from join_ut1 t1 left join lateral
Output: t1.b, (LEAST(t1.a, t2.a, t3.a)), t1.a
Sort Key: t1.a
-> Nested Loop Left Join
- Output: t1.b, (LEAST(t1.a, t2.a, t3.a)), t1.a
+ Project: t1.b, (LEAST(t1.a, t2.a, t3.a)), t1.a
-> Seq Scan on public.join_ut1 t1
Output: t1.a, t1.b, t1.c
-> Hash Join
- Output: t2.a, LEAST(t1.a, t2.a, t3.a)
+ Project: t2.a, LEAST(t1.a, t2.a, t3.a)
Hash Cond: (t3.b = t2.a)
-> Seq Scan on public.join_ut1 t3
Output: t3.a, t3.b, t3.c
@@ -5797,10 +5797,10 @@ select t1.b, ss.phv from join_ut1 t1 left join lateral
Output: t2.a
-> Append
-> Seq Scan on public.join_pt1p1p1 t2
- Output: t2.a
+ Project: t2.a
Filter: (t1.a = t2.a)
-> Seq Scan on public.join_pt1p2 t2_1
- Output: t2_1.a
+ Project: t2_1.a
Filter: (t1.a = t2_1.a)
(21 rows)
@@ -5869,7 +5869,7 @@ select * from j1 inner join j2 on j1.id = j2.id;
QUERY PLAN
-----------------------------------
Hash Join
- Output: j1.id, j2.id
+ Project: j1.id, j2.id
Inner Unique: true
Hash Cond: (j1.id = j2.id)
-> Seq Scan on public.j1
@@ -5886,7 +5886,7 @@ select * from j1 inner join j2 on j1.id > j2.id;
QUERY PLAN
-----------------------------------
Nested Loop
- Output: j1.id, j2.id
+ Project: j1.id, j2.id
Join Filter: (j1.id > j2.id)
-> Seq Scan on public.j1
Output: j1.id
@@ -5902,7 +5902,7 @@ select * from j1 inner join j3 on j1.id = j3.id;
QUERY PLAN
-----------------------------------
Hash Join
- Output: j1.id, j3.id
+ Project: j1.id, j3.id
Inner Unique: true
Hash Cond: (j3.id = j1.id)
-> Seq Scan on public.j3
@@ -5919,7 +5919,7 @@ select * from j1 left join j2 on j1.id = j2.id;
QUERY PLAN
-----------------------------------
Hash Left Join
- Output: j1.id, j2.id
+ Project: j1.id, j2.id
Inner Unique: true
Hash Cond: (j1.id = j2.id)
-> Seq Scan on public.j1
@@ -5936,7 +5936,7 @@ select * from j1 right join j2 on j1.id = j2.id;
QUERY PLAN
-----------------------------------
Hash Left Join
- Output: j1.id, j2.id
+ Project: j1.id, j2.id
Inner Unique: true
Hash Cond: (j2.id = j1.id)
-> Seq Scan on public.j2
@@ -5953,7 +5953,7 @@ select * from j1 full join j2 on j1.id = j2.id;
QUERY PLAN
-----------------------------------
Hash Full Join
- Output: j1.id, j2.id
+ Project: j1.id, j2.id
Inner Unique: true
Hash Cond: (j1.id = j2.id)
-> Seq Scan on public.j1
@@ -5970,7 +5970,7 @@ select * from j1 cross join j2;
QUERY PLAN
-----------------------------------
Nested Loop
- Output: j1.id, j2.id
+ Project: j1.id, j2.id
-> Seq Scan on public.j1
Output: j1.id
-> Materialize
@@ -5985,7 +5985,7 @@ select * from j1 natural join j2;
QUERY PLAN
-----------------------------------
Hash Join
- Output: j1.id
+ Project: j1.id
Inner Unique: true
Hash Cond: (j1.id = j2.id)
-> Seq Scan on public.j1
@@ -6003,7 +6003,7 @@ inner join (select distinct id from j3) j3 on j1.id = j3.id;
QUERY PLAN
-----------------------------------------
Nested Loop
- Output: j1.id, j3.id
+ Project: j1.id, j3.id
Inner Unique: true
Join Filter: (j1.id = j3.id)
-> Unique
@@ -6024,11 +6024,11 @@ inner join (select id from j3 group by id) j3 on j1.id = j3.id;
QUERY PLAN
-----------------------------------------
Nested Loop
- Output: j1.id, j3.id
+ Project: j1.id, j3.id
Inner Unique: true
Join Filter: (j1.id = j3.id)
-> Group
- Output: j3.id
+ Project: j3.id
Group Key: j3.id
-> Sort
Output: j3.id
@@ -6057,10 +6057,10 @@ analyze j3;
explain (verbose, costs off)
select * from j1
inner join j2 on j1.id1 = j2.id1;
- QUERY PLAN
-------------------------------------------
+ QUERY PLAN
+-------------------------------------------
Nested Loop
- Output: j1.id1, j1.id2, j2.id1, j2.id2
+ Project: j1.id1, j1.id2, j2.id1, j2.id2
Join Filter: (j1.id1 = j2.id1)
-> Seq Scan on public.j2
Output: j2.id1, j2.id2
@@ -6075,7 +6075,7 @@ inner join j2 on j1.id1 = j2.id1 and j1.id2 = j2.id2;
QUERY PLAN
----------------------------------------------------------
Nested Loop
- Output: j1.id1, j1.id2, j2.id1, j2.id2
+ Project: j1.id1, j1.id2, j2.id1, j2.id2
Inner Unique: true
Join Filter: ((j1.id1 = j2.id1) AND (j1.id2 = j2.id2))
-> Seq Scan on public.j2
@@ -6089,10 +6089,10 @@ inner join j2 on j1.id1 = j2.id1 and j1.id2 = j2.id2;
explain (verbose, costs off)
select * from j1
inner join j2 on j1.id1 = j2.id1 where j1.id2 = 1;
- QUERY PLAN
-------------------------------------------
+ QUERY PLAN
+-------------------------------------------
Nested Loop
- Output: j1.id1, j1.id2, j2.id1, j2.id2
+ Project: j1.id1, j1.id2, j2.id1, j2.id2
Join Filter: (j1.id1 = j2.id1)
-> Seq Scan on public.j1
Output: j1.id1, j1.id2
@@ -6105,10 +6105,10 @@ inner join j2 on j1.id1 = j2.id1 where j1.id2 = 1;
explain (verbose, costs off)
select * from j1
left join j2 on j1.id1 = j2.id1 where j1.id2 = 1;
- QUERY PLAN
-------------------------------------------
+ QUERY PLAN
+-------------------------------------------
Nested Loop Left Join
- Output: j1.id1, j1.id2, j2.id1, j2.id2
+ Project: j1.id1, j1.id2, j2.id1, j2.id2
Join Filter: (j1.id1 = j2.id1)
-> Seq Scan on public.j1
Output: j1.id1, j1.id2
@@ -6166,12 +6166,12 @@ where exists (select 1 from tenk1 t3
QUERY PLAN
---------------------------------------------------------------------------------
Nested Loop
- Output: t1.unique1, t2.hundred
+ Project: t1.unique1, t2.hundred
-> Hash Join
- Output: t1.unique1, t3.tenthous
+ Project: t1.unique1, t3.tenthous
Hash Cond: (t3.thousand = t1.unique1)
-> HashAggregate
- Output: t3.thousand, t3.tenthous
+ Project: t3.thousand, t3.tenthous
Group Key: t3.thousand, t3.tenthous
-> Index Only Scan using tenk1_thous_tenthous on public.tenk1 t3
Output: t3.thousand, t3.tenthous
@@ -6198,9 +6198,9 @@ where exists (select 1 from j3
QUERY PLAN
------------------------------------------------------------------------
Nested Loop
- Output: t1.unique1, t2.hundred
+ Project: t1.unique1, t2.hundred
-> Nested Loop
- Output: t1.unique1, j3.tenthous
+ Project: t1.unique1, j3.tenthous
-> Index Only Scan using onek_unique1 on public.onek t1
Output: t1.unique1
Index Cond: (t1.unique1 < 1)
diff --git a/src/test/regress/expected/join_hash.out b/src/test/regress/expected/join_hash.out
index 3a91c144a27..4e405ebbd76 100644
--- a/src/test/regress/expected/join_hash.out
+++ b/src/test/regress/expected/join_hash.out
@@ -913,36 +913,36 @@ WHERE
AND (SELECT hjtest_1.b * 5) < 50
AND (SELECT hjtest_2.c * 5) < 55
AND hjtest_1.a <> hjtest_2.b;
- QUERY PLAN
-------------------------------------------------------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------------------------------------
Hash Join
- Output: hjtest_1.a, hjtest_2.a, (hjtest_1.tableoid)::regclass, (hjtest_2.tableoid)::regclass
+ Project: hjtest_1.a, hjtest_2.a, (hjtest_1.tableoid)::regclass, (hjtest_2.tableoid)::regclass
Hash Cond: ((hjtest_1.id = (SubPlan 1)) AND ((SubPlan 2) = (SubPlan 3)))
Join Filter: (hjtest_1.a <> hjtest_2.b)
-> Seq Scan on public.hjtest_1
- Output: hjtest_1.a, hjtest_1.tableoid, hjtest_1.id, hjtest_1.b
+ Project: hjtest_1.a, hjtest_1.tableoid, hjtest_1.id, hjtest_1.b
Filter: ((SubPlan 4) < 50)
SubPlan 4
-> Result
- Output: (hjtest_1.b * 5)
+ Project: (hjtest_1.b * 5)
-> Hash
Output: hjtest_2.a, hjtest_2.tableoid, hjtest_2.id, hjtest_2.c, hjtest_2.b
-> Seq Scan on public.hjtest_2
- Output: hjtest_2.a, hjtest_2.tableoid, hjtest_2.id, hjtest_2.c, hjtest_2.b
+ Project: hjtest_2.a, hjtest_2.tableoid, hjtest_2.id, hjtest_2.c, hjtest_2.b
Filter: ((SubPlan 5) < 55)
SubPlan 5
-> Result
- Output: (hjtest_2.c * 5)
+ Project: (hjtest_2.c * 5)
SubPlan 1
-> Result
- Output: 1
+ Project: 1
One-Time Filter: (hjtest_2.id = 1)
SubPlan 3
-> Result
- Output: (hjtest_2.c * 5)
+ Project: (hjtest_2.c * 5)
SubPlan 2
-> Result
- Output: (hjtest_1.b * 5)
+ Project: (hjtest_1.b * 5)
(28 rows)
SELECT hjtest_1.a a1, hjtest_2.a a2,hjtest_1.tableoid::regclass t1, hjtest_2.tableoid::regclass t2
@@ -967,36 +967,36 @@ WHERE
AND (SELECT hjtest_1.b * 5) < 50
AND (SELECT hjtest_2.c * 5) < 55
AND hjtest_1.a <> hjtest_2.b;
- QUERY PLAN
-------------------------------------------------------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------------------------------------
Hash Join
- Output: hjtest_1.a, hjtest_2.a, (hjtest_1.tableoid)::regclass, (hjtest_2.tableoid)::regclass
+ Project: hjtest_1.a, hjtest_2.a, (hjtest_1.tableoid)::regclass, (hjtest_2.tableoid)::regclass
Hash Cond: (((SubPlan 1) = hjtest_1.id) AND ((SubPlan 3) = (SubPlan 2)))
Join Filter: (hjtest_1.a <> hjtest_2.b)
-> Seq Scan on public.hjtest_2
- Output: hjtest_2.a, hjtest_2.tableoid, hjtest_2.id, hjtest_2.c, hjtest_2.b
+ Project: hjtest_2.a, hjtest_2.tableoid, hjtest_2.id, hjtest_2.c, hjtest_2.b
Filter: ((SubPlan 5) < 55)
SubPlan 5
-> Result
- Output: (hjtest_2.c * 5)
+ Project: (hjtest_2.c * 5)
-> Hash
Output: hjtest_1.a, hjtest_1.tableoid, hjtest_1.id, hjtest_1.b
-> Seq Scan on public.hjtest_1
- Output: hjtest_1.a, hjtest_1.tableoid, hjtest_1.id, hjtest_1.b
+ Project: hjtest_1.a, hjtest_1.tableoid, hjtest_1.id, hjtest_1.b
Filter: ((SubPlan 4) < 50)
SubPlan 4
-> Result
- Output: (hjtest_1.b * 5)
+ Project: (hjtest_1.b * 5)
SubPlan 2
-> Result
- Output: (hjtest_1.b * 5)
+ Project: (hjtest_1.b * 5)
SubPlan 1
-> Result
- Output: 1
+ Project: 1
One-Time Filter: (hjtest_2.id = 1)
SubPlan 3
-> Result
- Output: (hjtest_2.c * 5)
+ Project: (hjtest_2.c * 5)
(28 rows)
SELECT hjtest_1.a a1, hjtest_2.a a2,hjtest_1.tableoid::regclass t1, hjtest_2.tableoid::regclass t2
diff --git a/src/test/regress/expected/limit.out b/src/test/regress/expected/limit.out
index c18f547cbd3..5b247e74b77 100644
--- a/src/test/regress/expected/limit.out
+++ b/src/test/regress/expected/limit.out
@@ -316,12 +316,12 @@ create temp sequence testseq;
explain (verbose, costs off)
select unique1, unique2, nextval('testseq')
from tenk1 order by unique2 limit 10;
- QUERY PLAN
-----------------------------------------------------------------
+ QUERY PLAN
+-----------------------------------------------------------------
Limit
Output: unique1, unique2, (nextval('testseq'::regclass))
-> Index Scan using tenk1_unique2 on public.tenk1
- Output: unique1, unique2, nextval('testseq'::regclass)
+ Project: unique1, unique2, nextval('testseq'::regclass)
(4 rows)
select unique1, unique2, nextval('testseq')
@@ -349,17 +349,17 @@ select currval('testseq');
explain (verbose, costs off)
select unique1, unique2, nextval('testseq')
from tenk1 order by tenthous limit 10;
- QUERY PLAN
---------------------------------------------------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------------
Limit
Output: unique1, unique2, (nextval('testseq'::regclass)), tenthous
-> Result
- Output: unique1, unique2, nextval('testseq'::regclass), tenthous
+ Project: unique1, unique2, nextval('testseq'::regclass), tenthous
-> Sort
Output: unique1, unique2, tenthous
Sort Key: tenk1.tenthous
-> Seq Scan on public.tenk1
- Output: unique1, unique2, tenthous
+ Project: unique1, unique2, tenthous
(9 rows)
select unique1, unique2, nextval('testseq')
@@ -423,7 +423,7 @@ select unique1, unique2, generate_series(1,10)
Output: unique1, unique2, tenthous
Sort Key: tenk1.tenthous
-> Seq Scan on public.tenk1
- Output: unique1, unique2, tenthous
+ Project: unique1, unique2, tenthous
(9 rows)
select unique1, unique2, generate_series(1,10)
@@ -483,12 +483,12 @@ order by s2 desc;
explain (verbose, costs off)
select sum(tenthous) as s1, sum(tenthous) + random()*0 as s2
from tenk1 group by thousand order by thousand limit 3;
- QUERY PLAN
--------------------------------------------------------------------------------------------------------------------
+ QUERY PLAN
+--------------------------------------------------------------------------------------------------------------------
Limit
Output: (sum(tenthous)), (((sum(tenthous))::double precision + (random() * '0'::double precision))), thousand
-> GroupAggregate
- Output: sum(tenthous), ((sum(tenthous))::double precision + (random() * '0'::double precision)), thousand
+ Project: sum(tenthous), ((sum(tenthous))::double precision + (random() * '0'::double precision)), thousand
Group Key: tenk1.thousand
-> Index Only Scan using tenk1_thous_tenthous on public.tenk1
Output: thousand, tenthous
diff --git a/src/test/regress/expected/plpgsql.out b/src/test/regress/expected/plpgsql.out
index e85b29455e5..92421090755 100644
--- a/src/test/regress/expected/plpgsql.out
+++ b/src/test/regress/expected/plpgsql.out
@@ -4832,9 +4832,9 @@ select i, a from
QUERY PLAN
-----------------------------------------------------------------
Nested Loop
- Output: i.i, (returns_rw_array(1))
+ Project: i.i, (returns_rw_array(1))
-> Result
- Output: returns_rw_array(1)
+ Project: returns_rw_array(1)
-> Function Scan on public.consumes_rw_array i
Output: i.i
Function Call: consumes_rw_array((returns_rw_array(1)))
@@ -4853,7 +4853,7 @@ select consumes_rw_array(a), a from returns_rw_array(1) a;
QUERY PLAN
--------------------------------------------
Function Scan on public.returns_rw_array a
- Output: consumes_rw_array(a), a
+ Project: consumes_rw_array(a), a
Function Call: returns_rw_array(1)
(3 rows)
@@ -4866,10 +4866,10 @@ select consumes_rw_array(a), a from returns_rw_array(1) a;
explain (verbose, costs off)
select consumes_rw_array(a), a from
(values (returns_rw_array(1)), (returns_rw_array(2))) v(a);
- QUERY PLAN
----------------------------------------------------------------------
+ QUERY PLAN
+----------------------------------------------------------------------
Values Scan on "*VALUES*"
- Output: consumes_rw_array("*VALUES*".column1), "*VALUES*".column1
+ Project: consumes_rw_array("*VALUES*".column1), "*VALUES*".column1
(2 rows)
select consumes_rw_array(a), a from
@@ -5207,7 +5207,7 @@ UPDATE transition_table_base
SET val = '*' || val || '*'
WHERE id BETWEEN 2 AND 3;
INFO: Hash Full Join
- Output: COALESCE(ot.id, nt.id), ot.val, nt.val
+ Project: COALESCE(ot.id, nt.id), ot.val, nt.val
Hash Cond: (ot.id = nt.id)
-> Named Tuplestore Scan
Output: ot.id, ot.val
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
index 36a59291139..175e568ab40 100644
--- a/src/test/regress/expected/rangefuncs.out
+++ b/src/test/regress/expected/rangefuncs.out
@@ -2003,7 +2003,7 @@ select x from int8_tbl, extractq2(int8_tbl) f(x);
QUERY PLAN
------------------------------------------
Nested Loop
- Output: f.x
+ Project: f.x
-> Seq Scan on public.int8_tbl
Output: int8_tbl.q1, int8_tbl.q2
-> Function Scan on f
@@ -2029,11 +2029,11 @@ select x from int8_tbl, extractq2_2(int8_tbl) f(x);
QUERY PLAN
-----------------------------------
Nested Loop
- Output: ((int8_tbl.*).q2)
+ Project: ((int8_tbl.*).q2)
-> Seq Scan on public.int8_tbl
- Output: int8_tbl.*
+ Project: int8_tbl.*
-> Result
- Output: (int8_tbl.*).q2
+ Project: (int8_tbl.*).q2
(6 rows)
select x from int8_tbl, extractq2_2(int8_tbl) f(x);
@@ -2055,7 +2055,7 @@ select x from int8_tbl, extractq2_2_opt(int8_tbl) f(x);
QUERY PLAN
-----------------------------
Seq Scan on public.int8_tbl
- Output: int8_tbl.q2
+ Project: int8_tbl.q2
(2 rows)
select x from int8_tbl, extractq2_2_opt(int8_tbl) f(x);
diff --git a/src/test/regress/expected/rowsecurity.out b/src/test/regress/expected/rowsecurity.out
index d01769299e4..e2ae42f78ac 100644
--- a/src/test/regress/expected/rowsecurity.out
+++ b/src/test/regress/expected/rowsecurity.out
@@ -3972,12 +3972,12 @@ INSERT INTO rls_tbl
--------------------------------------------------------------------
Insert on regress_rls_schema.rls_tbl
-> Subquery Scan on ss
- Output: ss.b, ss.c, NULL::integer
+ Project: ss.b, ss.c, NULL::integer
-> Sort
Output: rls_tbl_1.b, rls_tbl_1.c, rls_tbl_1.a
Sort Key: rls_tbl_1.a
-> Seq Scan on regress_rls_schema.rls_tbl rls_tbl_1
- Output: rls_tbl_1.b, rls_tbl_1.c, rls_tbl_1.a
+ Project: rls_tbl_1.b, rls_tbl_1.c, rls_tbl_1.a
Filter: (rls_tbl_1.* >= '(1,1,1)'::record)
(9 rows)
diff --git a/src/test/regress/expected/rowtypes.out b/src/test/regress/expected/rowtypes.out
index a272305eb55..ecd9a0f6ec0 100644
--- a/src/test/regress/expected/rowtypes.out
+++ b/src/test/regress/expected/rowtypes.out
@@ -1127,10 +1127,10 @@ explain (verbose, costs off)
select r, r is null as isnull, r is not null as isnotnull
from (values (1,row(1,2)), (1,row(null,null)), (1,null),
(null,row(1,2)), (null,row(null,null)), (null,null) ) r(a,b);
- QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ QUERY PLAN
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Values Scan on "*VALUES*"
- Output: ROW("*VALUES*".column1, "*VALUES*".column2), (("*VALUES*".column1 IS NULL) AND ("*VALUES*".column2 IS NOT DISTINCT FROM NULL)), (("*VALUES*".column1 IS NOT NULL) AND ("*VALUES*".column2 IS DISTINCT FROM NULL))
+ Project: ROW("*VALUES*".column1, "*VALUES*".column2), (("*VALUES*".column1 IS NULL) AND ("*VALUES*".column2 IS NOT DISTINCT FROM NULL)), (("*VALUES*".column1 IS NOT NULL) AND ("*VALUES*".column2 IS DISTINCT FROM NULL))
(2 rows)
select r, r is null as isnull, r is not null as isnotnull
@@ -1154,7 +1154,7 @@ select r, r is null as isnull, r is not null as isnotnull from r;
QUERY PLAN
----------------------------------------------------------
CTE Scan on r
- Output: r.*, (r.* IS NULL), (r.* IS NOT NULL)
+ Project: r.*, (r.* IS NULL), (r.* IS NOT NULL)
CTE r
-> Values Scan on "*VALUES*"
Output: "*VALUES*".column1, "*VALUES*".column2
diff --git a/src/test/regress/expected/select_distinct.out b/src/test/regress/expected/select_distinct.out
index f3696c6d1de..fc93b33ee2b 100644
--- a/src/test/regress/expected/select_distinct.out
+++ b/src/test/regress/expected/select_distinct.out
@@ -130,15 +130,15 @@ SELECT DISTINCT p.age FROM person* p ORDER BY age using >;
EXPLAIN (VERBOSE, COSTS OFF)
SELECT count(*) FROM
(SELECT DISTINCT two, four, two FROM tenk1) ss;
- QUERY PLAN
---------------------------------------------------------
+ QUERY PLAN
+---------------------------------------------------------
Aggregate
- Output: count(*)
+ Project: count(*)
-> HashAggregate
- Output: tenk1.two, tenk1.four, tenk1.two
+ Project: tenk1.two, tenk1.four, tenk1.two
Group Key: tenk1.two, tenk1.four, tenk1.two
-> Seq Scan on public.tenk1
- Output: tenk1.two, tenk1.four, tenk1.two
+ Project: tenk1.two, tenk1.four, tenk1.two
(7 rows)
SELECT count(*) FROM
diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out
index 04aecef0123..b5a7211fca0 100644
--- a/src/test/regress/expected/select_parallel.out
+++ b/src/test/regress/expected/select_parallel.out
@@ -226,10 +226,10 @@ select sp_parallel_restricted(unique1) from tenk1
Output: (sp_parallel_restricted(unique1))
Sort Key: (sp_parallel_restricted(tenk1.unique1))
-> Gather
- Output: sp_parallel_restricted(unique1)
+ Project: sp_parallel_restricted(unique1)
Workers Planned: 4
-> Parallel Seq Scan on public.tenk1
- Output: unique1
+ Project: unique1
Filter: (tenk1.stringu1 = 'GRAAAA'::name)
(9 rows)
@@ -663,12 +663,12 @@ explain (costs off, verbose)
Output: ten, (sp_simple_func(ten))
Workers Planned: 4
-> Result
- Output: ten, sp_simple_func(ten)
+ Project: ten, sp_simple_func(ten)
-> Sort
Output: ten
Sort Key: tenk1.ten
-> Parallel Seq Scan on public.tenk1
- Output: ten
+ Project: ten
Filter: (tenk1.ten < 100)
(11 rows)
@@ -979,18 +979,18 @@ explain (costs off, verbose)
QUERY PLAN
----------------------------------------------------------------------------------------------
Aggregate
- Output: count(*)
+ Project: count(*)
-> Hash Semi Join
Hash Cond: ((a.unique1 = b.unique1) AND (a.two = (row_number() OVER (?))))
-> Gather
Output: a.unique1, a.two
Workers Planned: 4
-> Parallel Seq Scan on public.tenk1 a
- Output: a.unique1, a.two
+ Project: a.unique1, a.two
-> Hash
Output: b.unique1, (row_number() OVER (?))
-> WindowAgg
- Output: b.unique1, row_number() OVER (?)
+ Project: b.unique1, row_number() OVER (?)
-> Gather
Output: b.unique1
Workers Planned: 4
diff --git a/src/test/regress/expected/subselect.out b/src/test/regress/expected/subselect.out
index ee9c5db0d51..90fe9fe9802 100644
--- a/src/test/regress/expected/subselect.out
+++ b/src/test/regress/expected/subselect.out
@@ -233,23 +233,23 @@ SELECT *, pg_typeof(f1) FROM
-- ... unless there's context to suggest differently
explain (verbose, costs off) select '42' union all select '43';
- QUERY PLAN
-----------------------------
+ QUERY PLAN
+-----------------------------
Append
-> Result
- Output: '42'::text
+ Project: '42'::text
-> Result
- Output: '43'::text
+ Project: '43'::text
(5 rows)
explain (verbose, costs off) select '42' union all select 43;
- QUERY PLAN
---------------------
+ QUERY PLAN
+---------------------
Append
-> Result
- Output: 42
+ Project: 42
-> Result
- Output: 43
+ Project: 43
(5 rows)
-- check materialization of an initplan reference (bug #14524)
@@ -258,15 +258,15 @@ select 1 = all (select (select 1));
QUERY PLAN
-----------------------------------
Result
- Output: (SubPlan 2)
+ Project: (SubPlan 2)
SubPlan 2
-> Materialize
Output: ($0)
InitPlan 1 (returns $0)
-> Result
- Output: 1
+ Project: 1
-> Result
- Output: $0
+ Project: $0
(10 rows)
select 1 = all (select (select 1));
@@ -770,16 +770,16 @@ select * from outer_text where (f1, f2) not in (select * from inner_text);
--
explain (verbose, costs off)
select 'foo'::text in (select 'bar'::name union all select 'bar'::name);
- QUERY PLAN
--------------------------------------
+ QUERY PLAN
+--------------------------------------
Result
- Output: (hashed SubPlan 1)
+ Project: (hashed SubPlan 1)
SubPlan 1
-> Append
-> Result
- Output: 'bar'::name
+ Project: 'bar'::name
-> Result
- Output: 'bar'::name
+ Project: 'bar'::name
(8 rows)
select 'foo'::text in (select 'bar'::name union all select 'bar'::name);
@@ -818,27 +818,27 @@ explain (verbose, costs off)
QUERY PLAN
---------------------------
Values Scan on "*VALUES*"
- Output: $0, $1
+ Project: $0, $1
InitPlan 1 (returns $0)
-> Result
- Output: now()
+ Project: now()
InitPlan 2 (returns $1)
-> Result
- Output: now()
+ Project: now()
(8 rows)
explain (verbose, costs off)
select x, x from
(select (select random()) as x from (values(1),(2)) v(y)) ss;
- QUERY PLAN
-----------------------------------
+ QUERY PLAN
+-----------------------------------
Subquery Scan on ss
- Output: ss.x, ss.x
+ Project: ss.x, ss.x
-> Values Scan on "*VALUES*"
- Output: $0
+ Project: $0
InitPlan 1 (returns $0)
-> Result
- Output: random()
+ Project: random()
(7 rows)
explain (verbose, costs off)
@@ -847,14 +847,14 @@ explain (verbose, costs off)
QUERY PLAN
----------------------------------------------------------------------
Values Scan on "*VALUES*"
- Output: (SubPlan 1), (SubPlan 2)
+ Project: (SubPlan 1), (SubPlan 2)
SubPlan 1
-> Result
- Output: now()
+ Project: now()
One-Time Filter: ("*VALUES*".column1 = "*VALUES*".column1)
SubPlan 2
-> Result
- Output: now()
+ Project: now()
One-Time Filter: ("*VALUES*".column1 = "*VALUES*".column1)
(10 rows)
@@ -864,12 +864,12 @@ explain (verbose, costs off)
QUERY PLAN
----------------------------------------------------------------------------
Subquery Scan on ss
- Output: ss.x, ss.x
+ Project: ss.x, ss.x
-> Values Scan on "*VALUES*"
- Output: (SubPlan 1)
+ Project: (SubPlan 1)
SubPlan 1
-> Result
- Output: random()
+ Project: random()
One-Time Filter: ("*VALUES*".column1 = "*VALUES*".column1)
(8 rows)
@@ -936,7 +936,7 @@ select * from int4_tbl where
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop Semi Join
- Output: int4_tbl.f1
+ Project: int4_tbl.f1
Join Filter: (CASE WHEN (hashed SubPlan 1) THEN int4_tbl.f1 ELSE NULL::integer END = b.ten)
-> Seq Scan on public.int4_tbl
Output: int4_tbl.f1
@@ -961,10 +961,10 @@ select * from int4_tbl where
explain (verbose, costs off)
select * from int4_tbl o where (f1, f1) in
(select f1, generate_series(1,50) / 10 g from int4_tbl i group by f1);
- QUERY PLAN
--------------------------------------------------------------------
+ QUERY PLAN
+--------------------------------------------------------------------
Nested Loop Semi Join
- Output: o.f1
+ Project: o.f1
Join Filter: (o.f1 = "ANY_subquery".f1)
-> Seq Scan on public.int4_tbl o
Output: o.f1
@@ -974,11 +974,11 @@ select * from int4_tbl o where (f1, f1) in
Output: "ANY_subquery".f1, "ANY_subquery".g
Filter: ("ANY_subquery".f1 = "ANY_subquery".g)
-> Result
- Output: i.f1, ((generate_series(1, 50)) / 10)
+ Project: i.f1, ((generate_series(1, 50)) / 10)
-> ProjectSet
Output: generate_series(1, 50), i.f1
-> HashAggregate
- Output: i.f1
+ Project: i.f1
Group Key: i.f1
-> Seq Scan on public.int4_tbl i
Output: i.f1
@@ -1220,7 +1220,7 @@ select * from x where f1 = 1;
QUERY PLAN
----------------------------------
Seq Scan on public.subselect_tbl
- Output: subselect_tbl.f1
+ Project: subselect_tbl.f1
Filter: (subselect_tbl.f1 = 1)
(3 rows)
@@ -1235,17 +1235,17 @@ select * from x where f1 = 1;
Filter: (x.f1 = 1)
CTE x
-> Seq Scan on public.subselect_tbl
- Output: subselect_tbl.f1
+ Project: subselect_tbl.f1
(6 rows)
-- Stable functions are safe to inline
explain (verbose, costs off)
with x as (select * from (select f1, now() from subselect_tbl) ss)
select * from x where f1 = 1;
- QUERY PLAN
------------------------------------
+ QUERY PLAN
+------------------------------------
Seq Scan on public.subselect_tbl
- Output: subselect_tbl.f1, now()
+ Project: subselect_tbl.f1, now()
Filter: (subselect_tbl.f1 = 1)
(3 rows)
@@ -1253,46 +1253,46 @@ select * from x where f1 = 1;
explain (verbose, costs off)
with x as (select * from (select f1, random() from subselect_tbl) ss)
select * from x where f1 = 1;
- QUERY PLAN
-----------------------------------------------
+ QUERY PLAN
+-----------------------------------------------
CTE Scan on x
Output: x.f1, x.random
Filter: (x.f1 = 1)
CTE x
-> Seq Scan on public.subselect_tbl
- Output: subselect_tbl.f1, random()
+ Project: subselect_tbl.f1, random()
(6 rows)
-- SELECT FOR UPDATE cannot be inlined
explain (verbose, costs off)
with x as (select * from (select f1 from subselect_tbl for update) ss)
select * from x where f1 = 1;
- QUERY PLAN
---------------------------------------------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------
CTE Scan on x
Output: x.f1
Filter: (x.f1 = 1)
CTE x
-> Subquery Scan on ss
- Output: ss.f1
+ Project: ss.f1
-> LockRows
Output: subselect_tbl.f1, subselect_tbl.ctid
-> Seq Scan on public.subselect_tbl
- Output: subselect_tbl.f1, subselect_tbl.ctid
+ Project: subselect_tbl.f1, subselect_tbl.ctid
(10 rows)
-- Multiply-referenced CTEs are inlined only when requested
explain (verbose, costs off)
with x as (select * from (select f1, now() as n from subselect_tbl) ss)
select * from x, x x2 where x.n = x2.n;
- QUERY PLAN
--------------------------------------------
+ QUERY PLAN
+--------------------------------------------
Merge Join
- Output: x.f1, x.n, x2.f1, x2.n
+ Project: x.f1, x.n, x2.f1, x2.n
Merge Cond: (x.n = x2.n)
CTE x
-> Seq Scan on public.subselect_tbl
- Output: subselect_tbl.f1, now()
+ Project: subselect_tbl.f1, now()
-> Sort
Output: x.f1, x.n
Sort Key: x.n
@@ -1311,16 +1311,16 @@ select * from x, x x2 where x.n = x2.n;
QUERY PLAN
----------------------------------------------------------------------------
Result
- Output: subselect_tbl.f1, now(), subselect_tbl_1.f1, now()
+ Project: subselect_tbl.f1, now(), subselect_tbl_1.f1, now()
One-Time Filter: (now() = now())
-> Nested Loop
- Output: subselect_tbl.f1, subselect_tbl_1.f1
+ Project: subselect_tbl.f1, subselect_tbl_1.f1
-> Seq Scan on public.subselect_tbl
Output: subselect_tbl.f1, subselect_tbl.f2, subselect_tbl.f3
-> Materialize
Output: subselect_tbl_1.f1
-> Seq Scan on public.subselect_tbl subselect_tbl_1
- Output: subselect_tbl_1.f1
+ Project: subselect_tbl_1.f1
(11 rows)
-- Multiply-referenced CTEs can't be inlined if they contain outer self-refs
@@ -1341,7 +1341,7 @@ select * from x;
-> Values Scan on "*VALUES*"
Output: "*VALUES*".column1
-> Nested Loop
- Output: (z.a || z1.a)
+ Project: (z.a || z1.a)
Join Filter: (length((z.a || z1.a)) < 5)
CTE z
-> WorkTable Scan on x x_1
@@ -1450,10 +1450,10 @@ select * from (with y as (select * from x) select * from y) ss;
explain (verbose, costs off)
with x as (select 1 as y)
select * from (with x as (select 2 as y) select * from x) ss;
- QUERY PLAN
--------------
+ QUERY PLAN
+--------------
Result
- Output: 2
+ Project: 2
(2 rows)
-- Row marks are not pushed into CTEs
diff --git a/src/test/regress/expected/tsrf.out b/src/test/regress/expected/tsrf.out
index d47b5f6ec57..9fd94e24664 100644
--- a/src/test/regress/expected/tsrf.out
+++ b/src/test/regress/expected/tsrf.out
@@ -103,10 +103,10 @@ SELECT unnest(ARRAY[1, 2]) FROM few WHERE false;
explain (verbose, costs off)
SELECT * FROM few f1,
(SELECT unnest(ARRAY[1,2]) FROM few f2 WHERE false OFFSET 0) ss;
- QUERY PLAN
-------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------
Result
- Output: f1.id, f1.dataa, f1.datab, ss.unnest
+ Project: f1.id, f1.dataa, f1.datab, ss.unnest
One-Time Filter: false
(3 rows)
@@ -647,10 +647,10 @@ SELECT |@|ARRAY[1,2,3];
-- Some fun cases involving duplicate SRF calls
explain (verbose, costs off)
select generate_series(1,3) as x, generate_series(1,3) + 1 as xp1;
- QUERY PLAN
-------------------------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------
Result
- Output: (generate_series(1, 3)), ((generate_series(1, 3)) + 1)
+ Project: (generate_series(1, 3)), ((generate_series(1, 3)) + 1)
-> ProjectSet
Output: generate_series(1, 3)
-> Result
@@ -666,13 +666,13 @@ select generate_series(1,3) as x, generate_series(1,3) + 1 as xp1;
explain (verbose, costs off)
select generate_series(1,3)+1 order by generate_series(1,3);
- QUERY PLAN
-------------------------------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------------
Sort
Output: (((generate_series(1, 3)) + 1)), (generate_series(1, 3))
Sort Key: (generate_series(1, 3))
-> Result
- Output: ((generate_series(1, 3)) + 1), (generate_series(1, 3))
+ Project: ((generate_series(1, 3)) + 1), (generate_series(1, 3))
-> ProjectSet
Output: generate_series(1, 3)
-> Result
@@ -689,10 +689,10 @@ select generate_series(1,3)+1 order by generate_series(1,3);
-- Check that SRFs of same nesting level run in lockstep
explain (verbose, costs off)
select generate_series(1,3) as x, generate_series(3,6) + 1 as y;
- QUERY PLAN
-------------------------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------
Result
- Output: (generate_series(1, 3)), ((generate_series(3, 6)) + 1)
+ Project: (generate_series(1, 3)), ((generate_series(3, 6)) + 1)
-> ProjectSet
Output: generate_series(1, 3), generate_series(3, 6)
-> Result
diff --git a/src/test/regress/expected/updatable_views.out b/src/test/regress/expected/updatable_views.out
index 8443c24f18b..1060e31408b 100644
--- a/src/test/regress/expected/updatable_views.out
+++ b/src/test/regress/expected/updatable_views.out
@@ -1262,12 +1262,12 @@ SELECT * FROM rw_view1;
(4 rows)
EXPLAIN (verbose, costs off) UPDATE rw_view1 SET b = b + 1 RETURNING *;
- QUERY PLAN
--------------------------------------------------------------
+ QUERY PLAN
+--------------------------------------------------------------
Update on public.base_tbl
Output: base_tbl.a, base_tbl.b
-> Seq Scan on public.base_tbl
- Output: base_tbl.a, (base_tbl.b + 1), base_tbl.ctid
+ Project: base_tbl.a, (base_tbl.b + 1), base_tbl.ctid
(4 rows)
UPDATE rw_view1 SET b = b + 1 RETURNING *;
@@ -2288,7 +2288,7 @@ UPDATE v1 SET a=100 WHERE snoop(a) AND leakproof(a) AND a < 7 AND a != 6;
Update on public.t12
Update on public.t111
-> Index Scan using t1_a_idx on public.t1
- Output: 100, t1.b, t1.c, t1.ctid
+ Project: 100, t1.b, t1.c, t1.ctid
Index Cond: ((t1.a > 5) AND (t1.a < 7))
Filter: ((t1.a <> 6) AND (alternatives: SubPlan 1 or hashed SubPlan 2) AND snoop(t1.a) AND leakproof(t1.a))
SubPlan 1
@@ -2300,19 +2300,19 @@ UPDATE v1 SET a=100 WHERE snoop(a) AND leakproof(a) AND a < 7 AND a != 6;
SubPlan 2
-> Append
-> Seq Scan on public.t12 t12_2
- Output: t12_2.a
+ Project: t12_2.a
-> Seq Scan on public.t111 t111_2
- Output: t111_2.a
+ Project: t111_2.a
-> Index Scan using t11_a_idx on public.t11
- Output: 100, t11.b, t11.c, t11.d, t11.ctid
+ Project: 100, t11.b, t11.c, t11.d, t11.ctid
Index Cond: ((t11.a > 5) AND (t11.a < 7))
Filter: ((t11.a <> 6) AND (alternatives: SubPlan 1 or hashed SubPlan 2) AND snoop(t11.a) AND leakproof(t11.a))
-> Index Scan using t12_a_idx on public.t12
- Output: 100, t12.b, t12.c, t12.e, t12.ctid
+ Project: 100, t12.b, t12.c, t12.e, t12.ctid
Index Cond: ((t12.a > 5) AND (t12.a < 7))
Filter: ((t12.a <> 6) AND (alternatives: SubPlan 1 or hashed SubPlan 2) AND snoop(t12.a) AND leakproof(t12.a))
-> Index Scan using t111_a_idx on public.t111
- Output: 100, t111.b, t111.c, t111.d, t111.e, t111.ctid
+ Project: 100, t111.b, t111.c, t111.d, t111.e, t111.ctid
Index Cond: ((t111.a > 5) AND (t111.a < 7))
Filter: ((t111.a <> 6) AND (alternatives: SubPlan 1 or hashed SubPlan 2) AND snoop(t111.a) AND leakproof(t111.a))
(33 rows)
@@ -2338,7 +2338,7 @@ UPDATE v1 SET a=a+1 WHERE snoop(a) AND leakproof(a) AND a = 8;
Update on public.t12
Update on public.t111
-> Index Scan using t1_a_idx on public.t1
- Output: (t1.a + 1), t1.b, t1.c, t1.ctid
+ Project: (t1.a + 1), t1.b, t1.c, t1.ctid
Index Cond: ((t1.a > 5) AND (t1.a = 8))
Filter: ((alternatives: SubPlan 1 or hashed SubPlan 2) AND snoop(t1.a) AND leakproof(t1.a))
SubPlan 1
@@ -2350,19 +2350,19 @@ UPDATE v1 SET a=a+1 WHERE snoop(a) AND leakproof(a) AND a = 8;
SubPlan 2
-> Append
-> Seq Scan on public.t12 t12_2
- Output: t12_2.a
+ Project: t12_2.a
-> Seq Scan on public.t111 t111_2
- Output: t111_2.a
+ Project: t111_2.a
-> Index Scan using t11_a_idx on public.t11
- Output: (t11.a + 1), t11.b, t11.c, t11.d, t11.ctid
+ Project: (t11.a + 1), t11.b, t11.c, t11.d, t11.ctid
Index Cond: ((t11.a > 5) AND (t11.a = 8))
Filter: ((alternatives: SubPlan 1 or hashed SubPlan 2) AND snoop(t11.a) AND leakproof(t11.a))
-> Index Scan using t12_a_idx on public.t12
- Output: (t12.a + 1), t12.b, t12.c, t12.e, t12.ctid
+ Project: (t12.a + 1), t12.b, t12.c, t12.e, t12.ctid
Index Cond: ((t12.a > 5) AND (t12.a = 8))
Filter: ((alternatives: SubPlan 1 or hashed SubPlan 2) AND snoop(t12.a) AND leakproof(t12.a))
-> Index Scan using t111_a_idx on public.t111
- Output: (t111.a + 1), t111.b, t111.c, t111.d, t111.e, t111.ctid
+ Project: (t111.a + 1), t111.b, t111.c, t111.d, t111.e, t111.ctid
Index Cond: ((t111.a > 5) AND (t111.a = 8))
Filter: ((alternatives: SubPlan 1 or hashed SubPlan 2) AND snoop(t111.a) AND leakproof(t111.a))
(33 rows)
diff --git a/src/test/regress/expected/update.out b/src/test/regress/expected/update.out
index a24ecd61df8..59419ec692d 100644
--- a/src/test/regress/expected/update.out
+++ b/src/test/regress/expected/update.out
@@ -172,17 +172,17 @@ EXPLAIN (VERBOSE, COSTS OFF)
UPDATE update_test t
SET (a, b) = (SELECT b, a FROM update_test s WHERE s.a = t.a)
WHERE CURRENT_USER = SESSION_USER;
- QUERY PLAN
-------------------------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------
Update on public.update_test t
-> Result
- Output: $1, $2, t.c, (SubPlan 1 (returns $1,$2)), t.ctid
+ Project: $1, $2, t.c, (SubPlan 1 (returns $1,$2)), t.ctid
One-Time Filter: (CURRENT_USER = SESSION_USER)
-> Seq Scan on public.update_test t
- Output: t.c, t.a, t.ctid
+ Project: t.c, t.a, t.ctid
SubPlan 1 (returns $1,$2)
-> Seq Scan on public.update_test s
- Output: s.b, s.a
+ Project: s.b, s.a
Filter: (s.a = t.a)
(10 rows)
diff --git a/src/test/regress/expected/with.out b/src/test/regress/expected/with.out
index 2a2085556bb..05d2847dc84 100644
--- a/src/test/regress/expected/with.out
+++ b/src/test/regress/expected/with.out
@@ -2181,8 +2181,8 @@ SELECT * FROM parent;
EXPLAIN (VERBOSE, COSTS OFF)
WITH wcte AS ( INSERT INTO int8_tbl VALUES ( 42, 47 ) RETURNING q2 )
DELETE FROM a USING wcte WHERE aa = q2;
- QUERY PLAN
-----------------------------------------------------
+ QUERY PLAN
+-----------------------------------------------------
Delete on public.a
Delete on public.a
Delete on public.b
@@ -2192,35 +2192,35 @@ DELETE FROM a USING wcte WHERE aa = q2;
-> Insert on public.int8_tbl
Output: int8_tbl.q2
-> Result
- Output: '42'::bigint, '47'::bigint
+ Project: '42'::bigint, '47'::bigint
-> Nested Loop
- Output: a.ctid, wcte.*
+ Project: a.ctid, wcte.*
Join Filter: (a.aa = wcte.q2)
-> Seq Scan on public.a
- Output: a.ctid, a.aa
+ Project: a.ctid, a.aa
-> CTE Scan on wcte
- Output: wcte.*, wcte.q2
+ Project: wcte.*, wcte.q2
-> Nested Loop
- Output: b.ctid, wcte.*
+ Project: b.ctid, wcte.*
Join Filter: (b.aa = wcte.q2)
-> Seq Scan on public.b
- Output: b.ctid, b.aa
+ Project: b.ctid, b.aa
-> CTE Scan on wcte
- Output: wcte.*, wcte.q2
+ Project: wcte.*, wcte.q2
-> Nested Loop
- Output: c.ctid, wcte.*
+ Project: c.ctid, wcte.*
Join Filter: (c.aa = wcte.q2)
-> Seq Scan on public.c
- Output: c.ctid, c.aa
+ Project: c.ctid, c.aa
-> CTE Scan on wcte
- Output: wcte.*, wcte.q2
+ Project: wcte.*, wcte.q2
-> Nested Loop
- Output: d.ctid, wcte.*
+ Project: d.ctid, wcte.*
Join Filter: (d.aa = wcte.q2)
-> Seq Scan on public.d
- Output: d.ctid, d.aa
+ Project: d.ctid, d.aa
-> CTE Scan on wcte
- Output: wcte.*, wcte.q2
+ Project: wcte.*, wcte.q2
(38 rows)
-- error cases
diff --git a/src/test/regress/expected/xml.out b/src/test/regress/expected/xml.out
index 11e7d7faf37..9988e123166 100644
--- a/src/test/regress/expected/xml.out
+++ b/src/test/regress/expected/xml.out
@@ -1137,7 +1137,7 @@ EXPLAIN (COSTS OFF, VERBOSE) SELECT * FROM xmltableview1;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop
- Output: "xmltable".id, "xmltable"._id, "xmltable".country_name, "xmltable".country_id, "xmltable".region_id, "xmltable".size, "xmltable".unit, "xmltable".premier_name
+ Project: "xmltable".id, "xmltable"._id, "xmltable".country_name, "xmltable".country_id, "xmltable".region_id, "xmltable".size, "xmltable".unit, "xmltable".premier_name
-> Seq Scan on public.xmldata
Output: xmldata.data
-> Table Function Scan on "xmltable"
@@ -1305,7 +1305,7 @@ SELECT xmltable.*
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop
- Output: "xmltable".id, "xmltable"._id, "xmltable".country_name, "xmltable".country_id, "xmltable".region_id, "xmltable".size, "xmltable".unit, "xmltable".premier_name
+ Project: "xmltable".id, "xmltable"._id, "xmltable".country_name, "xmltable".country_id, "xmltable".region_id, "xmltable".size, "xmltable".unit, "xmltable".premier_name
-> Seq Scan on public.xmldata
Output: xmldata.data
-> Table Function Scan on "xmltable"
@@ -1325,7 +1325,7 @@ SELECT xmltable.* FROM xmldata, LATERAL xmltable('/ROWS/ROW[COUNTRY_NAME="Japan"
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop
- Output: "xmltable"."COUNTRY_NAME", "xmltable"."REGION_ID"
+ Project: "xmltable"."COUNTRY_NAME", "xmltable"."REGION_ID"
-> Seq Scan on public.xmldata
Output: xmldata.data
-> Table Function Scan on "xmltable"
@@ -1428,7 +1428,7 @@ SELECT xmltable.*
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop
- Output: "xmltable".id, "xmltable"._id, "xmltable".country_name, "xmltable".country_id, "xmltable".region_id, "xmltable".size, "xmltable".unit, "xmltable".premier_name
+ Project: "xmltable".id, "xmltable"._id, "xmltable".country_name, "xmltable".country_id, "xmltable".region_id, "xmltable".size, "xmltable".unit, "xmltable".premier_name
-> Seq Scan on public.xmldata
Output: xmldata.data
-> Table Function Scan on "xmltable"
diff --git a/src/test/regress/expected/xml_2.out b/src/test/regress/expected/xml_2.out
index 4d200274691..ebf992173f1 100644
--- a/src/test/regress/expected/xml_2.out
+++ b/src/test/regress/expected/xml_2.out
@@ -1117,7 +1117,7 @@ EXPLAIN (COSTS OFF, VERBOSE) SELECT * FROM xmltableview1;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop
- Output: "xmltable".id, "xmltable"._id, "xmltable".country_name, "xmltable".country_id, "xmltable".region_id, "xmltable".size, "xmltable".unit, "xmltable".premier_name
+ Project: "xmltable".id, "xmltable"._id, "xmltable".country_name, "xmltable".country_id, "xmltable".region_id, "xmltable".size, "xmltable".unit, "xmltable".premier_name
-> Seq Scan on public.xmldata
Output: xmldata.data
-> Table Function Scan on "xmltable"
@@ -1285,7 +1285,7 @@ SELECT xmltable.*
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop
- Output: "xmltable".id, "xmltable"._id, "xmltable".country_name, "xmltable".country_id, "xmltable".region_id, "xmltable".size, "xmltable".unit, "xmltable".premier_name
+ Project: "xmltable".id, "xmltable"._id, "xmltable".country_name, "xmltable".country_id, "xmltable".region_id, "xmltable".size, "xmltable".unit, "xmltable".premier_name
-> Seq Scan on public.xmldata
Output: xmldata.data
-> Table Function Scan on "xmltable"
@@ -1305,7 +1305,7 @@ SELECT xmltable.* FROM xmldata, LATERAL xmltable('/ROWS/ROW[COUNTRY_NAME="Japan"
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop
- Output: "xmltable"."COUNTRY_NAME", "xmltable"."REGION_ID"
+ Project: "xmltable"."COUNTRY_NAME", "xmltable"."REGION_ID"
-> Seq Scan on public.xmldata
Output: xmldata.data
-> Table Function Scan on "xmltable"
@@ -1408,7 +1408,7 @@ SELECT xmltable.*
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop
- Output: "xmltable".id, "xmltable"._id, "xmltable".country_name, "xmltable".country_id, "xmltable".region_id, "xmltable".size, "xmltable".unit, "xmltable".premier_name
+ Project: "xmltable".id, "xmltable"._id, "xmltable".country_name, "xmltable".country_id, "xmltable".region_id, "xmltable".size, "xmltable".unit, "xmltable".premier_name
-> Seq Scan on public.xmldata
Output: xmldata.data
-> Table Function Scan on "xmltable"
--
2.23.0.162.gf1d4a28250
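The expected-output hunks above all make the same change: a node's target list is labeled "Project:" when the node actually evaluates a projection, and stays "Output:" when tuples pass through unchanged. As a quick illustration of that labeling rule (a toy Python model, not the patch's C code in `show_plan_tlist`; names are mine):

```python
def explain_tlist_line(columns, has_proj_info):
    """Model of the EXPLAIN tlist label choice: a node with projection
    info evaluates expressions and shows 'Project', a node that merely
    returns stored columns shows 'Output'."""
    label = "Project" if has_proj_info else "Output"
    return label + ": " + ", ".join(columns)

# A Seq Scan computing (b + 1) projects ...
line1 = explain_tlist_line(["base_tbl.a", "(base_tbl.b + 1)"], True)
# ... while a plain column fetch does not.
line2 = explain_tlist_line(["xmldata.data"], False)
```

This is why, e.g., the inner Seq Scan feeding a LockRows keeps "Output" (it fetches ctid and columns as stored) while the Subquery Scan above it switches to "Project".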
Attachment: v1-0004-Add-EXPLAIN-option-jit_details-showing-per-expres.patch (text/x-diff; charset=us-ascii)
From ce2b22cb4b1cb4d6ceaaeb8d2be29cf5f31af476 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Thu, 26 Sep 2019 12:42:11 -0700
Subject: [PATCH v1 04/12] Add EXPLAIN option jit_details showing
per-expression information about JIT.
This is useful both to understand where JIT is applied (and thus where
to improve), and to be able to write regression tests to verify that we
can JIT compile specific parts of a query.
Note that currently the printed function names will make it harder to
use this for regression tests - a followup commit will improve that
angle.
Author:
Reviewed-By:
Discussion: https://postgr.es/m/
Backpatch:
---
src/backend/commands/explain.c | 144 ++++++++++++++++++++++++++--
src/backend/executor/execExpr.c | 8 ++
src/backend/jit/llvm/llvmjit_expr.c | 9 ++
src/include/commands/explain.h | 1 +
src/include/executor/execExpr.h | 6 ++
src/include/nodes/execnodes.h | 5 +
6 files changed, 164 insertions(+), 9 deletions(-)
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index ea6b39d5abb..3ccb76bdfd1 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -19,6 +19,7 @@
#include "commands/defrem.h"
#include "commands/prepare.h"
#include "executor/nodeHash.h"
+#include "executor/execExpr.h"
#include "foreign/fdwapi.h"
#include "jit/jit.h"
#include "nodes/extensible.h"
@@ -69,6 +70,7 @@ static void show_plan_tlist(PlanState *planstate, List *ancestors,
static void show_expression(Node *node, ExprState *expr, const char *qlabel,
PlanState *planstate, List *ancestors,
bool useprefix, ExplainState *es);
+static void show_jit_expr_details(ExprState *expr, ExplainState *es);
static void show_qual(List *qual, ExprState *expr, const char *qlabel,
PlanState *planstate, List *ancestors,
bool useprefix, ExplainState *es);
@@ -170,6 +172,8 @@ ExplainQuery(ParseState *pstate, ExplainStmt *stmt, const char *queryString,
timing_set = true;
es->timing = defGetBoolean(opt);
}
+ else if (strcmp(opt->defname, "jit_details") == 0)
+ es->jit_details = defGetBoolean(opt);
else if (strcmp(opt->defname, "summary") == 0)
{
summary_set = true;
@@ -560,12 +564,11 @@ ExplainOnePlan(PlannedStmt *plannedstmt, IntoClause *into, ExplainState *es,
ExplainPrintTriggers(es, queryDesc);
/*
- * Print info about JITing. Tied to es->costs because we don't want to
- * display this in regression tests, as it'd cause output differences
- * depending on build options. Might want to separate that out from COSTS
- * at a later stage.
+ * Print info about JITing. Tied to es->costs unless jit_details is set,
+ * because we don't want to display this in regression tests, as it'd
+ * cause output differences depending on build options.
*/
- if (es->costs)
+ if (es->costs || es->jit_details)
ExplainPrintJITSummary(es, queryDesc);
/*
@@ -2140,10 +2143,40 @@ show_plan_tlist(PlanState *planstate, List *ancestors, ExplainState *es)
}
/* Print results */
- if (planstate->ps_ProjInfo)
+ if (!planstate->ps_ProjInfo)
+ ExplainPropertyList("Output", result, es);
+ else if (!es->jit_details)
ExplainPropertyList("Project", result, es);
+ else if (es->format != EXPLAIN_FORMAT_TEXT)
+ {
+ ExplainOpenGroup("Project", "Project", true, es);
+
+ ExplainPropertyList("Expr", result, es);
+
+ if (planstate->ps_ProjInfo)
+ {
+ ExprState *expr = &planstate->ps_ProjInfo->pi_state;
+
+ show_jit_expr_details(expr, es);
+ }
+ ExplainCloseGroup("Project", "Project", true, es);
+ }
else
- ExplainPropertyList("Output", result, es);
+ {
+ ExplainPropertyList("Project", result, es);
+
+ if (planstate->ps_ProjInfo)
+ {
+ ExprState *expr = &planstate->ps_ProjInfo->pi_state;
+
+ /* XXX: remove \n, probably instead just open-code ExplainPropertyList */
+ es->str->len--;
+
+ appendStringInfoString(es->str, "; ");
+ show_jit_expr_details(expr, es);
+ appendStringInfoChar(es->str, '\n');
+ }
+ }
}
/*
@@ -2167,8 +2200,101 @@ show_expression(Node *node, ExprState *expr, const char *qlabel,
/* Deparse the expression */
exprstr = deparse_expression(node, context, useprefix, false);
- /* And add to es->str */
- ExplainPropertyText(qlabel, exprstr, es);
+ if (!es->jit_details)
+ ExplainPropertyText(qlabel, exprstr, es);
+ else if (es->format != EXPLAIN_FORMAT_TEXT)
+ {
+ ExplainOpenGroup(qlabel, qlabel, true, es);
+
+ ExplainPropertyText("Expr", exprstr, es);
+
+ if (expr != NULL)
+ show_jit_expr_details(expr, es);
+ ExplainCloseGroup(qlabel, qlabel, true, es);
+ }
+ else
+ {
+ appendStringInfoSpaces(es->str, es->indent * 2);
+ appendStringInfo(es->str, "%s: %s", qlabel, exprstr);
+
+ if (expr != NULL)
+ {
+ appendStringInfoString(es->str, "; ");
+
+ show_jit_expr_details(expr, es);
+ }
+
+ appendStringInfoChar(es->str, '\n');
+ }
+}
+
+static void
+show_jit_expr_details(ExprState *expr, ExplainState *es)
+{
+ if (expr == NULL)
+ return;
+
+ Assert(es->jit_details);
+
+ if (es->format == EXPLAIN_FORMAT_TEXT)
+ {
+ if (expr->flags & EEO_FLAG_JIT_EXPR)
+ appendStringInfo(es->str, "JIT-Expr: %s", expr->expr_funcname);
+ else
+ appendStringInfoString(es->str, "JIT-Expr: false");
+
+ /*
+ * Either show the function name for tuple deforming quoted in "", or
+ * false if JIT compilation was performed, but no code was generated
+ * for deforming the respective attribute.
+ */
+
+ if (expr->scan_funcname)
+ appendStringInfo(es->str, ", JIT-Deform-Scan: %s", expr->scan_funcname);
+ else if (expr->flags & EEO_FLAG_JIT_EXPR &&
+ expr->flags & EEO_FLAG_DEFORM_SCAN)
+ appendStringInfo(es->str, ", JIT-Deform-Scan: false");
+
+ if (expr->outer_funcname)
+ appendStringInfo(es->str, ", JIT-Deform-Outer: %s", expr->outer_funcname);
+ else if (expr->flags & EEO_FLAG_JIT_EXPR &&
+ expr->flags & EEO_FLAG_DEFORM_OUTER)
+ appendStringInfo(es->str, ", JIT-Deform-Outer: false");
+
+ if (expr->inner_funcname)
+ appendStringInfo(es->str, ", JIT-Deform-Inner: %s", expr->inner_funcname);
+ else if (expr->flags & EEO_FLAG_JIT_EXPR &&
+ expr->flags & (EEO_FLAG_DEFORM_INNER))
+ appendStringInfo(es->str, ", JIT-Deform-Inner: false");
+ }
+ else
+ {
+ if (expr->flags & EEO_FLAG_JIT_EXPR)
+ ExplainPropertyText("JIT-Expr", expr->expr_funcname, es);
+ else
+ ExplainPropertyBool("JIT-Expr", false, es);
+
+ if (expr->scan_funcname)
+ ExplainProperty("JIT-Deform-Scan", NULL, expr->scan_funcname, false, es);
+ else if (expr->flags & EEO_FLAG_DEFORM_SCAN)
+ ExplainProperty("JIT-Deform-Scan", NULL, "false", true, es);
+ else
+ ExplainProperty("JIT-Deform-Scan", NULL, "null", true, es);
+
+ if (expr->outer_funcname)
+ ExplainProperty("JIT-Deform-Outer", NULL, expr->outer_funcname, false, es);
+ else if (expr->flags & EEO_FLAG_DEFORM_OUTER)
+ ExplainProperty("JIT-Deform-Outer", NULL, "false", true, es);
+ else
+ ExplainProperty("JIT-Deform-Outer", NULL, "null", true, es);
+
+ if (expr->inner_funcname)
+ ExplainProperty("JIT-Deform-Inner", NULL, expr->inner_funcname, false, es);
+ else if (expr->flags & EEO_FLAG_DEFORM_INNER)
+ ExplainProperty("JIT-Deform-Inner", NULL, "false", true, es);
+ else
+ ExplainProperty("JIT-Deform-Inner", NULL, "null", true, es);
+ }
}
/*
diff --git a/src/backend/executor/execExpr.c b/src/backend/executor/execExpr.c
index 39442f8866f..2c792d59b58 100644
--- a/src/backend/executor/execExpr.c
+++ b/src/backend/executor/execExpr.c
@@ -2372,6 +2372,7 @@ ExecComputeSlotInfo(ExprState *state, ExprEvalStep *op)
TupleDesc desc = NULL;
const TupleTableSlotOps *tts_ops = NULL;
bool isfixed = false;
+ ExprEvalOp opcode = op->opcode;
if (op->d.fetch.known_desc != NULL)
{
@@ -2444,6 +2445,13 @@ ExecComputeSlotInfo(ExprState *state, ExprEvalStep *op)
op->d.fetch.kind = NULL;
op->d.fetch.known_desc = NULL;
}
+
+ if (opcode == EEOP_INNER_FETCHSOME)
+ state->flags |= EEO_FLAG_DEFORM_INNER;
+ else if (opcode == EEOP_OUTER_FETCHSOME)
+ state->flags |= EEO_FLAG_DEFORM_OUTER;
+ else if (opcode == EEOP_SCAN_FETCHSOME)
+ state->flags |= EEO_FLAG_DEFORM_SCAN;
}
/*
diff --git a/src/backend/jit/llvm/llvmjit_expr.c b/src/backend/jit/llvm/llvmjit_expr.c
index 7efc8f23ee3..d1d07751698 100644
--- a/src/backend/jit/llvm/llvmjit_expr.c
+++ b/src/backend/jit/llvm/llvmjit_expr.c
@@ -145,6 +145,7 @@ llvm_compile_expr(ExprState *state)
funcname = llvm_expand_funcname(context, "evalexpr");
context->base.instr.created_expr_functions++;
+ state->expr_funcname = funcname;
/* Create the signature and function */
{
@@ -336,6 +337,13 @@ llvm_compile_expr(ExprState *state)
LLVMBuildCall(b, l_jit_deform,
params, lengthof(params), "");
+
+ if (opcode == EEOP_INNER_FETCHSOME)
+ state->inner_funcname = pstrdup(LLVMGetValueName(l_jit_deform));
+ else if (opcode == EEOP_OUTER_FETCHSOME)
+ state->outer_funcname = pstrdup(LLVMGetValueName(l_jit_deform));
+ else
+ state->scan_funcname = pstrdup(LLVMGetValueName(l_jit_deform));
}
else
{
@@ -2462,6 +2470,7 @@ llvm_compile_expr(ExprState *state)
INSTR_TIME_SET_CURRENT(endtime);
INSTR_TIME_ACCUM_DIFF(context->base.instr.generation_counter,
endtime, starttime);
+ state->flags |= EEO_FLAG_JIT_EXPR;
return true;
}
diff --git a/src/include/commands/explain.h b/src/include/commands/explain.h
index 8639891c164..5dbbeb3a3c3 100644
--- a/src/include/commands/explain.h
+++ b/src/include/commands/explain.h
@@ -36,6 +36,7 @@ typedef struct ExplainState
bool timing; /* print detailed node timing */
bool summary; /* print total planning and execution timing */
bool settings; /* print modified settings */
+ bool jit_details; /* print per-expression details about JIT */
ExplainFormat format; /* output format */
/* state for output formatting --- not reset for each new plan tree */
int indent; /* current indentation level */
diff --git a/src/include/executor/execExpr.h b/src/include/executor/execExpr.h
index d21dbead0a2..5ebe50df888 100644
--- a/src/include/executor/execExpr.h
+++ b/src/include/executor/execExpr.h
@@ -26,6 +26,12 @@ struct SubscriptingRefState;
#define EEO_FLAG_INTERPRETER_INITIALIZED (1 << 1)
/* jump-threading is in use */
#define EEO_FLAG_DIRECT_THREADED (1 << 2)
+/* is expression jit compiled */
+#define EEO_FLAG_JIT_EXPR (1 << 3)
+/* does expression require tuple deforming */
+#define EEO_FLAG_DEFORM_INNER (1 << 4)
+#define EEO_FLAG_DEFORM_OUTER (1 << 5)
+#define EEO_FLAG_DEFORM_SCAN (1 << 6)
/* Typical API for out-of-line evaluation subroutines */
typedef void (*ExecEvalSubroutine) (ExprState *state,
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 44f76082e99..d0b290fb342 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -113,6 +113,11 @@ typedef struct ExprState
Datum *innermost_domainval;
bool *innermost_domainnull;
+
+ const char *expr_funcname;
+ const char *outer_funcname;
+ const char *inner_funcname;
+ const char *scan_funcname;
} ExprState;
--
2.23.0.162.gf1d4a28250
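In text format, patch 0004's `show_jit_expr_details` appends a suffix like "JIT-Expr: evalexpr_0_0, JIT-Deform-Scan: false" to each expression line: the JIT function name (or "false" when the expression wasn't compiled), and, for each slot kind the expression needs to deform, either the deform function's name or "false" when JIT ran but emitted no deform code. A hedged Python sketch of that decision logic (flag bits mirrored from the patch's execExpr.h additions; function names are illustrative):

```python
# Flag bits as added to execExpr.h by the patch.
JIT_EXPR     = 1 << 3  # EEO_FLAG_JIT_EXPR
DEFORM_INNER = 1 << 4  # EEO_FLAG_DEFORM_INNER
DEFORM_OUTER = 1 << 5  # EEO_FLAG_DEFORM_OUTER
DEFORM_SCAN  = 1 << 6  # EEO_FLAG_DEFORM_SCAN

def jit_details_text(flags, expr_fn=None, scan_fn=None,
                     outer_fn=None, inner_fn=None):
    """Model of show_jit_expr_details() in text format: the JIT'ed
    expression function name (or 'false'), then per slot kind the
    deform function name, or 'false' when the expression was JIT
    compiled but no deform code was generated for that slot."""
    parts = []
    if flags & JIT_EXPR:
        parts.append("JIT-Expr: %s" % expr_fn)
    else:
        parts.append("JIT-Expr: false")
    for label, fn, bit in (("Scan", scan_fn, DEFORM_SCAN),
                           ("Outer", outer_fn, DEFORM_OUTER),
                           ("Inner", inner_fn, DEFORM_INNER)):
        if fn:
            parts.append("JIT-Deform-%s: %s" % (label, fn))
        elif flags & JIT_EXPR and flags & bit:
            parts.append("JIT-Deform-%s: false" % label)
    return ", ".join(parts)
```

Note the asymmetry the code preserves: a slot kind is only reported at all if the expression needs deforming for it (the DEFORM_* flag), which is how the output distinguishes "no deforming needed" from "deforming needed but not JIT compiled".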
Attachment: v1-0005-jit-explain-remove-backend-lifetime-module-count-.patch (text/x-diff; charset=us-ascii)
From f8bfa8fcd63ccae2b3d0852f614f9fdf19f989e6 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Thu, 26 Sep 2019 14:05:08 -0700
Subject: [PATCH v1 05/12] jit: explain: remove backend lifetime module count
from function name.
Also expand the function name to include which module the function is
in - without that it's harder to analyze which functions were emitted
separately (a performance concern).
Author:
Reviewed-By:
Discussion: https://postgr.es/m/
Backpatch:
---
src/backend/commands/explain.c | 65 +++++++++++++++++++++++++++++-----
src/backend/jit/llvm/llvmjit.c | 18 +++++++---
src/include/jit/llvmjit.h | 5 ++-
3 files changed, 75 insertions(+), 13 deletions(-)
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 3ccb76bdfd1..02455865d9f 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -2228,6 +2228,43 @@ show_expression(Node *node, ExprState *expr, const char *qlabel,
}
}
+/*
+ * To make JIT explain output reproducible, remove the module generation from
+ * function names. That makes it a bit harder to correlate with profiles etc,
+ * but reproducibility is more important.
+ */
+static char *
+jit_funcname_for_display(const char *funcname)
+{
+ int func_counter; /* nth function in query */
+ size_t mod_num; /* nth module in query */
+ size_t mod_generation; /* nth module in backend */
+ int basename_end;
+ int matchcount = 0;
+
+ /*
+ * The pattern we need to match, see llvm_expand_funcname, is
+ * "%s_%zu_%d_mod_%zu". Find the fourth _ from the end, so a _ in the name
+ * is OK.
+ */
+ for (basename_end = strlen(funcname); basename_end >= 0; basename_end--)
+ {
+ if (funcname[basename_end] == '_' && ++matchcount == 4)
+ break;
+ }
+
+ /* couldn't parse, bail out */
+ if (matchcount != 4)
+ return pstrdup(funcname);
+
+ /* couldn't parse, bail out */
+ if (sscanf(funcname + basename_end, "_%zu_%d_mod_%zu",
+ &mod_num, &func_counter, &mod_generation) != 3)
+ return pstrdup(funcname);
+
+ return psprintf("%s_%zu_%d", pnstrdup(funcname, basename_end), mod_num, func_counter);
+}
+
static void
show_jit_expr_details(ExprState *expr, ExplainState *es)
{
@@ -2239,7 +2276,8 @@ show_jit_expr_details(ExprState *expr, ExplainState *es)
if (es->format == EXPLAIN_FORMAT_TEXT)
{
if (expr->flags & EEO_FLAG_JIT_EXPR)
- appendStringInfo(es->str, "JIT-Expr: %s", expr->expr_funcname);
+ appendStringInfo(es->str, "JIT-Expr: %s",
+ jit_funcname_for_display(expr->expr_funcname));
else
appendStringInfoString(es->str, "JIT-Expr: false");
@@ -2250,19 +2288,22 @@ show_jit_expr_details(ExprState *expr, ExplainState *es)
*/
if (expr->scan_funcname)
- appendStringInfo(es->str, ", JIT-Deform-Scan: %s", expr->scan_funcname);
+ appendStringInfo(es->str, ", JIT-Deform-Scan: %s",
+ jit_funcname_for_display(expr->scan_funcname));
else if (expr->flags & EEO_FLAG_JIT_EXPR &&
expr->flags & EEO_FLAG_DEFORM_SCAN)
appendStringInfo(es->str, ", JIT-Deform-Scan: false");
if (expr->outer_funcname)
- appendStringInfo(es->str, ", JIT-Deform-Outer: %s", expr->outer_funcname);
+ appendStringInfo(es->str, ", JIT-Deform-Outer: %s",
+ jit_funcname_for_display(expr->outer_funcname));
else if (expr->flags & EEO_FLAG_JIT_EXPR &&
expr->flags & EEO_FLAG_DEFORM_OUTER)
appendStringInfo(es->str, ", JIT-Deform-Outer: false");
if (expr->inner_funcname)
- appendStringInfo(es->str, ", JIT-Deform-Inner: %s", expr->inner_funcname);
+ appendStringInfo(es->str, ", JIT-Deform-Inner: %s",
+ jit_funcname_for_display(expr->inner_funcname));
else if (expr->flags & EEO_FLAG_JIT_EXPR &&
expr->flags & (EEO_FLAG_DEFORM_INNER))
appendStringInfo(es->str, ", JIT-Deform-Inner: false");
@@ -2270,26 +2311,34 @@ show_jit_expr_details(ExprState *expr, ExplainState *es)
else
{
if (expr->flags & EEO_FLAG_JIT_EXPR)
- ExplainPropertyText("JIT-Expr", expr->expr_funcname, es);
+ ExplainPropertyText("JIT-Expr",
+ jit_funcname_for_display(expr->expr_funcname),
+ es);
else
ExplainPropertyBool("JIT-Expr", false, es);
if (expr->scan_funcname)
- ExplainProperty("JIT-Deform-Scan", NULL, expr->scan_funcname, false, es);
+ ExplainProperty("JIT-Deform-Scan", NULL,
+ jit_funcname_for_display(expr->scan_funcname),
+ false, es);
else if (expr->flags & EEO_FLAG_DEFORM_SCAN)
ExplainProperty("JIT-Deform-Scan", NULL, "false", true, es);
else
ExplainProperty("JIT-Deform-Scan", NULL, "null", true, es);
if (expr->outer_funcname)
- ExplainProperty("JIT-Deform-Outer", NULL, expr->outer_funcname, false, es);
+ ExplainProperty("JIT-Deform-Outer", NULL,
+ jit_funcname_for_display(expr->outer_funcname),
+ false, es);
else if (expr->flags & EEO_FLAG_DEFORM_OUTER)
ExplainProperty("JIT-Deform-Outer", NULL, "false", true, es);
else
ExplainProperty("JIT-Deform-Outer", NULL, "null", true, es);
if (expr->inner_funcname)
- ExplainProperty("JIT-Deform-Inner", NULL, expr->inner_funcname, false, es);
+ ExplainProperty("JIT-Deform-Inner", NULL,
+ jit_funcname_for_display(expr->inner_funcname),
+ false, es);
else if (expr->flags & EEO_FLAG_DEFORM_INNER)
ExplainProperty("JIT-Deform-Inner", NULL, "false", true, es);
else
diff --git a/src/backend/jit/llvm/llvmjit.c b/src/backend/jit/llvm/llvmjit.c
index 5489e118041..177a00f3826 100644
--- a/src/backend/jit/llvm/llvmjit.c
+++ b/src/backend/jit/llvm/llvmjit.c
@@ -227,6 +227,8 @@ llvm_mutable_module(LLVMJitContext *context)
char *
llvm_expand_funcname(struct LLVMJitContext *context, const char *basename)
{
+ char *funcname;
+
Assert(context->module != NULL);
context->base.instr.created_functions++;
@@ -234,11 +236,19 @@ llvm_expand_funcname(struct LLVMJitContext *context, const char *basename)
/*
* Previously we used dots to separate, but turns out some tools, e.g.
* GDB, don't like that and truncate name.
+ *
+ * Append the backend-lifetime module count to the end, so it's easier for
+ * humans and machines to compare the generated function names across
+ * queries, the prefix will be the same from query execution to query
+ * execution.
*/
- return psprintf("%s_%zu_%d",
- basename,
- context->module_generation,
- context->counter++);
+ funcname = psprintf("%s_%zu_%d_mod_%zu",
+ basename,
+ context->base.instr.created_modules - 1,
+ context->counter++,
+ context->module_generation);
+
+ return funcname;
}
/*
diff --git a/src/include/jit/llvmjit.h b/src/include/jit/llvmjit.h
index 6178864b2e6..e45ff99194f 100644
--- a/src/include/jit/llvmjit.h
+++ b/src/include/jit/llvmjit.h
@@ -41,7 +41,10 @@ typedef struct LLVMJitContext
{
JitContext base;
- /* number of modules created */
+ /*
+ * llvm_generation when ->module was created, monotonically increasing
+ * within the lifetime of a backend.
+ */
size_t module_generation;
/* current, "open for write", module */
--
2.23.0.162.gf1d4a28250
Hi,
On 2019-09-27 00:20:53 -0700, Andres Freund wrote:
Unfortunately I found a performance regression for JITed query
compilation introduced in 12, compared to 11. Fixed in one of the
attached patches (v1-0009-Fix-determination-when-tuple-deforming-can-be-JIT.patch
- which needs a better commit message).
The first question is when to push that fix. I'm inclined to just do so
now - as we still do JITed tuple deforming in most cases, as well as
doing so in 11 in the places this patch fixes, the risk of that seems
low. But I can also see an argument for waiting until after 12.0.
Since nobody opined, I now have pushed that, and the other fix mentioned
later in that email.
I'd appreciate comments on the rest of the email, it's clear that we
need to improve the test infrastructure here. And also the explain
output for grouping sets...
Greetings,
Andres Freund
But that's pretty crappy, because it means that the 'shape' of the output depends on the jit_details option.
Yeah, that would be hard to work with. What about adding it as a sibling group?
"Filter": "(lineitem.l_shipdate <= '1998-09-18 00:00:00'::timestamp
without time zone)",
"Filter JIT": {
"Expr": "evalexpr_0_2",
"Deform Scan": "deform_0_3",
"Deform Outer": null,
"Deform Inner": null
}
Also not that pretty, but at least it's easier to work with (I also
changed the dashes to spaces since that's what the rest of EXPLAIN is
doing as a matter of style).
But the compat break due to that change is not small- perhaps we could instead mark that in another way?
We could add a "Projects" boolean key instead? Of course that's more
awkward in text mode. Maybe compat break is less of an issue in text
mode and we can treat this differently?
I'm not sure that 'TRANS' is the best placeholder for the transition value here. Maybe $TRANS would be clearer?
+1, I think the `$` makes it clearer that this is not a literal expression.
For HashJoin/Hash I've added 'Outer Hash Key' and 'Hash Key' for each key, but only in verbose mode.
That reads pretty well to me. What does the structured output look like?
On Fri, Sep 27, 2019 at 3:21 AM Andres Freund <andres@anarazel.de> wrote:
- JIT-Expr: whether the expression was JIT compiled (might e.g. not be
the case because no parent was provided)
- JIT-Deform-{Scan,Outer,Inner}: whether necessary, and whether JIT accelerated.
I don't like these names much, but ...
For the deform cases I chose to display
a) the function name if JIT compiled
b) "false" if the expression is JIT compiled, deforming is
necessary, but deforming is not JIT compiled (e.g. because the slot type
wasn't fixed)
c) "null" if not necessary, with that being omitted in text mode.
I mean, why not just omit in all modes if it's not necessary? I don't
see that making the information we produce randomly inconsistent
between modes is buying us anything.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Hi,
On 2019-10-28 15:05:01 -0400, Robert Haas wrote:
On Fri, Sep 27, 2019 at 3:21 AM Andres Freund <andres@anarazel.de> wrote:
- JIT-Expr: whether the expression was JIT compiled (might e.g. not be
the case because no parent was provided)
- JIT-Deform-{Scan,Outer,Inner}: whether necessary, and whether JIT accelerated.
I don't like these names much, but ...
For the deform cases I chose to display
a) the function name if JIT compiled
b) "false" if the expression is JIT compiled, deforming is
necessary, but deforming is not JIT compiled (e.g. because the slot type
wasn't fixed)
c) "null" if not necessary, with that being omitted in text mode.
I mean, why not just omit in all modes if it's not necessary? I don't
see that making the information we produce randomly inconsistent
between modes is buying us anything.
Because that's the normal way to represent something non-existing for
formats like json? There's a lot of information we show always for !text
format, even if not really applicable to the context (e.g. Triggers for
select statements). I think there's an argument to made to deviate in
this case, but I don't think it's obvious.
Abstract formatting reasons aside, it's actually useful to see where we
know we're dealing with tuples that don't need to be deformed and thus
overhead due to that cannot be relevant. Not sure if there's sufficient
consumers for that, but ... We e.g. should verify that the "none"
doesn't suddenly vanish, because we broke the information that let us
infer that we don't need tuple deforming - and that's easier to
understand if there's an explicit field, rather than reasoning from
absence. IMO.
Greetings,
Andres Freund
Hi,
On 2019-10-28 11:27:02 -0700, Maciek Sakrejda wrote:
But that's pretty crappy, because it means that the 'shape' of the output depends on the jit_details option.
Yeah, that would be hard to work with. What about adding it as a sibling group?
"Filter": "(lineitem.l_shipdate <= '1998-09-18 00:00:00'::timestamp
without time zone)",
"Filter JIT": {
"Expr": "evalexpr_0_2",
"Deform Scan": "deform_0_3",
"Deform Outer": null,
"Deform Inner": null
}
Also not that pretty, but at least it's easier to work with
What I dislike about that is that it basically again is introducing
something that requires either pattern matching on key names (i.e. a key
of '(.*) JIT' is one that has information about JIT, and the associated
expresssion is in key $1), or knowing all the potential keys an
expression could be in.
(I also
changed the dashes to spaces since that's what the rest of EXPLAIN is
doing as a matter of style).
That makes sense.
But the compat break due to that change is not small- perhaps we could instead mark that in another way?
We could add a "Projects" boolean key instead? Of course that's more
awkward in text mode. Maybe compat break is less of an issue in text
mode and we can treat this differently?
Yea, I think projects as a key for each node makes sense. For text mode
I guess we could just display the key on the same line when es->verbose
is set? Still not sure if not just changing the output is the better
approach.
Another alternative would be to just remove the 'Output' line when a
node doesn't project - it can't really carry meaning in those cases
anyway?
For HashJoin/Hash I've added 'Outer Hash Key' and 'Hash Key' for each key, but only in verbose mode.
That reads pretty well to me. What does the structured output look
like?
Just a new "Outer Hash Key" for the HashJoin node, and "Hash Key" for
the Hash node. Perhaps the latter should be 'Inner Hash Key' - while
that's currently a bit confusing because of Hash's subtree being the
outer tree, it'd reduce changes when merging Hash into HashJoin [1], and
it's clearer when looking at the HashJoin node itself.
Here's an example query:
EXPLAIN (VERBOSE, FORMAT JSON, COSTS OFF) SELECT pc.oid::regclass, pc.relkind, pc.relfilenode, pc_t.oid::regclass as toast_rel, pc_t.relfilenode as toast_relfilenode FROM pg_class pc LEFT OUTER JOIN pg_class pc_t ON (pc.reltoastrelid = pc_t.oid);
[
{
"Plan": {
"Node Type": "Hash Join",
"Parallel Aware": false,
"Join Type": "Left",
"Project": ["(pc.oid)::regclass", "pc.relkind", "pc.relfilenode", "(pc_t.oid)::regclass", "pc_t.relfilenode"],
"Inner Unique": true,
"Hash Cond": "(pc.reltoastrelid = pc_t.oid)",
"Outer Hash Key": "pc.reltoastrelid",
"Plans": [
{
"Node Type": "Seq Scan",
"Parent Relationship": "Outer",
"Parallel Aware": false,
"Relation Name": "pg_class",
"Schema": "pg_catalog",
"Alias": "pc",
"Output": ["pc.oid", "pc.relname", "pc.relnamespace", "pc.reltype", "pc.reloftype", "pc.relowner", "pc.relam", "pc.relfilenode", "pc.reltablespace", "pc.relpages", "pc.reltuples", "pc.relallvisible", "pc.reltoastrelid", "pc.relhasindex", "pc.relisshared", "pc.relpersistence", "pc.relkind", "pc.relnatts", "pc.relchecks", "pc.relhasrules", "pc.relhastriggers", "pc.relhassubclass", "pc.relrowsecurity", "pc.relforcerowsecurity", "pc.relispopulated", "pc.relreplident", "pc.relispartition", "pc.relrewrite", "pc.relfrozenxid", "pc.relminmxid", "pc.relacl", "pc.reloptions", "pc.relpartbound"]
},
{
"Node Type": "Hash",
"Parent Relationship": "Inner",
"Parallel Aware": false,
"Output": ["pc_t.oid", "pc_t.relfilenode"],
"Hash Key": "pc_t.oid",
"Plans": [
{
"Node Type": "Seq Scan",
"Parent Relationship": "Outer",
"Parallel Aware": false,
"Relation Name": "pg_class",
"Schema": "pg_catalog",
"Alias": "pc_t",
"Project": ["pc_t.oid", "pc_t.relfilenode"]
}
]
}
]
}
}
]
and in plain text:
Hash Left Join
Project: (pc.oid)::regclass, pc.relkind, pc.relfilenode, (pc_t.oid)::regclass, pc_t.relfilenode
Inner Unique: true
Hash Cond: (pc.reltoastrelid = pc_t.oid)
Outer Hash Key: pc.reltoastrelid
-> Seq Scan on pg_catalog.pg_class pc
Output: pc.oid, pc.relname, pc.relnamespace, pc.reltype, pc.reloftype, pc.relowner, pc.relam, pc.relfilenode, pc.reltablespace, pc.relpages, pc.reltuples, pc.relallvisible, pc.reltoastrelid, pc.relhasindex, pc.relisshared, pc.relpersistence, pc.relkind, pc.relnatts, pc.relchecks, pc.relhasrules, pc.relhastriggers, pc.relhassubclass, pc.relrowsecurity, pc.relforcerowsecurity, pc.relispopulated, pc.relreplident, pc.relispartition, pc.relrewrite, pc.relfrozenxid, pc.relminmxid, pc.relacl, pc.reloptions, pc.relpartbound
-> Hash
Output: pc_t.oid, pc_t.relfilenode
Hash Key: pc_t.oid
-> Seq Scan on pg_catalog.pg_class pc_t
Project: pc_t.oid, pc_t.relfilenode
which also serves as an example about my previous point about
potentially just hiding the 'Output: ' bit when no projection is done:
It's very verbose, without adding much, while hiding that there's
actually nothing being done at the SeqScan level.
I've attached a rebased version of the patcheset. No changes except for
a minor conflict, and removing some already applied bugfixes.
Greetings,
Andres Freund
[1]: /messages/by-id/20191028231526.wcnwag7lllkra4qt@alap3.anarazel.de
Attachments:
v2-0001-jit-Instrument-function-purpose-separately-add-tr.patch (text/x-diff; charset=us-ascii)
From 482f67d9909d81a742b5707eb0a1a34fe4fb5d04 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Thu, 26 Sep 2019 11:44:53 -0700
Subject: [PATCH v2 1/8] jit: Instrument function purpose separately, add
tracking of modules.
Author:
Reviewed-By:
Discussion: https://postgr.es/m/
Backpatch:
---
src/backend/commands/explain.c | 24 +++++++++++++++++++++++-
src/backend/jit/jit.c | 3 +++
src/backend/jit/llvm/llvmjit.c | 2 ++
src/backend/jit/llvm/llvmjit_deform.c | 1 +
src/backend/jit/llvm/llvmjit_expr.c | 1 +
src/include/jit/jit.h | 11 ++++++++++-
6 files changed, 40 insertions(+), 2 deletions(-)
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 62fb3434a32..ef65035bfba 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -825,7 +825,26 @@ ExplainPrintJIT(ExplainState *es, int jit_flags,
appendStringInfoString(es->str, "JIT:\n");
es->indent += 1;
- ExplainPropertyInteger("Functions", NULL, ji->created_functions, es);
+ /* having to emit code more than once has performance consequences */
+ if (ji->created_modules > 1)
+ ExplainPropertyInteger("Modules", NULL, ji->created_modules, es);
+
+ appendStringInfoSpaces(es->str, es->indent * 2);
+ appendStringInfo(es->str, "Functions: %zu", ji->created_functions);
+ if (ji->created_expr_functions > 0 || ji->created_deform_functions)
+ {
+ appendStringInfoString(es->str, " (");
+ if (ji->created_expr_functions)
+ {
+ appendStringInfo(es->str, "%zu for expression evaluation", ji->created_expr_functions);
+ if (ji->created_deform_functions)
+ appendStringInfoString(es->str, ", ");
+ }
+ if (ji->created_deform_functions)
+ appendStringInfo(es->str, "%zu for tuple deforming", ji->created_deform_functions);
+ appendStringInfoChar(es->str, ')');
+ }
+ appendStringInfoChar(es->str, '\n');
appendStringInfoSpaces(es->str, es->indent * 2);
appendStringInfo(es->str, "Options: %s %s, %s %s, %s %s, %s %s\n",
@@ -851,7 +870,10 @@ ExplainPrintJIT(ExplainState *es, int jit_flags,
else
{
ExplainPropertyInteger("Worker Number", NULL, worker_num, es);
+ ExplainPropertyInteger("Modules", NULL, ji->created_modules, es);
ExplainPropertyInteger("Functions", NULL, ji->created_functions, es);
+ ExplainPropertyInteger("Expression Functions", NULL, ji->created_expr_functions, es);
+ ExplainPropertyInteger("Deforming Functions", NULL, ji->created_deform_functions, es);
ExplainOpenGroup("Options", "Options", true, es);
ExplainPropertyBool("Inlining", jit_flags & PGJIT_INLINE, es);
diff --git a/src/backend/jit/jit.c b/src/backend/jit/jit.c
index 43e65b1a543..63c709002d8 100644
--- a/src/backend/jit/jit.c
+++ b/src/backend/jit/jit.c
@@ -186,7 +186,10 @@ jit_compile_expr(struct ExprState *state)
void
InstrJitAgg(JitInstrumentation *dst, JitInstrumentation *add)
{
+ dst->created_modules += add->created_modules;
dst->created_functions += add->created_functions;
+ dst->created_expr_functions += add->created_expr_functions;
+ dst->created_deform_functions += add->created_deform_functions;
INSTR_TIME_ADD(dst->generation_counter, add->generation_counter);
INSTR_TIME_ADD(dst->inlining_counter, add->inlining_counter);
INSTR_TIME_ADD(dst->optimization_counter, add->optimization_counter);
diff --git a/src/backend/jit/llvm/llvmjit.c b/src/backend/jit/llvm/llvmjit.c
index 82c4afb7011..5489e118041 100644
--- a/src/backend/jit/llvm/llvmjit.c
+++ b/src/backend/jit/llvm/llvmjit.c
@@ -212,6 +212,8 @@ llvm_mutable_module(LLVMJitContext *context)
context->module = LLVMModuleCreateWithName("pg");
LLVMSetTarget(context->module, llvm_triple);
LLVMSetDataLayout(context->module, llvm_layout);
+
+ context->base.instr.created_modules++;
}
return context->module;
diff --git a/src/backend/jit/llvm/llvmjit_deform.c b/src/backend/jit/llvm/llvmjit_deform.c
index 835aea83e97..80a85858524 100644
--- a/src/backend/jit/llvm/llvmjit_deform.c
+++ b/src/backend/jit/llvm/llvmjit_deform.c
@@ -101,6 +101,7 @@ slot_compile_deform(LLVMJitContext *context, TupleDesc desc,
mod = llvm_mutable_module(context);
funcname = llvm_expand_funcname(context, "deform");
+ context->base.instr.created_deform_functions++;
/*
* Check which columns have to exist, so we don't have to check the row's
diff --git a/src/backend/jit/llvm/llvmjit_expr.c b/src/backend/jit/llvm/llvmjit_expr.c
index d09324637b9..4ba8c78cbc9 100644
--- a/src/backend/jit/llvm/llvmjit_expr.c
+++ b/src/backend/jit/llvm/llvmjit_expr.c
@@ -144,6 +144,7 @@ llvm_compile_expr(ExprState *state)
b = LLVMCreateBuilder();
funcname = llvm_expand_funcname(context, "evalexpr");
+ context->base.instr.created_expr_functions++;
/* Create the signature and function */
{
diff --git a/src/include/jit/jit.h b/src/include/jit/jit.h
index d879cef20f3..668f965cb0a 100644
--- a/src/include/jit/jit.h
+++ b/src/include/jit/jit.h
@@ -26,9 +26,18 @@
typedef struct JitInstrumentation
{
- /* number of emitted functions */
+ /* number of modules (i.e. separate optimize / link cycles) created */
+ size_t created_modules;
+
+ /* number of functions generated */
size_t created_functions;
+ /* number of expression evaluation functions generated */
+ size_t created_expr_functions;
+
+ /* number of tuple deforming functions generated */
+ size_t created_deform_functions;
+
/* accumulated time to generate code */
instr_time generation_counter;
--
2.23.0.385.gbc12974a89
v2-0002-Refactor-explain.c-to-pass-ExprState-down-to-show.patch (text/x-diff; charset=us-ascii)
From 1b6e757aa57c8e71e611cf2478bcaded7266d1f2 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Thu, 26 Sep 2019 11:56:53 -0700
Subject: [PATCH v2 2/8] Refactor explain.c to pass ExprState down to
show_expression() where available.
This will, in a later patch, allow to display per-expression
information about JIT compilation.
Author:
Reviewed-By:
Discussion: https://postgr.es/m/
Backpatch:
---
src/backend/commands/explain.c | 105 ++++++++++++++++++++++-----------
1 file changed, 69 insertions(+), 36 deletions(-)
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index ef65035bfba..48283ba82a6 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -66,16 +66,16 @@ static void ExplainNode(PlanState *planstate, List *ancestors,
ExplainState *es);
static void show_plan_tlist(PlanState *planstate, List *ancestors,
ExplainState *es);
-static void show_expression(Node *node, const char *qlabel,
+static void show_expression(Node *node, ExprState *expr, const char *qlabel,
PlanState *planstate, List *ancestors,
bool useprefix, ExplainState *es);
-static void show_qual(List *qual, const char *qlabel,
+static void show_qual(List *qual, ExprState *expr, const char *qlabel,
PlanState *planstate, List *ancestors,
bool useprefix, ExplainState *es);
-static void show_scan_qual(List *qual, const char *qlabel,
+static void show_scan_qual(List *qual, ExprState *expr, const char *qlabel,
PlanState *planstate, List *ancestors,
ExplainState *es);
-static void show_upper_qual(List *qual, const char *qlabel,
+static void show_upper_qual(List *qual, ExprState *expr, const char *qlabel,
PlanState *planstate, List *ancestors,
ExplainState *es);
static void show_sort_keys(SortState *sortstate, List *ancestors,
@@ -1605,26 +1605,31 @@ ExplainNode(PlanState *planstate, List *ancestors,
{
case T_IndexScan:
show_scan_qual(((IndexScan *) plan)->indexqualorig,
+ ((IndexScanState *) planstate)->indexqualorig,
"Index Cond", planstate, ancestors, es);
if (((IndexScan *) plan)->indexqualorig)
show_instrumentation_count("Rows Removed by Index Recheck", 2,
planstate, es);
show_scan_qual(((IndexScan *) plan)->indexorderbyorig,
+ NULL,
"Order By", planstate, ancestors, es);
- show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_scan_qual(plan->qual, planstate->qual, "Filter", planstate,
+ ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
break;
case T_IndexOnlyScan:
show_scan_qual(((IndexOnlyScan *) plan)->indexqual,
+ ((IndexOnlyScanState *) planstate)->indexqual,
"Index Cond", planstate, ancestors, es);
if (((IndexOnlyScan *) plan)->indexqual)
show_instrumentation_count("Rows Removed by Index Recheck", 2,
planstate, es);
- show_scan_qual(((IndexOnlyScan *) plan)->indexorderby,
+ show_scan_qual(((IndexOnlyScan *) plan)->indexorderby, NULL,
"Order By", planstate, ancestors, es);
- show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_scan_qual(plan->qual, planstate->qual, "Filter", planstate,
+ ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
@@ -1633,16 +1638,18 @@ ExplainNode(PlanState *planstate, List *ancestors,
planstate->instrument->ntuples2, 0, es);
break;
case T_BitmapIndexScan:
- show_scan_qual(((BitmapIndexScan *) plan)->indexqualorig,
+ show_scan_qual(((BitmapIndexScan *) plan)->indexqualorig, NULL,
"Index Cond", planstate, ancestors, es);
break;
case T_BitmapHeapScan:
show_scan_qual(((BitmapHeapScan *) plan)->bitmapqualorig,
+ ((BitmapHeapScanState *) planstate)->bitmapqualorig,
"Recheck Cond", planstate, ancestors, es);
if (((BitmapHeapScan *) plan)->bitmapqualorig)
show_instrumentation_count("Rows Removed by Index Recheck", 2,
planstate, es);
- show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_scan_qual(plan->qual, planstate->qual, "Filter", planstate,
+ ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
@@ -1660,7 +1667,8 @@ ExplainNode(PlanState *planstate, List *ancestors,
case T_NamedTuplestoreScan:
case T_WorkTableScan:
case T_SubqueryScan:
- show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_scan_qual(plan->qual, planstate->qual, "Filter", planstate,
+ ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
@@ -1669,7 +1677,8 @@ ExplainNode(PlanState *planstate, List *ancestors,
{
Gather *gather = (Gather *) plan;
- show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_scan_qual(plan->qual, planstate->qual, "Filter",
+ planstate, ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
@@ -1715,7 +1724,8 @@ ExplainNode(PlanState *planstate, List *ancestors,
{
GatherMerge *gm = (GatherMerge *) plan;
- show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_scan_qual(plan->qual, planstate->qual, "Filter",
+ planstate, ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
@@ -1749,11 +1759,12 @@ ExplainNode(PlanState *planstate, List *ancestors,
fexprs = lappend(fexprs, rtfunc->funcexpr);
}
/* We rely on show_expression to insert commas as needed */
- show_expression((Node *) fexprs,
+ show_expression((Node *) fexprs, NULL,
"Function Call", planstate, ancestors,
es->verbose, es);
}
- show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_scan_qual(plan->qual, planstate->qual, "Filter", planstate,
+ ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
@@ -1763,11 +1774,12 @@ ExplainNode(PlanState *planstate, List *ancestors,
{
TableFunc *tablefunc = ((TableFuncScan *) plan)->tablefunc;
- show_expression((Node *) tablefunc,
+ show_expression((Node *) tablefunc, NULL,
"Table Function Call", planstate, ancestors,
es->verbose, es);
}
- show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_scan_qual(plan->qual, planstate->qual, "Filter", planstate,
+ ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
@@ -1782,15 +1794,18 @@ ExplainNode(PlanState *planstate, List *ancestors,
if (list_length(tidquals) > 1)
tidquals = list_make1(make_orclause(tidquals));
- show_scan_qual(tidquals, "TID Cond", planstate, ancestors, es);
- show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_scan_qual(tidquals, NULL, "TID Cond", planstate,
+ ancestors, es);
+ show_scan_qual(plan->qual, planstate->qual, "Filter",
+ planstate, ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
}
break;
case T_ForeignScan:
- show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_scan_qual(plan->qual, planstate->qual, "Filter", planstate,
+ ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
@@ -1800,7 +1815,8 @@ ExplainNode(PlanState *planstate, List *ancestors,
{
CustomScanState *css = (CustomScanState *) planstate;
- show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_scan_qual(plan->qual, planstate->qual, "Filter",
+ planstate, ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
@@ -1810,51 +1826,60 @@ ExplainNode(PlanState *planstate, List *ancestors,
break;
case T_NestLoop:
show_upper_qual(((NestLoop *) plan)->join.joinqual,
+ ((NestLoopState *) planstate)->js.joinqual,
"Join Filter", planstate, ancestors, es);
if (((NestLoop *) plan)->join.joinqual)
show_instrumentation_count("Rows Removed by Join Filter", 1,
planstate, es);
- show_upper_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_upper_qual(plan->qual, planstate->qual, "Filter", planstate,
+ ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 2,
planstate, es);
break;
case T_MergeJoin:
- show_upper_qual(((MergeJoin *) plan)->mergeclauses,
+ show_upper_qual(((MergeJoin *) plan)->mergeclauses, NULL,
"Merge Cond", planstate, ancestors, es);
show_upper_qual(((MergeJoin *) plan)->join.joinqual,
+ ((MergeJoinState *) planstate)->js.joinqual,
"Join Filter", planstate, ancestors, es);
if (((MergeJoin *) plan)->join.joinqual)
show_instrumentation_count("Rows Removed by Join Filter", 1,
planstate, es);
- show_upper_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_upper_qual(plan->qual, planstate->qual, "Filter", planstate,
+ ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 2,
planstate, es);
break;
case T_HashJoin:
show_upper_qual(((HashJoin *) plan)->hashclauses,
+ ((HashJoinState *) planstate)->hashclauses,
"Hash Cond", planstate, ancestors, es);
show_upper_qual(((HashJoin *) plan)->join.joinqual,
+ ((HashJoinState *) planstate)->js.joinqual,
"Join Filter", planstate, ancestors, es);
if (((HashJoin *) plan)->join.joinqual)
show_instrumentation_count("Rows Removed by Join Filter", 1,
planstate, es);
- show_upper_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_upper_qual(plan->qual, planstate->qual, "Filter", planstate,
+ ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 2,
planstate, es);
break;
case T_Agg:
show_agg_keys(castNode(AggState, planstate), ancestors, es);
- show_upper_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_upper_qual(plan->qual, planstate->qual, "Filter", planstate,
+ ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
break;
case T_Group:
show_group_keys(castNode(GroupState, planstate), ancestors, es);
- show_upper_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_upper_qual(plan->qual, planstate->qual, "Filter", planstate,
+ ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
@@ -1869,8 +1894,10 @@ ExplainNode(PlanState *planstate, List *ancestors,
break;
case T_Result:
show_upper_qual((List *) ((Result *) plan)->resconstantqual,
+ ((ResultState *) planstate)->resconstantqual,
"One-Time Filter", planstate, ancestors, es);
- show_upper_qual(plan->qual, "Filter", planstate, ancestors, es);
+ show_upper_qual(plan->qual, planstate->qual, "Filter", planstate,
+ ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
@@ -2120,13 +2147,15 @@ show_plan_tlist(PlanState *planstate, List *ancestors, ExplainState *es)
* Show a generic expression
*/
static void
-show_expression(Node *node, const char *qlabel,
+show_expression(Node *node, ExprState *expr, const char *qlabel,
PlanState *planstate, List *ancestors,
bool useprefix, ExplainState *es)
{
List *context;
char *exprstr;
+ Assert(expr == NULL || IsA(expr, ExprState));
+
/* Set up deparsing context */
context = set_deparse_context_planstate(es->deparse_cxt,
(Node *) planstate,
@@ -2143,7 +2172,7 @@ show_expression(Node *node, const char *qlabel,
* Show a qualifier expression (which is a List with implicit AND semantics)
*/
static void
-show_qual(List *qual, const char *qlabel,
+show_qual(List *qual, ExprState *expr, const char *qlabel,
PlanState *planstate, List *ancestors,
bool useprefix, ExplainState *es)
{
@@ -2153,39 +2182,43 @@ show_qual(List *qual, const char *qlabel,
if (qual == NIL)
return;
+ Assert(expr == NULL ||
+ (IsA(expr, ExprState) &&
+ (expr->flags & EEO_FLAG_IS_QUAL)));
+
/* Convert AND list to explicit AND */
node = (Node *) make_ands_explicit(qual);
/* And show it */
- show_expression(node, qlabel, planstate, ancestors, useprefix, es);
+ show_expression(node, expr, qlabel, planstate, ancestors, useprefix, es);
}
/*
* Show a qualifier expression for a scan plan node
*/
static void
-show_scan_qual(List *qual, const char *qlabel,
+show_scan_qual(List *qual, ExprState *expr, const char *qlabel,
PlanState *planstate, List *ancestors,
ExplainState *es)
{
bool useprefix;
useprefix = (IsA(planstate->plan, SubqueryScan) || es->verbose);
- show_qual(qual, qlabel, planstate, ancestors, useprefix, es);
+ show_qual(qual, expr, qlabel, planstate, ancestors, useprefix, es);
}
/*
* Show a qualifier expression for an upper-level plan node
*/
static void
-show_upper_qual(List *qual, const char *qlabel,
+show_upper_qual(List *qual, ExprState *expr, const char *qlabel,
PlanState *planstate, List *ancestors,
ExplainState *es)
{
bool useprefix;
useprefix = (list_length(es->rtable) > 1 || es->verbose);
- show_qual(qual, qlabel, planstate, ancestors, useprefix, es);
+ show_qual(qual, expr, qlabel, planstate, ancestors, useprefix, es);
}
/*
@@ -3300,8 +3333,8 @@ show_modifytable_info(ModifyTableState *mtstate, List *ancestors,
/* ON CONFLICT DO UPDATE WHERE qual is specially displayed */
if (node->onConflictWhere)
{
- show_upper_qual((List *) node->onConflictWhere, "Conflict Filter",
- &mtstate->ps, ancestors, es);
+ show_upper_qual((List *) node->onConflictWhere, NULL,
+ "Conflict Filter", &mtstate->ps, ancestors, es);
show_instrumentation_count("Rows Removed by Conflict Filter", 1, &mtstate->ps, es);
}
--
2.23.0.385.gbc12974a89
Attachment: v2-0003-Explain-Differentiate-between-a-node-projecting-o.patch (text/x-diff; charset=us-ascii)
From ef85c8213d5fbc04a7d641c506675a055004a2fb Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Thu, 26 Sep 2019 12:02:11 -0700
Subject: [PATCH v2 3/8] Explain: Differentiate between a node projecting or
not.
Author:
Reviewed-By:
Discussion: https://postgr.es/m/
Backpatch:
---
src/backend/commands/explain.c | 5 +-
src/test/regress/expected/aggregates.out | 12 +-
src/test/regress/expected/alter_table.out | 12 +-
.../regress/expected/create_function_3.out | 6 +-
src/test/regress/expected/domain.out | 12 +-
src/test/regress/expected/fast_default.out | 10 +-
src/test/regress/expected/inherit.out | 64 ++--
src/test/regress/expected/join.out | 280 +++++++++---------
src/test/regress/expected/join_hash.out | 40 +--
src/test/regress/expected/limit.out | 22 +-
src/test/regress/expected/plpgsql.out | 14 +-
src/test/regress/expected/rangefuncs.out | 10 +-
src/test/regress/expected/rowsecurity.out | 4 +-
src/test/regress/expected/rowtypes.out | 8 +-
src/test/regress/expected/select_distinct.out | 10 +-
src/test/regress/expected/select_parallel.out | 14 +-
src/test/regress/expected/subselect.out | 118 ++++----
src/test/regress/expected/tsrf.out | 24 +-
src/test/regress/expected/updatable_views.out | 30 +-
src/test/regress/expected/update.out | 10 +-
src/test/regress/expected/with.out | 30 +-
src/test/regress/expected/xml.out | 8 +-
src/test/regress/expected/xml_2.out | 8 +-
23 files changed, 377 insertions(+), 374 deletions(-)
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 48283ba82a6..ea6b39d5abb 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -2140,7 +2140,10 @@ show_plan_tlist(PlanState *planstate, List *ancestors, ExplainState *es)
}
/* Print results */
- ExplainPropertyList("Output", result, es);
+ if (planstate->ps_ProjInfo)
+ ExplainPropertyList("Project", result, es);
+ else
+ ExplainPropertyList("Output", result, es);
}
/*
diff --git a/src/test/regress/expected/aggregates.out b/src/test/regress/expected/aggregates.out
index be4ddf86a43..683bcaedf5f 100644
--- a/src/test/regress/expected/aggregates.out
+++ b/src/test/regress/expected/aggregates.out
@@ -510,12 +510,12 @@ order by 1, 2;
Output: s1.s1, s2.s2, (sum((s1.s1 + s2.s2)))
Sort Key: s1.s1, s2.s2
-> Nested Loop
- Output: s1.s1, s2.s2, (sum((s1.s1 + s2.s2)))
+ Project: s1.s1, s2.s2, (sum((s1.s1 + s2.s2)))
-> Function Scan on pg_catalog.generate_series s1
Output: s1.s1
Function Call: generate_series(1, 3)
-> HashAggregate
- Output: s2.s2, sum((s1.s1 + s2.s2))
+ Project: s2.s2, sum((s1.s1 + s2.s2))
Group Key: s2.s2
-> Function Scan on pg_catalog.generate_series s2
Output: s2.s2
@@ -547,14 +547,14 @@ select array(select sum(x+y) s
QUERY PLAN
-------------------------------------------------------------------
Function Scan on pg_catalog.generate_series x
- Output: (SubPlan 1)
+ Project: (SubPlan 1)
Function Call: generate_series(1, 3)
SubPlan 1
-> Sort
Output: (sum((x.x + y.y))), y.y
Sort Key: (sum((x.x + y.y)))
-> HashAggregate
- Output: sum((x.x + y.y)), y.y
+ Project: sum((x.x + y.y)), y.y
Group Key: y.y
-> Function Scan on pg_catalog.generate_series y
Output: y.y
@@ -2253,12 +2253,12 @@ EXPLAIN (COSTS OFF, VERBOSE)
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
Finalize Aggregate
- Output: variance(unique1), sum((unique1)::bigint), regr_count((unique1)::double precision, (unique1)::double precision)
+ Project: variance(unique1), sum((unique1)::bigint), regr_count((unique1)::double precision, (unique1)::double precision)
-> Gather
Output: (PARTIAL variance(unique1)), (PARTIAL sum((unique1)::bigint)), (PARTIAL regr_count((unique1)::double precision, (unique1)::double precision))
Workers Planned: 4
-> Partial Aggregate
- Output: PARTIAL variance(unique1), PARTIAL sum((unique1)::bigint), PARTIAL regr_count((unique1)::double precision, (unique1)::double precision)
+ Project: PARTIAL variance(unique1), PARTIAL sum((unique1)::bigint), PARTIAL regr_count((unique1)::double precision, (unique1)::double precision)
-> Parallel Seq Scan on public.tenk1
Output: unique1, unique2, two, four, ten, twenty, hundred, thousand, twothousand, fivethous, tenthous, odd, even, stringu1, stringu2, string4
(9 rows)
diff --git a/src/test/regress/expected/alter_table.out b/src/test/regress/expected/alter_table.out
index 5189fd81887..bebd41b5f5b 100644
--- a/src/test/regress/expected/alter_table.out
+++ b/src/test/regress/expected/alter_table.out
@@ -2381,10 +2381,10 @@ View definition:
FROM at_view_1 v1;
explain (verbose, costs off) select * from at_view_2;
- QUERY PLAN
-----------------------------------------------------------
+ QUERY PLAN
+-----------------------------------------------------------
Seq Scan on public.at_base_table bt
- Output: bt.id, bt.stuff, to_json(ROW(bt.id, bt.stuff))
+ Project: bt.id, bt.stuff, to_json(ROW(bt.id, bt.stuff))
(2 rows)
select * from at_view_2;
@@ -2421,10 +2421,10 @@ View definition:
FROM at_view_1 v1;
explain (verbose, costs off) select * from at_view_2;
- QUERY PLAN
-----------------------------------------------------------------
+ QUERY PLAN
+-----------------------------------------------------------------
Seq Scan on public.at_base_table bt
- Output: bt.id, bt.stuff, to_json(ROW(bt.id, bt.stuff, NULL))
+ Project: bt.id, bt.stuff, to_json(ROW(bt.id, bt.stuff, NULL))
(2 rows)
select * from at_view_2;
diff --git a/src/test/regress/expected/create_function_3.out b/src/test/regress/expected/create_function_3.out
index ba260df9960..4def18f0e0b 100644
--- a/src/test/regress/expected/create_function_3.out
+++ b/src/test/regress/expected/create_function_3.out
@@ -303,10 +303,10 @@ SELECT voidtest2(11,22);
-- currently, we can inline voidtest2 but not voidtest1
EXPLAIN (verbose, costs off) SELECT voidtest2(11,22);
- QUERY PLAN
--------------------------
+ QUERY PLAN
+--------------------------
Result
- Output: voidtest1(33)
+ Project: voidtest1(33)
(2 rows)
CREATE TEMP TABLE sometable(f1 int);
diff --git a/src/test/regress/expected/domain.out b/src/test/regress/expected/domain.out
index 4ff1b4af418..346ccac9279 100644
--- a/src/test/regress/expected/domain.out
+++ b/src/test/regress/expected/domain.out
@@ -261,11 +261,11 @@ select * from dcomptable;
explain (verbose, costs off)
update dcomptable set d1.r = (d1).r - 1, d1.i = (d1).i + 1 where (d1).i > 0;
- QUERY PLAN
------------------------------------------------------------------------------------------------
+ QUERY PLAN
+------------------------------------------------------------------------------------------------
Update on public.dcomptable
-> Seq Scan on public.dcomptable
- Output: ROW(((d1).r - '1'::double precision), ((d1).i + '1'::double precision)), ctid
+ Project: ROW(((d1).r - '1'::double precision), ((d1).i + '1'::double precision)), ctid
Filter: ((dcomptable.d1).i > '0'::double precision)
(4 rows)
@@ -397,11 +397,11 @@ select * from dcomptable;
explain (verbose, costs off)
update dcomptable set d1[1].r = d1[1].r - 1, d1[1].i = d1[1].i + 1
where d1[1].i > 0;
- QUERY PLAN
-----------------------------------------------------------------------------------------------------------------
+ QUERY PLAN
+-----------------------------------------------------------------------------------------------------------------
Update on public.dcomptable
-> Seq Scan on public.dcomptable
- Output: (d1[1].r := (d1[1].r - '1'::double precision))[1].i := (d1[1].i + '1'::double precision), ctid
+ Project: (d1[1].r := (d1[1].r - '1'::double precision))[1].i := (d1[1].i + '1'::double precision), ctid
Filter: (dcomptable.d1[1].i > '0'::double precision)
(4 rows)
diff --git a/src/test/regress/expected/fast_default.out b/src/test/regress/expected/fast_default.out
index 10bc5ff757c..177f8911a94 100644
--- a/src/test/regress/expected/fast_default.out
+++ b/src/test/regress/expected/fast_default.out
@@ -300,7 +300,7 @@ SELECT c_bigint, c_text FROM T WHERE c_bigint = -1 LIMIT 1;
Limit
Output: c_bigint, c_text
-> Seq Scan on fast_default.t
- Output: c_bigint, c_text
+ Project: c_bigint, c_text
Filter: (t.c_bigint = '-1'::integer)
(5 rows)
@@ -316,7 +316,7 @@ EXPLAIN (VERBOSE TRUE, COSTS FALSE) SELECT c_bigint, c_text FROM T WHERE c_text
Limit
Output: c_bigint, c_text
-> Seq Scan on fast_default.t
- Output: c_bigint, c_text
+ Project: c_bigint, c_text
Filter: (t.c_text = 'hello'::text)
(5 rows)
@@ -371,7 +371,7 @@ SELECT * FROM T ORDER BY c_bigint, c_text, pk LIMIT 10;
Output: pk, c_bigint, c_text
Sort Key: t.c_bigint, t.c_text, t.pk
-> Seq Scan on fast_default.t
- Output: pk, c_bigint, c_text
+ Project: pk, c_bigint, c_text
(7 rows)
-- LIMIT
@@ -400,7 +400,7 @@ SELECT * FROM T WHERE c_bigint > -1 ORDER BY c_bigint, c_text, pk LIMIT 10;
Output: pk, c_bigint, c_text
Sort Key: t.c_bigint, t.c_text, t.pk
-> Seq Scan on fast_default.t
- Output: pk, c_bigint, c_text
+ Project: pk, c_bigint, c_text
Filter: (t.c_bigint > '-1'::integer)
(8 rows)
@@ -428,7 +428,7 @@ DELETE FROM T WHERE pk BETWEEN 10 AND 20 RETURNING *;
Delete on fast_default.t
Output: pk, c_bigint, c_text
-> Bitmap Heap Scan on fast_default.t
- Output: ctid
+ Project: ctid
Recheck Cond: ((t.pk >= 10) AND (t.pk <= 20))
-> Bitmap Index Scan on t_pkey
Index Cond: ((t.pk >= 10) AND (t.pk <= 20))
diff --git a/src/test/regress/expected/inherit.out b/src/test/regress/expected/inherit.out
index 44d51ed7110..4b8351839a8 100644
--- a/src/test/regress/expected/inherit.out
+++ b/src/test/regress/expected/inherit.out
@@ -545,25 +545,25 @@ create table some_tab_child () inherits (some_tab);
insert into some_tab_child values(1,2);
explain (verbose, costs off)
update some_tab set a = a + 1 where false;
- QUERY PLAN
-----------------------------------
+ QUERY PLAN
+-----------------------------------
Update on public.some_tab
Update on public.some_tab
-> Result
- Output: (a + 1), b, ctid
+ Project: (a + 1), b, ctid
One-Time Filter: false
(5 rows)
update some_tab set a = a + 1 where false;
explain (verbose, costs off)
update some_tab set a = a + 1 where false returning b, a;
- QUERY PLAN
-----------------------------------
+ QUERY PLAN
+-----------------------------------
Update on public.some_tab
Output: b, a
Update on public.some_tab
-> Result
- Output: (a + 1), b, ctid
+ Project: (a + 1), b, ctid
One-Time Filter: false
(6 rows)
@@ -792,17 +792,17 @@ select NULL::derived::base;
-- remove redundant conversions.
explain (verbose on, costs off) select row(i, b)::more_derived::derived::base from more_derived;
- QUERY PLAN
--------------------------------------------
+ QUERY PLAN
+--------------------------------------------
Seq Scan on public.more_derived
- Output: (ROW(i, b)::more_derived)::base
+ Project: (ROW(i, b)::more_derived)::base
(2 rows)
explain (verbose on, costs off) select (1, 2)::more_derived::derived::base;
- QUERY PLAN
------------------------
+ QUERY PLAN
+------------------------
Result
- Output: '(1)'::base
+ Project: '(1)'::base
(2 rows)
drop table more_derived;
@@ -1405,13 +1405,13 @@ insert into matest3 (name) values ('Test 5');
insert into matest3 (name) values ('Test 6');
set enable_indexscan = off; -- force use of seqscan/sort, so no merge
explain (verbose, costs off) select * from matest0 order by 1-id;
- QUERY PLAN
-------------------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------
Sort
Output: matest0.id, matest0.name, ((1 - matest0.id))
Sort Key: ((1 - matest0.id))
-> Result
- Output: matest0.id, matest0.name, (1 - matest0.id)
+ Project: matest0.id, matest0.name, (1 - matest0.id)
-> Append
-> Seq Scan on public.matest0
Output: matest0.id, matest0.name
@@ -1438,16 +1438,16 @@ explain (verbose, costs off) select min(1-id) from matest0;
QUERY PLAN
----------------------------------------
Aggregate
- Output: min((1 - matest0.id))
+ Project: min((1 - matest0.id))
-> Append
-> Seq Scan on public.matest0
- Output: matest0.id
+ Project: matest0.id
-> Seq Scan on public.matest1
- Output: matest1.id
+ Project: matest1.id
-> Seq Scan on public.matest2
- Output: matest2.id
+ Project: matest2.id
-> Seq Scan on public.matest3
- Output: matest3.id
+ Project: matest3.id
(11 rows)
select min(1-id) from matest0;
@@ -1460,21 +1460,21 @@ reset enable_indexscan;
set enable_seqscan = off; -- plan with fewest seqscans should be merge
set enable_parallel_append = off; -- Don't let parallel-append interfere
explain (verbose, costs off) select * from matest0 order by 1-id;
- QUERY PLAN
-------------------------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------
Merge Append
Sort Key: ((1 - matest0.id))
-> Index Scan using matest0i on public.matest0
- Output: matest0.id, matest0.name, (1 - matest0.id)
+ Project: matest0.id, matest0.name, (1 - matest0.id)
-> Index Scan using matest1i on public.matest1
- Output: matest1.id, matest1.name, (1 - matest1.id)
+ Project: matest1.id, matest1.name, (1 - matest1.id)
-> Sort
Output: matest2.id, matest2.name, ((1 - matest2.id))
Sort Key: ((1 - matest2.id))
-> Seq Scan on public.matest2
- Output: matest2.id, matest2.name, (1 - matest2.id)
+ Project: matest2.id, matest2.name, (1 - matest2.id)
-> Index Scan using matest3i on public.matest3
- Output: matest3.id, matest3.name, (1 - matest3.id)
+ Project: matest3.id, matest3.name, (1 - matest3.id)
(13 rows)
select * from matest0 order by 1-id;
@@ -1492,29 +1492,29 @@ explain (verbose, costs off) select min(1-id) from matest0;
QUERY PLAN
--------------------------------------------------------------------------
Result
- Output: $0
+ Project: $0
InitPlan 1 (returns $0)
-> Limit
Output: ((1 - matest0.id))
-> Result
- Output: ((1 - matest0.id))
+ Project: ((1 - matest0.id))
-> Merge Append
Sort Key: ((1 - matest0.id))
-> Index Scan using matest0i on public.matest0
- Output: matest0.id, (1 - matest0.id)
+ Project: matest0.id, (1 - matest0.id)
Index Cond: ((1 - matest0.id) IS NOT NULL)
-> Index Scan using matest1i on public.matest1
- Output: matest1.id, (1 - matest1.id)
+ Project: matest1.id, (1 - matest1.id)
Index Cond: ((1 - matest1.id) IS NOT NULL)
-> Sort
Output: matest2.id, ((1 - matest2.id))
Sort Key: ((1 - matest2.id))
-> Bitmap Heap Scan on public.matest2
- Output: matest2.id, (1 - matest2.id)
+ Project: matest2.id, (1 - matest2.id)
Filter: ((1 - matest2.id) IS NOT NULL)
-> Bitmap Index Scan on matest2_pkey
-> Index Scan using matest3i on public.matest3
- Output: matest3.id, (1 - matest3.id)
+ Project: matest3.id, (1 - matest3.id)
Index Cond: ((1 - matest3.id) IS NOT NULL)
(25 rows)
diff --git a/src/test/regress/expected/join.out b/src/test/regress/expected/join.out
index b58d560163b..7f319a79938 100644
--- a/src/test/regress/expected/join.out
+++ b/src/test/regress/expected/join.out
@@ -3264,9 +3264,9 @@ where x = unique1;
QUERY PLAN
-----------------------------------------------------------
Nested Loop
- Output: tenk1.unique1, (1), (random())
+ Project: tenk1.unique1, (1), (random())
-> Result
- Output: 1, random()
+ Project: 1, random()
-> Index Only Scan using tenk1_unique1 on public.tenk1
Output: tenk1.unique1
Index Cond: (tenk1.unique1 = (1))
@@ -3740,14 +3740,14 @@ using (join_key);
QUERY PLAN
--------------------------------------------------------------------------
Nested Loop Left Join
- Output: "*VALUES*".column1, i1.f1, (666)
+ Project: "*VALUES*".column1, i1.f1, (666)
Join Filter: ("*VALUES*".column1 = i1.f1)
-> Values Scan on "*VALUES*"
Output: "*VALUES*".column1
-> Materialize
Output: i1.f1, (666)
-> Nested Loop Left Join
- Output: i1.f1, 666
+ Project: i1.f1, 666
-> Seq Scan on public.int4_tbl i1
Output: i1.f1
-> Index Only Scan using tenk1_unique2 on public.tenk1 i2
@@ -3787,34 +3787,34 @@ select t1.* from
on (t1.f1 = b1.d1)
left join int4_tbl i4
on (i8.q2 = i4.f1);
- QUERY PLAN
-----------------------------------------------------------------------
+ QUERY PLAN
+-----------------------------------------------------------------------
Hash Left Join
- Output: t1.f1
+ Project: t1.f1
Hash Cond: (i8.q2 = i4.f1)
-> Nested Loop Left Join
- Output: t1.f1, i8.q2
+ Project: t1.f1, i8.q2
Join Filter: (t1.f1 = '***'::text)
-> Seq Scan on public.text_tbl t1
Output: t1.f1
-> Materialize
Output: i8.q2
-> Hash Right Join
- Output: i8.q2
+ Project: i8.q2
Hash Cond: ((NULL::integer) = i8b1.q2)
-> Hash Join
- Output: i8.q2, (NULL::integer)
+ Project: i8.q2, (NULL::integer)
Hash Cond: (i8.q1 = i8b2.q1)
-> Seq Scan on public.int8_tbl i8
Output: i8.q1, i8.q2
-> Hash
Output: i8b2.q1, (NULL::integer)
-> Seq Scan on public.int8_tbl i8b2
- Output: i8b2.q1, NULL::integer
+ Project: i8b2.q1, NULL::integer
-> Hash
Output: i8b1.q2
-> Seq Scan on public.int8_tbl i8b1
- Output: i8b1.q2
+ Project: i8b1.q2
-> Hash
Output: i4.f1
-> Seq Scan on public.int4_tbl i4
@@ -3851,23 +3851,23 @@ select t1.* from
QUERY PLAN
----------------------------------------------------------------------------
Hash Left Join
- Output: t1.f1
+ Project: t1.f1
Hash Cond: (i8.q2 = i4.f1)
-> Nested Loop Left Join
- Output: t1.f1, i8.q2
+ Project: t1.f1, i8.q2
Join Filter: (t1.f1 = '***'::text)
-> Seq Scan on public.text_tbl t1
Output: t1.f1
-> Materialize
Output: i8.q2
-> Hash Right Join
- Output: i8.q2
+ Project: i8.q2
Hash Cond: ((NULL::integer) = i8b1.q2)
-> Hash Right Join
- Output: i8.q2, (NULL::integer)
+ Project: i8.q2, (NULL::integer)
Hash Cond: (i8b2.q1 = i8.q1)
-> Nested Loop
- Output: i8b2.q1, NULL::integer
+ Project: i8b2.q1, NULL::integer
-> Seq Scan on public.int8_tbl i8b2
Output: i8b2.q1, i8b2.q2
-> Materialize
@@ -3879,7 +3879,7 @@ select t1.* from
-> Hash
Output: i8b1.q2
-> Seq Scan on public.int8_tbl i8b1
- Output: i8b1.q2
+ Project: i8b1.q2
-> Hash
Output: i4.f1
-> Seq Scan on public.int4_tbl i4
@@ -3917,23 +3917,23 @@ select t1.* from
QUERY PLAN
----------------------------------------------------------------------------
Hash Left Join
- Output: t1.f1
+ Project: t1.f1
Hash Cond: (i8.q2 = i4.f1)
-> Nested Loop Left Join
- Output: t1.f1, i8.q2
+ Project: t1.f1, i8.q2
Join Filter: (t1.f1 = '***'::text)
-> Seq Scan on public.text_tbl t1
Output: t1.f1
-> Materialize
Output: i8.q2
-> Hash Right Join
- Output: i8.q2
+ Project: i8.q2
Hash Cond: ((NULL::integer) = i8b1.q2)
-> Hash Right Join
- Output: i8.q2, (NULL::integer)
+ Project: i8.q2, (NULL::integer)
Hash Cond: (i8b2.q1 = i8.q1)
-> Hash Join
- Output: i8b2.q1, NULL::integer
+ Project: i8b2.q1, NULL::integer
Hash Cond: (i8b2.q1 = i4b2.f1)
-> Seq Scan on public.int8_tbl i8b2
Output: i8b2.q1, i8b2.q2
@@ -3948,7 +3948,7 @@ select t1.* from
-> Hash
Output: i8b1.q2
-> Seq Scan on public.int8_tbl i8b1
- Output: i8b1.q2
+ Project: i8b1.q2
-> Hash
Output: i4.f1
-> Seq Scan on public.int4_tbl i4
@@ -3984,15 +3984,15 @@ select * from
QUERY PLAN
--------------------------------------------------------
Nested Loop Left Join
- Output: t1.f1, i8.q1, i8.q2, t2.f1, i4.f1
+ Project: t1.f1, i8.q1, i8.q2, t2.f1, i4.f1
-> Seq Scan on public.text_tbl t2
Output: t2.f1
-> Materialize
Output: i8.q1, i8.q2, i4.f1, t1.f1
-> Nested Loop
- Output: i8.q1, i8.q2, i4.f1, t1.f1
+ Project: i8.q1, i8.q2, i4.f1, t1.f1
-> Nested Loop Left Join
- Output: i8.q1, i8.q2, i4.f1
+ Project: i8.q1, i8.q2, i4.f1
Join Filter: (i8.q1 = i4.f1)
-> Seq Scan on public.int8_tbl i8
Output: i8.q1, i8.q2
@@ -4031,10 +4031,10 @@ where t1.f1 = ss.f1;
QUERY PLAN
--------------------------------------------------
Nested Loop
- Output: t1.f1, i8.q1, i8.q2, (i8.q1), t2.f1
+ Project: t1.f1, i8.q1, i8.q2, (i8.q1), t2.f1
Join Filter: (t1.f1 = t2.f1)
-> Nested Loop Left Join
- Output: t1.f1, i8.q1, i8.q2
+ Project: t1.f1, i8.q1, i8.q2
-> Seq Scan on public.text_tbl t1
Output: t1.f1
-> Materialize
@@ -4045,7 +4045,7 @@ where t1.f1 = ss.f1;
-> Limit
Output: (i8.q1), t2.f1
-> Seq Scan on public.text_tbl t2
- Output: i8.q1, t2.f1
+ Project: i8.q1, t2.f1
(16 rows)
select * from
@@ -4067,15 +4067,15 @@ select * from
lateral (select i8.q1, t2.f1 from text_tbl t2 limit 1) as ss1,
lateral (select ss1.* from text_tbl t3 limit 1) as ss2
where t1.f1 = ss2.f1;
- QUERY PLAN
--------------------------------------------------------------------
+ QUERY PLAN
+--------------------------------------------------------------------
Nested Loop
- Output: t1.f1, i8.q1, i8.q2, (i8.q1), t2.f1, ((i8.q1)), (t2.f1)
+ Project: t1.f1, i8.q1, i8.q2, (i8.q1), t2.f1, ((i8.q1)), (t2.f1)
Join Filter: (t1.f1 = (t2.f1))
-> Nested Loop
- Output: t1.f1, i8.q1, i8.q2, (i8.q1), t2.f1
+ Project: t1.f1, i8.q1, i8.q2, (i8.q1), t2.f1
-> Nested Loop Left Join
- Output: t1.f1, i8.q1, i8.q2
+ Project: t1.f1, i8.q1, i8.q2
-> Seq Scan on public.text_tbl t1
Output: t1.f1
-> Materialize
@@ -4086,11 +4086,11 @@ where t1.f1 = ss2.f1;
-> Limit
Output: (i8.q1), t2.f1
-> Seq Scan on public.text_tbl t2
- Output: i8.q1, t2.f1
+ Project: i8.q1, t2.f1
-> Limit
Output: ((i8.q1)), (t2.f1)
-> Seq Scan on public.text_tbl t3
- Output: (i8.q1), t2.f1
+ Project: (i8.q1), t2.f1
(22 rows)
select * from
@@ -4116,11 +4116,11 @@ where tt1.f1 = ss1.c0;
QUERY PLAN
----------------------------------------------------------
Nested Loop
- Output: 1
+ Project: 1
-> Nested Loop Left Join
- Output: tt1.f1, tt4.f1
+ Project: tt1.f1, tt4.f1
-> Nested Loop
- Output: tt1.f1
+ Project: tt1.f1
-> Seq Scan on public.text_tbl tt1
Output: tt1.f1
Filter: (tt1.f1 = 'foo'::text)
@@ -4129,7 +4129,7 @@ where tt1.f1 = ss1.c0;
-> Materialize
Output: tt4.f1
-> Nested Loop Left Join
- Output: tt4.f1
+ Project: tt4.f1
Join Filter: (tt3.f1 = tt4.f1)
-> Seq Scan on public.text_tbl tt3
Output: tt3.f1
@@ -4143,7 +4143,7 @@ where tt1.f1 = ss1.c0;
-> Limit
Output: (tt4.f1)
-> Seq Scan on public.text_tbl tt5
- Output: tt4.f1
+ Project: tt4.f1
(29 rows)
select 1 from
@@ -4173,14 +4173,14 @@ where ss1.c2 = 0;
QUERY PLAN
------------------------------------------------------------------------
Nested Loop
- Output: (i41.f1), (i8.q1), (i8.q2), (i42.f1), (i43.f1), ((42))
+ Project: (i41.f1), (i8.q1), (i8.q2), (i42.f1), (i43.f1), ((42))
-> Hash Join
- Output: i41.f1, i42.f1, i8.q1, i8.q2, i43.f1, 42
+ Project: i41.f1, i42.f1, i8.q1, i8.q2, i43.f1, 42
Hash Cond: (i41.f1 = i42.f1)
-> Nested Loop
- Output: i8.q1, i8.q2, i43.f1, i41.f1
+ Project: i8.q1, i8.q2, i43.f1, i41.f1
-> Nested Loop
- Output: i8.q1, i8.q2, i43.f1
+ Project: i8.q1, i8.q2, i43.f1
-> Seq Scan on public.int8_tbl i8
Output: i8.q1, i8.q2
Filter: (i8.q1 = 0)
@@ -4196,7 +4196,7 @@ where ss1.c2 = 0;
-> Limit
Output: (i41.f1), (i8.q1), (i8.q2), (i42.f1), (i43.f1), ((42))
-> Seq Scan on public.text_tbl
- Output: i41.f1, i8.q1, i8.q2, i42.f1, i43.f1, (42)
+ Project: i41.f1, i8.q1, i8.q2, i42.f1, i43.f1, (42)
(25 rows)
select ss2.* from
@@ -4281,22 +4281,22 @@ explain (verbose, costs off)
select a.q2, b.q1
from int8_tbl a left join int8_tbl b on a.q2 = coalesce(b.q1, 1)
where coalesce(b.q1, 1) > 0;
- QUERY PLAN
----------------------------------------------------------
+ QUERY PLAN
+----------------------------------------------------------
Merge Left Join
- Output: a.q2, b.q1
+ Project: a.q2, b.q1
Merge Cond: (a.q2 = (COALESCE(b.q1, '1'::bigint)))
Filter: (COALESCE(b.q1, '1'::bigint) > 0)
-> Sort
Output: a.q2
Sort Key: a.q2
-> Seq Scan on public.int8_tbl a
- Output: a.q2
+ Project: a.q2
-> Sort
Output: b.q1, (COALESCE(b.q1, '1'::bigint))
Sort Key: (COALESCE(b.q1, '1'::bigint))
-> Seq Scan on public.int8_tbl b
- Output: b.q1, COALESCE(b.q1, '1'::bigint)
+ Project: b.q1, COALESCE(b.q1, '1'::bigint)
(14 rows)
select a.q2, b.q1
@@ -5189,14 +5189,14 @@ explain (verbose, costs off)
select * from
int8_tbl a left join
lateral (select *, a.q2 as x from int8_tbl b) ss on a.q2 = ss.q1;
- QUERY PLAN
-------------------------------------------
+ QUERY PLAN
+-------------------------------------------
Nested Loop Left Join
- Output: a.q1, a.q2, b.q1, b.q2, (a.q2)
+ Project: a.q1, a.q2, b.q1, b.q2, (a.q2)
-> Seq Scan on public.int8_tbl a
Output: a.q1, a.q2
-> Seq Scan on public.int8_tbl b
- Output: b.q1, b.q2, a.q2
+ Project: b.q1, b.q2, a.q2
Filter: (a.q2 = b.q1)
(7 rows)
@@ -5221,14 +5221,14 @@ explain (verbose, costs off)
select * from
int8_tbl a left join
lateral (select *, coalesce(a.q2, 42) as x from int8_tbl b) ss on a.q2 = ss.q1;
- QUERY PLAN
-------------------------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------
Nested Loop Left Join
- Output: a.q1, a.q2, b.q1, b.q2, (COALESCE(a.q2, '42'::bigint))
+ Project: a.q1, a.q2, b.q1, b.q2, (COALESCE(a.q2, '42'::bigint))
-> Seq Scan on public.int8_tbl a
Output: a.q1, a.q2
-> Seq Scan on public.int8_tbl b
- Output: b.q1, b.q2, COALESCE(a.q2, '42'::bigint)
+ Project: b.q1, b.q2, COALESCE(a.q2, '42'::bigint)
Filter: (a.q2 = b.q1)
(7 rows)
@@ -5257,7 +5257,7 @@ select * from int4_tbl i left join
QUERY PLAN
-------------------------------------------
Hash Left Join
- Output: i.f1, j.f1
+ Project: i.f1, j.f1
Hash Cond: (i.f1 = j.f1)
-> Seq Scan on public.int4_tbl i
Output: i.f1
@@ -5281,14 +5281,14 @@ select * from int4_tbl i left join
explain (verbose, costs off)
select * from int4_tbl i left join
lateral (select coalesce(i) from int2_tbl j where i.f1 = j.f1) k on true;
- QUERY PLAN
--------------------------------------
+ QUERY PLAN
+--------------------------------------
Nested Loop Left Join
- Output: i.f1, (COALESCE(i.*))
+ Project: i.f1, (COALESCE(i.*))
-> Seq Scan on public.int4_tbl i
- Output: i.f1, i.*
+ Project: i.f1, i.*
-> Seq Scan on public.int2_tbl j
- Output: j.f1, COALESCE(i.*)
+ Project: j.f1, COALESCE(i.*)
Filter: (i.f1 = j.f1)
(7 rows)
@@ -5311,11 +5311,11 @@ select * from int4_tbl a,
QUERY PLAN
-------------------------------------------------
Nested Loop
- Output: a.f1, b.f1, c.q1, c.q2
+ Project: a.f1, b.f1, c.q1, c.q2
-> Seq Scan on public.int4_tbl a
Output: a.f1
-> Hash Left Join
- Output: b.f1, c.q1, c.q2
+ Project: b.f1, c.q1, c.q2
Hash Cond: (b.f1 = c.q1)
-> Seq Scan on public.int4_tbl b
Output: b.f1
@@ -5366,14 +5366,14 @@ select * from
(select b.q1 as bq1, c.q1 as cq1, least(a.q1,b.q1,c.q1) from
int8_tbl b cross join int8_tbl c) ss
on a.q2 = ss.bq1;
- QUERY PLAN
--------------------------------------------------------------
+ QUERY PLAN
+--------------------------------------------------------------
Nested Loop Left Join
- Output: a.q1, a.q2, b.q1, c.q1, (LEAST(a.q1, b.q1, c.q1))
+ Project: a.q1, a.q2, b.q1, c.q1, (LEAST(a.q1, b.q1, c.q1))
-> Seq Scan on public.int8_tbl a
Output: a.q1, a.q2
-> Nested Loop
- Output: b.q1, c.q1, LEAST(a.q1, b.q1, c.q1)
+ Project: b.q1, c.q1, LEAST(a.q1, b.q1, c.q1)
-> Seq Scan on public.int8_tbl b
Output: b.q1, b.q2
Filter: (a.q2 = b.q1)
@@ -5442,32 +5442,32 @@ select * from
lateral (select q1, coalesce(ss1.x,q2) as y from int8_tbl d) ss2
) on c.q2 = ss2.q1,
lateral (select ss2.y offset 0) ss3;
- QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop
- Output: c.q1, c.q2, a.q1, a.q2, b.q1, (COALESCE(b.q2, '42'::bigint)), d.q1, (COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2)), ((COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2)))
+ Project: c.q1, c.q2, a.q1, a.q2, b.q1, (COALESCE(b.q2, '42'::bigint)), d.q1, (COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2)), ((COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2)))
-> Hash Right Join
- Output: c.q1, c.q2, a.q1, a.q2, b.q1, d.q1, (COALESCE(b.q2, '42'::bigint)), (COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2))
+ Project: c.q1, c.q2, a.q1, a.q2, b.q1, d.q1, (COALESCE(b.q2, '42'::bigint)), (COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2))
Hash Cond: (d.q1 = c.q2)
-> Nested Loop
- Output: a.q1, a.q2, b.q1, d.q1, (COALESCE(b.q2, '42'::bigint)), (COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2))
+ Project: a.q1, a.q2, b.q1, d.q1, (COALESCE(b.q2, '42'::bigint)), (COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2))
-> Hash Left Join
- Output: a.q1, a.q2, b.q1, (COALESCE(b.q2, '42'::bigint))
+ Project: a.q1, a.q2, b.q1, (COALESCE(b.q2, '42'::bigint))
Hash Cond: (a.q2 = b.q1)
-> Seq Scan on public.int8_tbl a
Output: a.q1, a.q2
-> Hash
Output: b.q1, (COALESCE(b.q2, '42'::bigint))
-> Seq Scan on public.int8_tbl b
- Output: b.q1, COALESCE(b.q2, '42'::bigint)
+ Project: b.q1, COALESCE(b.q2, '42'::bigint)
-> Seq Scan on public.int8_tbl d
- Output: d.q1, COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2)
+ Project: d.q1, COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2)
-> Hash
Output: c.q1, c.q2
-> Seq Scan on public.int8_tbl c
Output: c.q1, c.q2
-> Result
- Output: (COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2))
+ Project: (COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2))
(24 rows)
-- case that breaks the old ph_may_need optimization
@@ -5482,21 +5482,21 @@ select c.*,a.*,ss1.q1,ss2.q1,ss3.* from
lateral (select q1, coalesce(ss1.x,q2) as y from int8_tbl d) ss2
) on c.q2 = ss2.q1,
lateral (select * from int4_tbl i where ss2.y > f1) ss3;
- QUERY PLAN
----------------------------------------------------------------------------------------------------------
+ QUERY PLAN
+----------------------------------------------------------------------------------------------------------
Nested Loop
- Output: c.q1, c.q2, a.q1, a.q2, b.q1, d.q1, i.f1
+ Project: c.q1, c.q2, a.q1, a.q2, b.q1, d.q1, i.f1
Join Filter: ((COALESCE((COALESCE(b.q2, (b2.f1)::bigint)), d.q2)) > i.f1)
-> Hash Right Join
- Output: c.q1, c.q2, a.q1, a.q2, b.q1, d.q1, (COALESCE((COALESCE(b.q2, (b2.f1)::bigint)), d.q2))
+ Project: c.q1, c.q2, a.q1, a.q2, b.q1, d.q1, (COALESCE((COALESCE(b.q2, (b2.f1)::bigint)), d.q2))
Hash Cond: (d.q1 = c.q2)
-> Nested Loop
- Output: a.q1, a.q2, b.q1, d.q1, (COALESCE((COALESCE(b.q2, (b2.f1)::bigint)), d.q2))
+ Project: a.q1, a.q2, b.q1, d.q1, (COALESCE((COALESCE(b.q2, (b2.f1)::bigint)), d.q2))
-> Hash Right Join
- Output: a.q1, a.q2, b.q1, (COALESCE(b.q2, (b2.f1)::bigint))
+ Project: a.q1, a.q2, b.q1, (COALESCE(b.q2, (b2.f1)::bigint))
Hash Cond: (b.q1 = a.q2)
-> Nested Loop
- Output: b.q1, COALESCE(b.q2, (b2.f1)::bigint)
+ Project: b.q1, COALESCE(b.q2, (b2.f1)::bigint)
Join Filter: (b.q1 < b2.f1)
-> Seq Scan on public.int8_tbl b
Output: b.q1, b.q2
@@ -5509,7 +5509,7 @@ select c.*,a.*,ss1.q1,ss2.q1,ss3.* from
-> Seq Scan on public.int8_tbl a
Output: a.q1, a.q2
-> Seq Scan on public.int8_tbl d
- Output: d.q1, COALESCE((COALESCE(b.q2, (b2.f1)::bigint)), d.q2)
+ Project: d.q1, COALESCE((COALESCE(b.q2, (b2.f1)::bigint)), d.q2)
-> Hash
Output: c.q1, c.q2
-> Seq Scan on public.int8_tbl c
@@ -5530,16 +5530,16 @@ select * from
QUERY PLAN
----------------------------------------------
Nested Loop Left Join
- Output: (1), (2), (3)
+ Project: (1), (2), (3)
Join Filter: (((3) = (1)) AND ((3) = (2)))
-> Nested Loop
- Output: (1), (2)
+ Project: (1), (2)
-> Result
- Output: 1
+ Project: 1
-> Result
- Output: 2
+ Project: 2
-> Result
- Output: 3
+ Project: 3
(11 rows)
-- check dummy rels with lateral references (bug #15694)
@@ -5549,25 +5549,25 @@ select * from int8_tbl i8 left join lateral
QUERY PLAN
--------------------------------------
Nested Loop Left Join
- Output: i8.q1, i8.q2, f1, (i8.q2)
+ Project: i8.q1, i8.q2, f1, (i8.q2)
-> Seq Scan on public.int8_tbl i8
Output: i8.q1, i8.q2
-> Result
- Output: f1, i8.q2
+ Project: f1, i8.q2
One-Time Filter: false
(7 rows)
explain (verbose, costs off)
select * from int8_tbl i8 left join lateral
(select *, i8.q2 from int4_tbl i1, int4_tbl i2 where false) ss on true;
- QUERY PLAN
------------------------------------------
+ QUERY PLAN
+------------------------------------------
Nested Loop Left Join
- Output: i8.q1, i8.q2, f1, f1, (i8.q2)
+ Project: i8.q1, i8.q2, f1, f1, (i8.q2)
-> Seq Scan on public.int8_tbl i8
Output: i8.q1, i8.q2
-> Result
- Output: f1, f1, i8.q2
+ Project: f1, f1, i8.q2
One-Time Filter: false
(7 rows)
@@ -5600,18 +5600,18 @@ select * from
QUERY PLAN
----------------------------------------------------------------------
Nested Loop
- Output: "*VALUES*".column1, "*VALUES*".column2, int4_tbl.f1
+ Project: "*VALUES*".column1, "*VALUES*".column2, int4_tbl.f1
-> Values Scan on "*VALUES*"
Output: "*VALUES*".column1, "*VALUES*".column2
-> Nested Loop Semi Join
- Output: int4_tbl.f1
+ Project: int4_tbl.f1
Join Filter: (int4_tbl.f1 = tenk1.unique1)
-> Seq Scan on public.int4_tbl
Output: int4_tbl.f1
-> Materialize
Output: tenk1.unique1
-> Index Scan using tenk1_unique2 on public.tenk1
- Output: tenk1.unique1
+ Project: tenk1.unique1
Index Cond: (tenk1.unique2 = "*VALUES*".column2)
(14 rows)
@@ -5636,14 +5636,14 @@ lateral (select * from int8_tbl t1,
where q2 = (select greatest(t1.q1,t2.q2))
and (select v.id=0)) offset 0) ss2) ss
where t1.q1 = ss.q2) ss0;
- QUERY PLAN
------------------------------------------------------------------
+ QUERY PLAN
+------------------------------------------------------------------
Nested Loop
- Output: "*VALUES*".column1, t1.q1, t1.q2, ss2.q1, ss2.q2
+ Project: "*VALUES*".column1, t1.q1, t1.q2, ss2.q1, ss2.q2
-> Seq Scan on public.int8_tbl t1
Output: t1.q1, t1.q2
-> Nested Loop
- Output: "*VALUES*".column1, ss2.q1, ss2.q2
+ Project: "*VALUES*".column1, ss2.q1, ss2.q2
-> Values Scan on "*VALUES*"
Output: "*VALUES*".column1
-> Subquery Scan on ss2
@@ -5654,14 +5654,14 @@ lateral (select * from int8_tbl t1,
Filter: (SubPlan 3)
SubPlan 3
-> Result
- Output: t3.q2
+ Project: t3.q2
One-Time Filter: $4
InitPlan 1 (returns $2)
-> Result
- Output: GREATEST($0, t2.q2)
+ Project: GREATEST($0, t2.q2)
InitPlan 2 (returns $4)
-> Result
- Output: ($3 = 0)
+ Project: ($3 = 0)
-> Seq Scan on public.int8_tbl t3
Output: t3.q1, t3.q2
Filter: (t3.q2 = $2)
@@ -5785,11 +5785,11 @@ select t1.b, ss.phv from join_ut1 t1 left join lateral
Output: t1.b, (LEAST(t1.a, t2.a, t3.a)), t1.a
Sort Key: t1.a
-> Nested Loop Left Join
- Output: t1.b, (LEAST(t1.a, t2.a, t3.a)), t1.a
+ Project: t1.b, (LEAST(t1.a, t2.a, t3.a)), t1.a
-> Seq Scan on public.join_ut1 t1
Output: t1.a, t1.b, t1.c
-> Hash Join
- Output: t2.a, LEAST(t1.a, t2.a, t3.a)
+ Project: t2.a, LEAST(t1.a, t2.a, t3.a)
Hash Cond: (t3.b = t2.a)
-> Seq Scan on public.join_ut1 t3
Output: t3.a, t3.b, t3.c
@@ -5797,10 +5797,10 @@ select t1.b, ss.phv from join_ut1 t1 left join lateral
Output: t2.a
-> Append
-> Seq Scan on public.join_pt1p1p1 t2
- Output: t2.a
+ Project: t2.a
Filter: (t1.a = t2.a)
-> Seq Scan on public.join_pt1p2 t2_1
- Output: t2_1.a
+ Project: t2_1.a
Filter: (t1.a = t2_1.a)
(21 rows)
@@ -5869,7 +5869,7 @@ select * from j1 inner join j2 on j1.id = j2.id;
QUERY PLAN
-----------------------------------
Hash Join
- Output: j1.id, j2.id
+ Project: j1.id, j2.id
Inner Unique: true
Hash Cond: (j1.id = j2.id)
-> Seq Scan on public.j1
@@ -5886,7 +5886,7 @@ select * from j1 inner join j2 on j1.id > j2.id;
QUERY PLAN
-----------------------------------
Nested Loop
- Output: j1.id, j2.id
+ Project: j1.id, j2.id
Join Filter: (j1.id > j2.id)
-> Seq Scan on public.j1
Output: j1.id
@@ -5902,7 +5902,7 @@ select * from j1 inner join j3 on j1.id = j3.id;
QUERY PLAN
-----------------------------------
Hash Join
- Output: j1.id, j3.id
+ Project: j1.id, j3.id
Inner Unique: true
Hash Cond: (j3.id = j1.id)
-> Seq Scan on public.j3
@@ -5919,7 +5919,7 @@ select * from j1 left join j2 on j1.id = j2.id;
QUERY PLAN
-----------------------------------
Hash Left Join
- Output: j1.id, j2.id
+ Project: j1.id, j2.id
Inner Unique: true
Hash Cond: (j1.id = j2.id)
-> Seq Scan on public.j1
@@ -5936,7 +5936,7 @@ select * from j1 right join j2 on j1.id = j2.id;
QUERY PLAN
-----------------------------------
Hash Left Join
- Output: j1.id, j2.id
+ Project: j1.id, j2.id
Inner Unique: true
Hash Cond: (j2.id = j1.id)
-> Seq Scan on public.j2
@@ -5953,7 +5953,7 @@ select * from j1 full join j2 on j1.id = j2.id;
QUERY PLAN
-----------------------------------
Hash Full Join
- Output: j1.id, j2.id
+ Project: j1.id, j2.id
Inner Unique: true
Hash Cond: (j1.id = j2.id)
-> Seq Scan on public.j1
@@ -5970,7 +5970,7 @@ select * from j1 cross join j2;
QUERY PLAN
-----------------------------------
Nested Loop
- Output: j1.id, j2.id
+ Project: j1.id, j2.id
-> Seq Scan on public.j1
Output: j1.id
-> Materialize
@@ -5985,7 +5985,7 @@ select * from j1 natural join j2;
QUERY PLAN
-----------------------------------
Hash Join
- Output: j1.id
+ Project: j1.id
Inner Unique: true
Hash Cond: (j1.id = j2.id)
-> Seq Scan on public.j1
@@ -6003,7 +6003,7 @@ inner join (select distinct id from j3) j3 on j1.id = j3.id;
QUERY PLAN
-----------------------------------------
Nested Loop
- Output: j1.id, j3.id
+ Project: j1.id, j3.id
Inner Unique: true
Join Filter: (j1.id = j3.id)
-> Unique
@@ -6024,11 +6024,11 @@ inner join (select id from j3 group by id) j3 on j1.id = j3.id;
QUERY PLAN
-----------------------------------------
Nested Loop
- Output: j1.id, j3.id
+ Project: j1.id, j3.id
Inner Unique: true
Join Filter: (j1.id = j3.id)
-> Group
- Output: j3.id
+ Project: j3.id
Group Key: j3.id
-> Sort
Output: j3.id
@@ -6057,10 +6057,10 @@ analyze j3;
explain (verbose, costs off)
select * from j1
inner join j2 on j1.id1 = j2.id1;
- QUERY PLAN
-------------------------------------------
+ QUERY PLAN
+-------------------------------------------
Nested Loop
- Output: j1.id1, j1.id2, j2.id1, j2.id2
+ Project: j1.id1, j1.id2, j2.id1, j2.id2
Join Filter: (j1.id1 = j2.id1)
-> Seq Scan on public.j2
Output: j2.id1, j2.id2
@@ -6075,7 +6075,7 @@ inner join j2 on j1.id1 = j2.id1 and j1.id2 = j2.id2;
QUERY PLAN
----------------------------------------------------------
Nested Loop
- Output: j1.id1, j1.id2, j2.id1, j2.id2
+ Project: j1.id1, j1.id2, j2.id1, j2.id2
Inner Unique: true
Join Filter: ((j1.id1 = j2.id1) AND (j1.id2 = j2.id2))
-> Seq Scan on public.j2
@@ -6089,10 +6089,10 @@ inner join j2 on j1.id1 = j2.id1 and j1.id2 = j2.id2;
explain (verbose, costs off)
select * from j1
inner join j2 on j1.id1 = j2.id1 where j1.id2 = 1;
- QUERY PLAN
-------------------------------------------
+ QUERY PLAN
+-------------------------------------------
Nested Loop
- Output: j1.id1, j1.id2, j2.id1, j2.id2
+ Project: j1.id1, j1.id2, j2.id1, j2.id2
Join Filter: (j1.id1 = j2.id1)
-> Seq Scan on public.j1
Output: j1.id1, j1.id2
@@ -6105,10 +6105,10 @@ inner join j2 on j1.id1 = j2.id1 where j1.id2 = 1;
explain (verbose, costs off)
select * from j1
left join j2 on j1.id1 = j2.id1 where j1.id2 = 1;
- QUERY PLAN
-------------------------------------------
+ QUERY PLAN
+-------------------------------------------
Nested Loop Left Join
- Output: j1.id1, j1.id2, j2.id1, j2.id2
+ Project: j1.id1, j1.id2, j2.id1, j2.id2
Join Filter: (j1.id1 = j2.id1)
-> Seq Scan on public.j1
Output: j1.id1, j1.id2
@@ -6166,12 +6166,12 @@ where exists (select 1 from tenk1 t3
QUERY PLAN
---------------------------------------------------------------------------------
Nested Loop
- Output: t1.unique1, t2.hundred
+ Project: t1.unique1, t2.hundred
-> Hash Join
- Output: t1.unique1, t3.tenthous
+ Project: t1.unique1, t3.tenthous
Hash Cond: (t3.thousand = t1.unique1)
-> HashAggregate
- Output: t3.thousand, t3.tenthous
+ Project: t3.thousand, t3.tenthous
Group Key: t3.thousand, t3.tenthous
-> Index Only Scan using tenk1_thous_tenthous on public.tenk1 t3
Output: t3.thousand, t3.tenthous
@@ -6198,9 +6198,9 @@ where exists (select 1 from j3
QUERY PLAN
------------------------------------------------------------------------
Nested Loop
- Output: t1.unique1, t2.hundred
+ Project: t1.unique1, t2.hundred
-> Nested Loop
- Output: t1.unique1, j3.tenthous
+ Project: t1.unique1, j3.tenthous
-> Index Only Scan using onek_unique1 on public.onek t1
Output: t1.unique1
Index Cond: (t1.unique1 < 1)
diff --git a/src/test/regress/expected/join_hash.out b/src/test/regress/expected/join_hash.out
index 3a91c144a27..4e405ebbd76 100644
--- a/src/test/regress/expected/join_hash.out
+++ b/src/test/regress/expected/join_hash.out
@@ -913,36 +913,36 @@ WHERE
AND (SELECT hjtest_1.b * 5) < 50
AND (SELECT hjtest_2.c * 5) < 55
AND hjtest_1.a <> hjtest_2.b;
- QUERY PLAN
-------------------------------------------------------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------------------------------------
Hash Join
- Output: hjtest_1.a, hjtest_2.a, (hjtest_1.tableoid)::regclass, (hjtest_2.tableoid)::regclass
+ Project: hjtest_1.a, hjtest_2.a, (hjtest_1.tableoid)::regclass, (hjtest_2.tableoid)::regclass
Hash Cond: ((hjtest_1.id = (SubPlan 1)) AND ((SubPlan 2) = (SubPlan 3)))
Join Filter: (hjtest_1.a <> hjtest_2.b)
-> Seq Scan on public.hjtest_1
- Output: hjtest_1.a, hjtest_1.tableoid, hjtest_1.id, hjtest_1.b
+ Project: hjtest_1.a, hjtest_1.tableoid, hjtest_1.id, hjtest_1.b
Filter: ((SubPlan 4) < 50)
SubPlan 4
-> Result
- Output: (hjtest_1.b * 5)
+ Project: (hjtest_1.b * 5)
-> Hash
Output: hjtest_2.a, hjtest_2.tableoid, hjtest_2.id, hjtest_2.c, hjtest_2.b
-> Seq Scan on public.hjtest_2
- Output: hjtest_2.a, hjtest_2.tableoid, hjtest_2.id, hjtest_2.c, hjtest_2.b
+ Project: hjtest_2.a, hjtest_2.tableoid, hjtest_2.id, hjtest_2.c, hjtest_2.b
Filter: ((SubPlan 5) < 55)
SubPlan 5
-> Result
- Output: (hjtest_2.c * 5)
+ Project: (hjtest_2.c * 5)
SubPlan 1
-> Result
- Output: 1
+ Project: 1
One-Time Filter: (hjtest_2.id = 1)
SubPlan 3
-> Result
- Output: (hjtest_2.c * 5)
+ Project: (hjtest_2.c * 5)
SubPlan 2
-> Result
- Output: (hjtest_1.b * 5)
+ Project: (hjtest_1.b * 5)
(28 rows)
SELECT hjtest_1.a a1, hjtest_2.a a2,hjtest_1.tableoid::regclass t1, hjtest_2.tableoid::regclass t2
@@ -967,36 +967,36 @@ WHERE
AND (SELECT hjtest_1.b * 5) < 50
AND (SELECT hjtest_2.c * 5) < 55
AND hjtest_1.a <> hjtest_2.b;
- QUERY PLAN
-------------------------------------------------------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------------------------------------
Hash Join
- Output: hjtest_1.a, hjtest_2.a, (hjtest_1.tableoid)::regclass, (hjtest_2.tableoid)::regclass
+ Project: hjtest_1.a, hjtest_2.a, (hjtest_1.tableoid)::regclass, (hjtest_2.tableoid)::regclass
Hash Cond: (((SubPlan 1) = hjtest_1.id) AND ((SubPlan 3) = (SubPlan 2)))
Join Filter: (hjtest_1.a <> hjtest_2.b)
-> Seq Scan on public.hjtest_2
- Output: hjtest_2.a, hjtest_2.tableoid, hjtest_2.id, hjtest_2.c, hjtest_2.b
+ Project: hjtest_2.a, hjtest_2.tableoid, hjtest_2.id, hjtest_2.c, hjtest_2.b
Filter: ((SubPlan 5) < 55)
SubPlan 5
-> Result
- Output: (hjtest_2.c * 5)
+ Project: (hjtest_2.c * 5)
-> Hash
Output: hjtest_1.a, hjtest_1.tableoid, hjtest_1.id, hjtest_1.b
-> Seq Scan on public.hjtest_1
- Output: hjtest_1.a, hjtest_1.tableoid, hjtest_1.id, hjtest_1.b
+ Project: hjtest_1.a, hjtest_1.tableoid, hjtest_1.id, hjtest_1.b
Filter: ((SubPlan 4) < 50)
SubPlan 4
-> Result
- Output: (hjtest_1.b * 5)
+ Project: (hjtest_1.b * 5)
SubPlan 2
-> Result
- Output: (hjtest_1.b * 5)
+ Project: (hjtest_1.b * 5)
SubPlan 1
-> Result
- Output: 1
+ Project: 1
One-Time Filter: (hjtest_2.id = 1)
SubPlan 3
-> Result
- Output: (hjtest_2.c * 5)
+ Project: (hjtest_2.c * 5)
(28 rows)
SELECT hjtest_1.a a1, hjtest_2.a a2,hjtest_1.tableoid::regclass t1, hjtest_2.tableoid::regclass t2
diff --git a/src/test/regress/expected/limit.out b/src/test/regress/expected/limit.out
index c18f547cbd3..5b247e74b77 100644
--- a/src/test/regress/expected/limit.out
+++ b/src/test/regress/expected/limit.out
@@ -316,12 +316,12 @@ create temp sequence testseq;
explain (verbose, costs off)
select unique1, unique2, nextval('testseq')
from tenk1 order by unique2 limit 10;
- QUERY PLAN
-----------------------------------------------------------------
+ QUERY PLAN
+-----------------------------------------------------------------
Limit
Output: unique1, unique2, (nextval('testseq'::regclass))
-> Index Scan using tenk1_unique2 on public.tenk1
- Output: unique1, unique2, nextval('testseq'::regclass)
+ Project: unique1, unique2, nextval('testseq'::regclass)
(4 rows)
select unique1, unique2, nextval('testseq')
@@ -349,17 +349,17 @@ select currval('testseq');
explain (verbose, costs off)
select unique1, unique2, nextval('testseq')
from tenk1 order by tenthous limit 10;
- QUERY PLAN
---------------------------------------------------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------------
Limit
Output: unique1, unique2, (nextval('testseq'::regclass)), tenthous
-> Result
- Output: unique1, unique2, nextval('testseq'::regclass), tenthous
+ Project: unique1, unique2, nextval('testseq'::regclass), tenthous
-> Sort
Output: unique1, unique2, tenthous
Sort Key: tenk1.tenthous
-> Seq Scan on public.tenk1
- Output: unique1, unique2, tenthous
+ Project: unique1, unique2, tenthous
(9 rows)
select unique1, unique2, nextval('testseq')
@@ -423,7 +423,7 @@ select unique1, unique2, generate_series(1,10)
Output: unique1, unique2, tenthous
Sort Key: tenk1.tenthous
-> Seq Scan on public.tenk1
- Output: unique1, unique2, tenthous
+ Project: unique1, unique2, tenthous
(9 rows)
select unique1, unique2, generate_series(1,10)
@@ -483,12 +483,12 @@ order by s2 desc;
explain (verbose, costs off)
select sum(tenthous) as s1, sum(tenthous) + random()*0 as s2
from tenk1 group by thousand order by thousand limit 3;
- QUERY PLAN
--------------------------------------------------------------------------------------------------------------------
+ QUERY PLAN
+--------------------------------------------------------------------------------------------------------------------
Limit
Output: (sum(tenthous)), (((sum(tenthous))::double precision + (random() * '0'::double precision))), thousand
-> GroupAggregate
- Output: sum(tenthous), ((sum(tenthous))::double precision + (random() * '0'::double precision)), thousand
+ Project: sum(tenthous), ((sum(tenthous))::double precision + (random() * '0'::double precision)), thousand
Group Key: tenk1.thousand
-> Index Only Scan using tenk1_thous_tenthous on public.tenk1
Output: thousand, tenthous
diff --git a/src/test/regress/expected/plpgsql.out b/src/test/regress/expected/plpgsql.out
index e85b29455e5..92421090755 100644
--- a/src/test/regress/expected/plpgsql.out
+++ b/src/test/regress/expected/plpgsql.out
@@ -4832,9 +4832,9 @@ select i, a from
QUERY PLAN
-----------------------------------------------------------------
Nested Loop
- Output: i.i, (returns_rw_array(1))
+ Project: i.i, (returns_rw_array(1))
-> Result
- Output: returns_rw_array(1)
+ Project: returns_rw_array(1)
-> Function Scan on public.consumes_rw_array i
Output: i.i
Function Call: consumes_rw_array((returns_rw_array(1)))
@@ -4853,7 +4853,7 @@ select consumes_rw_array(a), a from returns_rw_array(1) a;
QUERY PLAN
--------------------------------------------
Function Scan on public.returns_rw_array a
- Output: consumes_rw_array(a), a
+ Project: consumes_rw_array(a), a
Function Call: returns_rw_array(1)
(3 rows)
@@ -4866,10 +4866,10 @@ select consumes_rw_array(a), a from returns_rw_array(1) a;
explain (verbose, costs off)
select consumes_rw_array(a), a from
(values (returns_rw_array(1)), (returns_rw_array(2))) v(a);
- QUERY PLAN
----------------------------------------------------------------------
+ QUERY PLAN
+----------------------------------------------------------------------
Values Scan on "*VALUES*"
- Output: consumes_rw_array("*VALUES*".column1), "*VALUES*".column1
+ Project: consumes_rw_array("*VALUES*".column1), "*VALUES*".column1
(2 rows)
select consumes_rw_array(a), a from
@@ -5207,7 +5207,7 @@ UPDATE transition_table_base
SET val = '*' || val || '*'
WHERE id BETWEEN 2 AND 3;
INFO: Hash Full Join
- Output: COALESCE(ot.id, nt.id), ot.val, nt.val
+ Project: COALESCE(ot.id, nt.id), ot.val, nt.val
Hash Cond: (ot.id = nt.id)
-> Named Tuplestore Scan
Output: ot.id, ot.val
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
index 36a59291139..175e568ab40 100644
--- a/src/test/regress/expected/rangefuncs.out
+++ b/src/test/regress/expected/rangefuncs.out
@@ -2003,7 +2003,7 @@ select x from int8_tbl, extractq2(int8_tbl) f(x);
QUERY PLAN
------------------------------------------
Nested Loop
- Output: f.x
+ Project: f.x
-> Seq Scan on public.int8_tbl
Output: int8_tbl.q1, int8_tbl.q2
-> Function Scan on f
@@ -2029,11 +2029,11 @@ select x from int8_tbl, extractq2_2(int8_tbl) f(x);
QUERY PLAN
-----------------------------------
Nested Loop
- Output: ((int8_tbl.*).q2)
+ Project: ((int8_tbl.*).q2)
-> Seq Scan on public.int8_tbl
- Output: int8_tbl.*
+ Project: int8_tbl.*
-> Result
- Output: (int8_tbl.*).q2
+ Project: (int8_tbl.*).q2
(6 rows)
select x from int8_tbl, extractq2_2(int8_tbl) f(x);
@@ -2055,7 +2055,7 @@ select x from int8_tbl, extractq2_2_opt(int8_tbl) f(x);
QUERY PLAN
-----------------------------
Seq Scan on public.int8_tbl
- Output: int8_tbl.q2
+ Project: int8_tbl.q2
(2 rows)
select x from int8_tbl, extractq2_2_opt(int8_tbl) f(x);
diff --git a/src/test/regress/expected/rowsecurity.out b/src/test/regress/expected/rowsecurity.out
index d01769299e4..e2ae42f78ac 100644
--- a/src/test/regress/expected/rowsecurity.out
+++ b/src/test/regress/expected/rowsecurity.out
@@ -3972,12 +3972,12 @@ INSERT INTO rls_tbl
--------------------------------------------------------------------
Insert on regress_rls_schema.rls_tbl
-> Subquery Scan on ss
- Output: ss.b, ss.c, NULL::integer
+ Project: ss.b, ss.c, NULL::integer
-> Sort
Output: rls_tbl_1.b, rls_tbl_1.c, rls_tbl_1.a
Sort Key: rls_tbl_1.a
-> Seq Scan on regress_rls_schema.rls_tbl rls_tbl_1
- Output: rls_tbl_1.b, rls_tbl_1.c, rls_tbl_1.a
+ Project: rls_tbl_1.b, rls_tbl_1.c, rls_tbl_1.a
Filter: (rls_tbl_1.* >= '(1,1,1)'::record)
(9 rows)
diff --git a/src/test/regress/expected/rowtypes.out b/src/test/regress/expected/rowtypes.out
index 2a273f84049..b8e902570fd 100644
--- a/src/test/regress/expected/rowtypes.out
+++ b/src/test/regress/expected/rowtypes.out
@@ -1166,10 +1166,10 @@ explain (verbose, costs off)
select r, r is null as isnull, r is not null as isnotnull
from (values (1,row(1,2)), (1,row(null,null)), (1,null),
(null,row(1,2)), (null,row(null,null)), (null,null) ) r(a,b);
- QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ QUERY PLAN
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Values Scan on "*VALUES*"
- Output: ROW("*VALUES*".column1, "*VALUES*".column2), (("*VALUES*".column1 IS NULL) AND ("*VALUES*".column2 IS NOT DISTINCT FROM NULL)), (("*VALUES*".column1 IS NOT NULL) AND ("*VALUES*".column2 IS DISTINCT FROM NULL))
+ Project: ROW("*VALUES*".column1, "*VALUES*".column2), (("*VALUES*".column1 IS NULL) AND ("*VALUES*".column2 IS NOT DISTINCT FROM NULL)), (("*VALUES*".column1 IS NOT NULL) AND ("*VALUES*".column2 IS DISTINCT FROM NULL))
(2 rows)
select r, r is null as isnull, r is not null as isnotnull
@@ -1193,7 +1193,7 @@ select r, r is null as isnull, r is not null as isnotnull from r;
QUERY PLAN
----------------------------------------------------------
CTE Scan on r
- Output: r.*, (r.* IS NULL), (r.* IS NOT NULL)
+ Project: r.*, (r.* IS NULL), (r.* IS NOT NULL)
CTE r
-> Values Scan on "*VALUES*"
Output: "*VALUES*".column1, "*VALUES*".column2
diff --git a/src/test/regress/expected/select_distinct.out b/src/test/regress/expected/select_distinct.out
index f3696c6d1de..fc93b33ee2b 100644
--- a/src/test/regress/expected/select_distinct.out
+++ b/src/test/regress/expected/select_distinct.out
@@ -130,15 +130,15 @@ SELECT DISTINCT p.age FROM person* p ORDER BY age using >;
EXPLAIN (VERBOSE, COSTS OFF)
SELECT count(*) FROM
(SELECT DISTINCT two, four, two FROM tenk1) ss;
- QUERY PLAN
---------------------------------------------------------
+ QUERY PLAN
+---------------------------------------------------------
Aggregate
- Output: count(*)
+ Project: count(*)
-> HashAggregate
- Output: tenk1.two, tenk1.four, tenk1.two
+ Project: tenk1.two, tenk1.four, tenk1.two
Group Key: tenk1.two, tenk1.four, tenk1.two
-> Seq Scan on public.tenk1
- Output: tenk1.two, tenk1.four, tenk1.two
+ Project: tenk1.two, tenk1.four, tenk1.two
(7 rows)
SELECT count(*) FROM
diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out
index 0eca76cb41e..3c03b171707 100644
--- a/src/test/regress/expected/select_parallel.out
+++ b/src/test/regress/expected/select_parallel.out
@@ -212,10 +212,10 @@ select sp_parallel_restricted(unique1) from tenk1
Output: (sp_parallel_restricted(unique1))
Sort Key: (sp_parallel_restricted(tenk1.unique1))
-> Gather
- Output: sp_parallel_restricted(unique1)
+ Project: sp_parallel_restricted(unique1)
Workers Planned: 4
-> Parallel Seq Scan on public.tenk1
- Output: unique1
+ Project: unique1
Filter: (tenk1.stringu1 = 'GRAAAA'::name)
(9 rows)
@@ -649,12 +649,12 @@ explain (costs off, verbose)
Output: ten, (sp_simple_func(ten))
Workers Planned: 4
-> Result
- Output: ten, sp_simple_func(ten)
+ Project: ten, sp_simple_func(ten)
-> Sort
Output: ten
Sort Key: tenk1.ten
-> Parallel Seq Scan on public.tenk1
- Output: ten
+ Project: ten
Filter: (tenk1.ten < 100)
(11 rows)
@@ -965,18 +965,18 @@ explain (costs off, verbose)
QUERY PLAN
----------------------------------------------------------------------------------------------
Aggregate
- Output: count(*)
+ Project: count(*)
-> Hash Semi Join
Hash Cond: ((a.unique1 = b.unique1) AND (a.two = (row_number() OVER (?))))
-> Gather
Output: a.unique1, a.two
Workers Planned: 4
-> Parallel Seq Scan on public.tenk1 a
- Output: a.unique1, a.two
+ Project: a.unique1, a.two
-> Hash
Output: b.unique1, (row_number() OVER (?))
-> WindowAgg
- Output: b.unique1, row_number() OVER (?)
+ Project: b.unique1, row_number() OVER (?)
-> Gather
Output: b.unique1
Workers Planned: 4
diff --git a/src/test/regress/expected/subselect.out b/src/test/regress/expected/subselect.out
index ee9c5db0d51..90fe9fe9802 100644
--- a/src/test/regress/expected/subselect.out
+++ b/src/test/regress/expected/subselect.out
@@ -233,23 +233,23 @@ SELECT *, pg_typeof(f1) FROM
-- ... unless there's context to suggest differently
explain (verbose, costs off) select '42' union all select '43';
- QUERY PLAN
-----------------------------
+ QUERY PLAN
+-----------------------------
Append
-> Result
- Output: '42'::text
+ Project: '42'::text
-> Result
- Output: '43'::text
+ Project: '43'::text
(5 rows)
explain (verbose, costs off) select '42' union all select 43;
- QUERY PLAN
---------------------
+ QUERY PLAN
+---------------------
Append
-> Result
- Output: 42
+ Project: 42
-> Result
- Output: 43
+ Project: 43
(5 rows)
-- check materialization of an initplan reference (bug #14524)
@@ -258,15 +258,15 @@ select 1 = all (select (select 1));
QUERY PLAN
-----------------------------------
Result
- Output: (SubPlan 2)
+ Project: (SubPlan 2)
SubPlan 2
-> Materialize
Output: ($0)
InitPlan 1 (returns $0)
-> Result
- Output: 1
+ Project: 1
-> Result
- Output: $0
+ Project: $0
(10 rows)
select 1 = all (select (select 1));
@@ -770,16 +770,16 @@ select * from outer_text where (f1, f2) not in (select * from inner_text);
--
explain (verbose, costs off)
select 'foo'::text in (select 'bar'::name union all select 'bar'::name);
- QUERY PLAN
--------------------------------------
+ QUERY PLAN
+--------------------------------------
Result
- Output: (hashed SubPlan 1)
+ Project: (hashed SubPlan 1)
SubPlan 1
-> Append
-> Result
- Output: 'bar'::name
+ Project: 'bar'::name
-> Result
- Output: 'bar'::name
+ Project: 'bar'::name
(8 rows)
select 'foo'::text in (select 'bar'::name union all select 'bar'::name);
@@ -818,27 +818,27 @@ explain (verbose, costs off)
QUERY PLAN
---------------------------
Values Scan on "*VALUES*"
- Output: $0, $1
+ Project: $0, $1
InitPlan 1 (returns $0)
-> Result
- Output: now()
+ Project: now()
InitPlan 2 (returns $1)
-> Result
- Output: now()
+ Project: now()
(8 rows)
explain (verbose, costs off)
select x, x from
(select (select random()) as x from (values(1),(2)) v(y)) ss;
- QUERY PLAN
-----------------------------------
+ QUERY PLAN
+-----------------------------------
Subquery Scan on ss
- Output: ss.x, ss.x
+ Project: ss.x, ss.x
-> Values Scan on "*VALUES*"
- Output: $0
+ Project: $0
InitPlan 1 (returns $0)
-> Result
- Output: random()
+ Project: random()
(7 rows)
explain (verbose, costs off)
@@ -847,14 +847,14 @@ explain (verbose, costs off)
QUERY PLAN
----------------------------------------------------------------------
Values Scan on "*VALUES*"
- Output: (SubPlan 1), (SubPlan 2)
+ Project: (SubPlan 1), (SubPlan 2)
SubPlan 1
-> Result
- Output: now()
+ Project: now()
One-Time Filter: ("*VALUES*".column1 = "*VALUES*".column1)
SubPlan 2
-> Result
- Output: now()
+ Project: now()
One-Time Filter: ("*VALUES*".column1 = "*VALUES*".column1)
(10 rows)
@@ -864,12 +864,12 @@ explain (verbose, costs off)
QUERY PLAN
----------------------------------------------------------------------------
Subquery Scan on ss
- Output: ss.x, ss.x
+ Project: ss.x, ss.x
-> Values Scan on "*VALUES*"
- Output: (SubPlan 1)
+ Project: (SubPlan 1)
SubPlan 1
-> Result
- Output: random()
+ Project: random()
One-Time Filter: ("*VALUES*".column1 = "*VALUES*".column1)
(8 rows)
@@ -936,7 +936,7 @@ select * from int4_tbl where
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop Semi Join
- Output: int4_tbl.f1
+ Project: int4_tbl.f1
Join Filter: (CASE WHEN (hashed SubPlan 1) THEN int4_tbl.f1 ELSE NULL::integer END = b.ten)
-> Seq Scan on public.int4_tbl
Output: int4_tbl.f1
@@ -961,10 +961,10 @@ select * from int4_tbl where
explain (verbose, costs off)
select * from int4_tbl o where (f1, f1) in
(select f1, generate_series(1,50) / 10 g from int4_tbl i group by f1);
- QUERY PLAN
--------------------------------------------------------------------
+ QUERY PLAN
+--------------------------------------------------------------------
Nested Loop Semi Join
- Output: o.f1
+ Project: o.f1
Join Filter: (o.f1 = "ANY_subquery".f1)
-> Seq Scan on public.int4_tbl o
Output: o.f1
@@ -974,11 +974,11 @@ select * from int4_tbl o where (f1, f1) in
Output: "ANY_subquery".f1, "ANY_subquery".g
Filter: ("ANY_subquery".f1 = "ANY_subquery".g)
-> Result
- Output: i.f1, ((generate_series(1, 50)) / 10)
+ Project: i.f1, ((generate_series(1, 50)) / 10)
-> ProjectSet
Output: generate_series(1, 50), i.f1
-> HashAggregate
- Output: i.f1
+ Project: i.f1
Group Key: i.f1
-> Seq Scan on public.int4_tbl i
Output: i.f1
@@ -1220,7 +1220,7 @@ select * from x where f1 = 1;
QUERY PLAN
----------------------------------
Seq Scan on public.subselect_tbl
- Output: subselect_tbl.f1
+ Project: subselect_tbl.f1
Filter: (subselect_tbl.f1 = 1)
(3 rows)
@@ -1235,17 +1235,17 @@ select * from x where f1 = 1;
Filter: (x.f1 = 1)
CTE x
-> Seq Scan on public.subselect_tbl
- Output: subselect_tbl.f1
+ Project: subselect_tbl.f1
(6 rows)
-- Stable functions are safe to inline
explain (verbose, costs off)
with x as (select * from (select f1, now() from subselect_tbl) ss)
select * from x where f1 = 1;
- QUERY PLAN
------------------------------------
+ QUERY PLAN
+------------------------------------
Seq Scan on public.subselect_tbl
- Output: subselect_tbl.f1, now()
+ Project: subselect_tbl.f1, now()
Filter: (subselect_tbl.f1 = 1)
(3 rows)
@@ -1253,46 +1253,46 @@ select * from x where f1 = 1;
explain (verbose, costs off)
with x as (select * from (select f1, random() from subselect_tbl) ss)
select * from x where f1 = 1;
- QUERY PLAN
-----------------------------------------------
+ QUERY PLAN
+-----------------------------------------------
CTE Scan on x
Output: x.f1, x.random
Filter: (x.f1 = 1)
CTE x
-> Seq Scan on public.subselect_tbl
- Output: subselect_tbl.f1, random()
+ Project: subselect_tbl.f1, random()
(6 rows)
-- SELECT FOR UPDATE cannot be inlined
explain (verbose, costs off)
with x as (select * from (select f1 from subselect_tbl for update) ss)
select * from x where f1 = 1;
- QUERY PLAN
---------------------------------------------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------
CTE Scan on x
Output: x.f1
Filter: (x.f1 = 1)
CTE x
-> Subquery Scan on ss
- Output: ss.f1
+ Project: ss.f1
-> LockRows
Output: subselect_tbl.f1, subselect_tbl.ctid
-> Seq Scan on public.subselect_tbl
- Output: subselect_tbl.f1, subselect_tbl.ctid
+ Project: subselect_tbl.f1, subselect_tbl.ctid
(10 rows)
-- Multiply-referenced CTEs are inlined only when requested
explain (verbose, costs off)
with x as (select * from (select f1, now() as n from subselect_tbl) ss)
select * from x, x x2 where x.n = x2.n;
- QUERY PLAN
--------------------------------------------
+ QUERY PLAN
+--------------------------------------------
Merge Join
- Output: x.f1, x.n, x2.f1, x2.n
+ Project: x.f1, x.n, x2.f1, x2.n
Merge Cond: (x.n = x2.n)
CTE x
-> Seq Scan on public.subselect_tbl
- Output: subselect_tbl.f1, now()
+ Project: subselect_tbl.f1, now()
-> Sort
Output: x.f1, x.n
Sort Key: x.n
@@ -1311,16 +1311,16 @@ select * from x, x x2 where x.n = x2.n;
QUERY PLAN
----------------------------------------------------------------------------
Result
- Output: subselect_tbl.f1, now(), subselect_tbl_1.f1, now()
+ Project: subselect_tbl.f1, now(), subselect_tbl_1.f1, now()
One-Time Filter: (now() = now())
-> Nested Loop
- Output: subselect_tbl.f1, subselect_tbl_1.f1
+ Project: subselect_tbl.f1, subselect_tbl_1.f1
-> Seq Scan on public.subselect_tbl
Output: subselect_tbl.f1, subselect_tbl.f2, subselect_tbl.f3
-> Materialize
Output: subselect_tbl_1.f1
-> Seq Scan on public.subselect_tbl subselect_tbl_1
- Output: subselect_tbl_1.f1
+ Project: subselect_tbl_1.f1
(11 rows)
-- Multiply-referenced CTEs can't be inlined if they contain outer self-refs
@@ -1341,7 +1341,7 @@ select * from x;
-> Values Scan on "*VALUES*"
Output: "*VALUES*".column1
-> Nested Loop
- Output: (z.a || z1.a)
+ Project: (z.a || z1.a)
Join Filter: (length((z.a || z1.a)) < 5)
CTE z
-> WorkTable Scan on x x_1
@@ -1450,10 +1450,10 @@ select * from (with y as (select * from x) select * from y) ss;
explain (verbose, costs off)
with x as (select 1 as y)
select * from (with x as (select 2 as y) select * from x) ss;
- QUERY PLAN
--------------
+ QUERY PLAN
+--------------
Result
- Output: 2
+ Project: 2
(2 rows)
-- Row marks are not pushed into CTEs
diff --git a/src/test/regress/expected/tsrf.out b/src/test/regress/expected/tsrf.out
index d47b5f6ec57..9fd94e24664 100644
--- a/src/test/regress/expected/tsrf.out
+++ b/src/test/regress/expected/tsrf.out
@@ -103,10 +103,10 @@ SELECT unnest(ARRAY[1, 2]) FROM few WHERE false;
explain (verbose, costs off)
SELECT * FROM few f1,
(SELECT unnest(ARRAY[1,2]) FROM few f2 WHERE false OFFSET 0) ss;
- QUERY PLAN
-------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------
Result
- Output: f1.id, f1.dataa, f1.datab, ss.unnest
+ Project: f1.id, f1.dataa, f1.datab, ss.unnest
One-Time Filter: false
(3 rows)
@@ -647,10 +647,10 @@ SELECT |@|ARRAY[1,2,3];
-- Some fun cases involving duplicate SRF calls
explain (verbose, costs off)
select generate_series(1,3) as x, generate_series(1,3) + 1 as xp1;
- QUERY PLAN
-------------------------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------
Result
- Output: (generate_series(1, 3)), ((generate_series(1, 3)) + 1)
+ Project: (generate_series(1, 3)), ((generate_series(1, 3)) + 1)
-> ProjectSet
Output: generate_series(1, 3)
-> Result
@@ -666,13 +666,13 @@ select generate_series(1,3) as x, generate_series(1,3) + 1 as xp1;
explain (verbose, costs off)
select generate_series(1,3)+1 order by generate_series(1,3);
- QUERY PLAN
-------------------------------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------------
Sort
Output: (((generate_series(1, 3)) + 1)), (generate_series(1, 3))
Sort Key: (generate_series(1, 3))
-> Result
- Output: ((generate_series(1, 3)) + 1), (generate_series(1, 3))
+ Project: ((generate_series(1, 3)) + 1), (generate_series(1, 3))
-> ProjectSet
Output: generate_series(1, 3)
-> Result
@@ -689,10 +689,10 @@ select generate_series(1,3)+1 order by generate_series(1,3);
-- Check that SRFs of same nesting level run in lockstep
explain (verbose, costs off)
select generate_series(1,3) as x, generate_series(3,6) + 1 as y;
- QUERY PLAN
-------------------------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------
Result
- Output: (generate_series(1, 3)), ((generate_series(3, 6)) + 1)
+ Project: (generate_series(1, 3)), ((generate_series(3, 6)) + 1)
-> ProjectSet
Output: generate_series(1, 3), generate_series(3, 6)
-> Result
diff --git a/src/test/regress/expected/updatable_views.out b/src/test/regress/expected/updatable_views.out
index 8443c24f18b..1060e31408b 100644
--- a/src/test/regress/expected/updatable_views.out
+++ b/src/test/regress/expected/updatable_views.out
@@ -1262,12 +1262,12 @@ SELECT * FROM rw_view1;
(4 rows)
EXPLAIN (verbose, costs off) UPDATE rw_view1 SET b = b + 1 RETURNING *;
- QUERY PLAN
--------------------------------------------------------------
+ QUERY PLAN
+--------------------------------------------------------------
Update on public.base_tbl
Output: base_tbl.a, base_tbl.b
-> Seq Scan on public.base_tbl
- Output: base_tbl.a, (base_tbl.b + 1), base_tbl.ctid
+ Project: base_tbl.a, (base_tbl.b + 1), base_tbl.ctid
(4 rows)
UPDATE rw_view1 SET b = b + 1 RETURNING *;
@@ -2288,7 +2288,7 @@ UPDATE v1 SET a=100 WHERE snoop(a) AND leakproof(a) AND a < 7 AND a != 6;
Update on public.t12
Update on public.t111
-> Index Scan using t1_a_idx on public.t1
- Output: 100, t1.b, t1.c, t1.ctid
+ Project: 100, t1.b, t1.c, t1.ctid
Index Cond: ((t1.a > 5) AND (t1.a < 7))
Filter: ((t1.a <> 6) AND (alternatives: SubPlan 1 or hashed SubPlan 2) AND snoop(t1.a) AND leakproof(t1.a))
SubPlan 1
@@ -2300,19 +2300,19 @@ UPDATE v1 SET a=100 WHERE snoop(a) AND leakproof(a) AND a < 7 AND a != 6;
SubPlan 2
-> Append
-> Seq Scan on public.t12 t12_2
- Output: t12_2.a
+ Project: t12_2.a
-> Seq Scan on public.t111 t111_2
- Output: t111_2.a
+ Project: t111_2.a
-> Index Scan using t11_a_idx on public.t11
- Output: 100, t11.b, t11.c, t11.d, t11.ctid
+ Project: 100, t11.b, t11.c, t11.d, t11.ctid
Index Cond: ((t11.a > 5) AND (t11.a < 7))
Filter: ((t11.a <> 6) AND (alternatives: SubPlan 1 or hashed SubPlan 2) AND snoop(t11.a) AND leakproof(t11.a))
-> Index Scan using t12_a_idx on public.t12
- Output: 100, t12.b, t12.c, t12.e, t12.ctid
+ Project: 100, t12.b, t12.c, t12.e, t12.ctid
Index Cond: ((t12.a > 5) AND (t12.a < 7))
Filter: ((t12.a <> 6) AND (alternatives: SubPlan 1 or hashed SubPlan 2) AND snoop(t12.a) AND leakproof(t12.a))
-> Index Scan using t111_a_idx on public.t111
- Output: 100, t111.b, t111.c, t111.d, t111.e, t111.ctid
+ Project: 100, t111.b, t111.c, t111.d, t111.e, t111.ctid
Index Cond: ((t111.a > 5) AND (t111.a < 7))
Filter: ((t111.a <> 6) AND (alternatives: SubPlan 1 or hashed SubPlan 2) AND snoop(t111.a) AND leakproof(t111.a))
(33 rows)
@@ -2338,7 +2338,7 @@ UPDATE v1 SET a=a+1 WHERE snoop(a) AND leakproof(a) AND a = 8;
Update on public.t12
Update on public.t111
-> Index Scan using t1_a_idx on public.t1
- Output: (t1.a + 1), t1.b, t1.c, t1.ctid
+ Project: (t1.a + 1), t1.b, t1.c, t1.ctid
Index Cond: ((t1.a > 5) AND (t1.a = 8))
Filter: ((alternatives: SubPlan 1 or hashed SubPlan 2) AND snoop(t1.a) AND leakproof(t1.a))
SubPlan 1
@@ -2350,19 +2350,19 @@ UPDATE v1 SET a=a+1 WHERE snoop(a) AND leakproof(a) AND a = 8;
SubPlan 2
-> Append
-> Seq Scan on public.t12 t12_2
- Output: t12_2.a
+ Project: t12_2.a
-> Seq Scan on public.t111 t111_2
- Output: t111_2.a
+ Project: t111_2.a
-> Index Scan using t11_a_idx on public.t11
- Output: (t11.a + 1), t11.b, t11.c, t11.d, t11.ctid
+ Project: (t11.a + 1), t11.b, t11.c, t11.d, t11.ctid
Index Cond: ((t11.a > 5) AND (t11.a = 8))
Filter: ((alternatives: SubPlan 1 or hashed SubPlan 2) AND snoop(t11.a) AND leakproof(t11.a))
-> Index Scan using t12_a_idx on public.t12
- Output: (t12.a + 1), t12.b, t12.c, t12.e, t12.ctid
+ Project: (t12.a + 1), t12.b, t12.c, t12.e, t12.ctid
Index Cond: ((t12.a > 5) AND (t12.a = 8))
Filter: ((alternatives: SubPlan 1 or hashed SubPlan 2) AND snoop(t12.a) AND leakproof(t12.a))
-> Index Scan using t111_a_idx on public.t111
- Output: (t111.a + 1), t111.b, t111.c, t111.d, t111.e, t111.ctid
+ Project: (t111.a + 1), t111.b, t111.c, t111.d, t111.e, t111.ctid
Index Cond: ((t111.a > 5) AND (t111.a = 8))
Filter: ((alternatives: SubPlan 1 or hashed SubPlan 2) AND snoop(t111.a) AND leakproof(t111.a))
(33 rows)
diff --git a/src/test/regress/expected/update.out b/src/test/regress/expected/update.out
index a24ecd61df8..59419ec692d 100644
--- a/src/test/regress/expected/update.out
+++ b/src/test/regress/expected/update.out
@@ -172,17 +172,17 @@ EXPLAIN (VERBOSE, COSTS OFF)
UPDATE update_test t
SET (a, b) = (SELECT b, a FROM update_test s WHERE s.a = t.a)
WHERE CURRENT_USER = SESSION_USER;
- QUERY PLAN
-------------------------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------
Update on public.update_test t
-> Result
- Output: $1, $2, t.c, (SubPlan 1 (returns $1,$2)), t.ctid
+ Project: $1, $2, t.c, (SubPlan 1 (returns $1,$2)), t.ctid
One-Time Filter: (CURRENT_USER = SESSION_USER)
-> Seq Scan on public.update_test t
- Output: t.c, t.a, t.ctid
+ Project: t.c, t.a, t.ctid
SubPlan 1 (returns $1,$2)
-> Seq Scan on public.update_test s
- Output: s.b, s.a
+ Project: s.b, s.a
Filter: (s.a = t.a)
(10 rows)
diff --git a/src/test/regress/expected/with.out b/src/test/regress/expected/with.out
index 2a2085556bb..05d2847dc84 100644
--- a/src/test/regress/expected/with.out
+++ b/src/test/regress/expected/with.out
@@ -2181,8 +2181,8 @@ SELECT * FROM parent;
EXPLAIN (VERBOSE, COSTS OFF)
WITH wcte AS ( INSERT INTO int8_tbl VALUES ( 42, 47 ) RETURNING q2 )
DELETE FROM a USING wcte WHERE aa = q2;
- QUERY PLAN
-----------------------------------------------------
+ QUERY PLAN
+-----------------------------------------------------
Delete on public.a
Delete on public.a
Delete on public.b
@@ -2192,35 +2192,35 @@ DELETE FROM a USING wcte WHERE aa = q2;
-> Insert on public.int8_tbl
Output: int8_tbl.q2
-> Result
- Output: '42'::bigint, '47'::bigint
+ Project: '42'::bigint, '47'::bigint
-> Nested Loop
- Output: a.ctid, wcte.*
+ Project: a.ctid, wcte.*
Join Filter: (a.aa = wcte.q2)
-> Seq Scan on public.a
- Output: a.ctid, a.aa
+ Project: a.ctid, a.aa
-> CTE Scan on wcte
- Output: wcte.*, wcte.q2
+ Project: wcte.*, wcte.q2
-> Nested Loop
- Output: b.ctid, wcte.*
+ Project: b.ctid, wcte.*
Join Filter: (b.aa = wcte.q2)
-> Seq Scan on public.b
- Output: b.ctid, b.aa
+ Project: b.ctid, b.aa
-> CTE Scan on wcte
- Output: wcte.*, wcte.q2
+ Project: wcte.*, wcte.q2
-> Nested Loop
- Output: c.ctid, wcte.*
+ Project: c.ctid, wcte.*
Join Filter: (c.aa = wcte.q2)
-> Seq Scan on public.c
- Output: c.ctid, c.aa
+ Project: c.ctid, c.aa
-> CTE Scan on wcte
- Output: wcte.*, wcte.q2
+ Project: wcte.*, wcte.q2
-> Nested Loop
- Output: d.ctid, wcte.*
+ Project: d.ctid, wcte.*
Join Filter: (d.aa = wcte.q2)
-> Seq Scan on public.d
- Output: d.ctid, d.aa
+ Project: d.ctid, d.aa
-> CTE Scan on wcte
- Output: wcte.*, wcte.q2
+ Project: wcte.*, wcte.q2
(38 rows)
-- error cases
diff --git a/src/test/regress/expected/xml.out b/src/test/regress/expected/xml.out
index 55b65ef324d..d0b69103eb3 100644
--- a/src/test/regress/expected/xml.out
+++ b/src/test/regress/expected/xml.out
@@ -1137,7 +1137,7 @@ EXPLAIN (COSTS OFF, VERBOSE) SELECT * FROM xmltableview1;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop
- Output: "xmltable".id, "xmltable"._id, "xmltable".country_name, "xmltable".country_id, "xmltable".region_id, "xmltable".size, "xmltable".unit, "xmltable".premier_name
+ Project: "xmltable".id, "xmltable"._id, "xmltable".country_name, "xmltable".country_id, "xmltable".region_id, "xmltable".size, "xmltable".unit, "xmltable".premier_name
-> Seq Scan on public.xmldata
Output: xmldata.data
-> Table Function Scan on "xmltable"
@@ -1313,7 +1313,7 @@ SELECT xmltable.*
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop
- Output: "xmltable".id, "xmltable"._id, "xmltable".country_name, "xmltable".country_id, "xmltable".region_id, "xmltable".size, "xmltable".unit, "xmltable".premier_name
+ Project: "xmltable".id, "xmltable"._id, "xmltable".country_name, "xmltable".country_id, "xmltable".region_id, "xmltable".size, "xmltable".unit, "xmltable".premier_name
-> Seq Scan on public.xmldata
Output: xmldata.data
-> Table Function Scan on "xmltable"
@@ -1333,7 +1333,7 @@ SELECT xmltable.* FROM xmldata, LATERAL xmltable('/ROWS/ROW[COUNTRY_NAME="Japan"
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop
- Output: "xmltable"."COUNTRY_NAME", "xmltable"."REGION_ID"
+ Project: "xmltable"."COUNTRY_NAME", "xmltable"."REGION_ID"
-> Seq Scan on public.xmldata
Output: xmldata.data
-> Table Function Scan on "xmltable"
@@ -1436,7 +1436,7 @@ SELECT xmltable.*
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop
- Output: "xmltable".id, "xmltable"._id, "xmltable".country_name, "xmltable".country_id, "xmltable".region_id, "xmltable".size, "xmltable".unit, "xmltable".premier_name
+ Project: "xmltable".id, "xmltable"._id, "xmltable".country_name, "xmltable".country_id, "xmltable".region_id, "xmltable".size, "xmltable".unit, "xmltable".premier_name
-> Seq Scan on public.xmldata
Output: xmldata.data
-> Table Function Scan on "xmltable"
diff --git a/src/test/regress/expected/xml_2.out b/src/test/regress/expected/xml_2.out
index 04842602817..95d56b3324e 100644
--- a/src/test/regress/expected/xml_2.out
+++ b/src/test/regress/expected/xml_2.out
@@ -1117,7 +1117,7 @@ EXPLAIN (COSTS OFF, VERBOSE) SELECT * FROM xmltableview1;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop
- Output: "xmltable".id, "xmltable"._id, "xmltable".country_name, "xmltable".country_id, "xmltable".region_id, "xmltable".size, "xmltable".unit, "xmltable".premier_name
+ Project: "xmltable".id, "xmltable"._id, "xmltable".country_name, "xmltable".country_id, "xmltable".region_id, "xmltable".size, "xmltable".unit, "xmltable".premier_name
-> Seq Scan on public.xmldata
Output: xmldata.data
-> Table Function Scan on "xmltable"
@@ -1293,7 +1293,7 @@ SELECT xmltable.*
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop
- Output: "xmltable".id, "xmltable"._id, "xmltable".country_name, "xmltable".country_id, "xmltable".region_id, "xmltable".size, "xmltable".unit, "xmltable".premier_name
+ Project: "xmltable".id, "xmltable"._id, "xmltable".country_name, "xmltable".country_id, "xmltable".region_id, "xmltable".size, "xmltable".unit, "xmltable".premier_name
-> Seq Scan on public.xmldata
Output: xmldata.data
-> Table Function Scan on "xmltable"
@@ -1313,7 +1313,7 @@ SELECT xmltable.* FROM xmldata, LATERAL xmltable('/ROWS/ROW[COUNTRY_NAME="Japan"
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop
- Output: "xmltable"."COUNTRY_NAME", "xmltable"."REGION_ID"
+ Project: "xmltable"."COUNTRY_NAME", "xmltable"."REGION_ID"
-> Seq Scan on public.xmldata
Output: xmldata.data
-> Table Function Scan on "xmltable"
@@ -1416,7 +1416,7 @@ SELECT xmltable.*
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop
- Output: "xmltable".id, "xmltable"._id, "xmltable".country_name, "xmltable".country_id, "xmltable".region_id, "xmltable".size, "xmltable".unit, "xmltable".premier_name
+ Project: "xmltable".id, "xmltable"._id, "xmltable".country_name, "xmltable".country_id, "xmltable".region_id, "xmltable".size, "xmltable".unit, "xmltable".premier_name
-> Seq Scan on public.xmldata
Output: xmldata.data
-> Table Function Scan on "xmltable"
--
2.23.0.385.gbc12974a89
Attachment: v2-0004-Add-EXPLAIN-option-jit_details-showing-per-expres.patch (text/x-diff)
From 3744226db2b5a8864cfd05fd695d412364d17104 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Mon, 28 Oct 2019 16:44:18 -0700
Subject: [PATCH v2 4/8] Add EXPLAIN option jit_details showing per-expression
information about JIT.
This is useful both to understand where JIT is applied (and thus where
to improve), and to be able to write regression tests verifying that we
can JIT compile specific parts of a query.
Note that currently the printed function names will make it harder to
use this for regression tests - a follow-up commit will improve that
angle.
Author:
Reviewed-By:
Discussion: https://postgr.es/m/
Backpatch:
---
src/backend/commands/explain.c | 144 ++++++++++++++++++++++++++--
src/backend/executor/execExpr.c | 7 ++
src/backend/jit/llvm/llvmjit_expr.c | 9 ++
src/include/commands/explain.h | 1 +
src/include/executor/execExpr.h | 6 ++
src/include/nodes/execnodes.h | 5 +
6 files changed, 163 insertions(+), 9 deletions(-)
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index ea6b39d5abb..3ccb76bdfd1 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -19,6 +19,7 @@
#include "commands/defrem.h"
#include "commands/prepare.h"
#include "executor/nodeHash.h"
+#include "executor/execExpr.h"
#include "foreign/fdwapi.h"
#include "jit/jit.h"
#include "nodes/extensible.h"
@@ -69,6 +70,7 @@ static void show_plan_tlist(PlanState *planstate, List *ancestors,
static void show_expression(Node *node, ExprState *expr, const char *qlabel,
PlanState *planstate, List *ancestors,
bool useprefix, ExplainState *es);
+static void show_jit_expr_details(ExprState *expr, ExplainState *es);
static void show_qual(List *qual, ExprState *expr, const char *qlabel,
PlanState *planstate, List *ancestors,
bool useprefix, ExplainState *es);
@@ -170,6 +172,8 @@ ExplainQuery(ParseState *pstate, ExplainStmt *stmt, const char *queryString,
timing_set = true;
es->timing = defGetBoolean(opt);
}
+ else if (strcmp(opt->defname, "jit_details") == 0)
+ es->jit_details = defGetBoolean(opt);
else if (strcmp(opt->defname, "summary") == 0)
{
summary_set = true;
@@ -560,12 +564,11 @@ ExplainOnePlan(PlannedStmt *plannedstmt, IntoClause *into, ExplainState *es,
ExplainPrintTriggers(es, queryDesc);
/*
- * Print info about JITing. Tied to es->costs because we don't want to
- * display this in regression tests, as it'd cause output differences
- * depending on build options. Might want to separate that out from COSTS
- * at a later stage.
+ * Print info about JITing. Tied to es->costs unless jit_details is set,
+ * because we don't want to display this in regression tests, as it'd
+ * cause output differences depending on build options.
*/
- if (es->costs)
+ if (es->costs || es->jit_details)
ExplainPrintJITSummary(es, queryDesc);
/*
@@ -2140,10 +2143,40 @@ show_plan_tlist(PlanState *planstate, List *ancestors, ExplainState *es)
}
/* Print results */
- if (planstate->ps_ProjInfo)
+ if (!planstate->ps_ProjInfo)
+ ExplainPropertyList("Output", result, es);
+ else if (!es->jit_details)
ExplainPropertyList("Project", result, es);
+ else if (es->format != EXPLAIN_FORMAT_TEXT)
+ {
+ ExplainOpenGroup("Project", "Project", true, es);
+
+ ExplainPropertyList("Expr", result, es);
+
+ if (planstate->ps_ProjInfo)
+ {
+ ExprState *expr = &planstate->ps_ProjInfo->pi_state;
+
+ show_jit_expr_details(expr, es);
+ }
+ ExplainCloseGroup("Project", "Project", true, es);
+ }
else
- ExplainPropertyList("Output", result, es);
+ {
+ ExplainPropertyList("Project", result, es);
+
+ if (planstate->ps_ProjInfo)
+ {
+ ExprState *expr = &planstate->ps_ProjInfo->pi_state;
+
+ /* XXX: remove \n, probably instead just open-code ExplainPropertyList */
+ es->str->len--;
+
+ appendStringInfoString(es->str, "; ");
+ show_jit_expr_details(expr, es);
+ appendStringInfoChar(es->str, '\n');
+ }
+ }
}
/*
@@ -2167,8 +2200,101 @@ show_expression(Node *node, ExprState *expr, const char *qlabel,
/* Deparse the expression */
exprstr = deparse_expression(node, context, useprefix, false);
- /* And add to es->str */
- ExplainPropertyText(qlabel, exprstr, es);
+ if (!es->jit_details)
+ ExplainPropertyText(qlabel, exprstr, es);
+ else if (es->format != EXPLAIN_FORMAT_TEXT)
+ {
+ ExplainOpenGroup(qlabel, qlabel, true, es);
+
+ ExplainPropertyText("Expr", exprstr, es);
+
+ if (expr != NULL)
+ show_jit_expr_details(expr, es);
+ ExplainCloseGroup(qlabel, qlabel, true, es);
+ }
+ else
+ {
+ appendStringInfoSpaces(es->str, es->indent * 2);
+ appendStringInfo(es->str, "%s: %s", qlabel, exprstr);
+
+ if (expr != NULL)
+ {
+ appendStringInfoString(es->str, "; ");
+
+ show_jit_expr_details(expr, es);
+ }
+
+ appendStringInfoChar(es->str, '\n');
+ }
+}
+
+static void
+show_jit_expr_details(ExprState *expr, ExplainState *es)
+{
+ if (expr == NULL)
+ return;
+
+ Assert(es->jit_details);
+
+ if (es->format == EXPLAIN_FORMAT_TEXT)
+ {
+ if (expr->flags & EEO_FLAG_JIT_EXPR)
+ appendStringInfo(es->str, "JIT-Expr: %s", expr->expr_funcname);
+ else
+ appendStringInfoString(es->str, "JIT-Expr: false");
+
+ /*
+ * Either show the function name for tuple deforming quoted in "", or
+ * false if JIT compilation was performed, but no code was generated
+ * for deforming the respective attribute.
+ */
+
+ if (expr->scan_funcname)
+ appendStringInfo(es->str, ", JIT-Deform-Scan: %s", expr->scan_funcname);
+ else if (expr->flags & EEO_FLAG_JIT_EXPR &&
+ expr->flags & EEO_FLAG_DEFORM_SCAN)
+ appendStringInfo(es->str, ", JIT-Deform-Scan: false");
+
+ if (expr->outer_funcname)
+ appendStringInfo(es->str, ", JIT-Deform-Outer: %s", expr->outer_funcname);
+ else if (expr->flags & EEO_FLAG_JIT_EXPR &&
+ expr->flags & EEO_FLAG_DEFORM_OUTER)
+ appendStringInfo(es->str, ", JIT-Deform-Outer: false");
+
+ if (expr->inner_funcname)
+ appendStringInfo(es->str, ", JIT-Deform-Inner: %s", expr->inner_funcname);
+ else if (expr->flags & EEO_FLAG_JIT_EXPR &&
+ expr->flags & (EEO_FLAG_DEFORM_INNER))
+ appendStringInfo(es->str, ", JIT-Deform-Inner: false");
+ }
+ else
+ {
+ if (expr->flags & EEO_FLAG_JIT_EXPR)
+ ExplainPropertyText("JIT-Expr", expr->expr_funcname, es);
+ else
+ ExplainPropertyBool("JIT-Expr", false, es);
+
+ if (expr->scan_funcname)
+ ExplainProperty("JIT-Deform-Scan", NULL, expr->scan_funcname, false, es);
+ else if (expr->flags & EEO_FLAG_DEFORM_SCAN)
+ ExplainProperty("JIT-Deform-Scan", NULL, "false", true, es);
+ else
+ ExplainProperty("JIT-Deform-Scan", NULL, "null", true, es);
+
+ if (expr->outer_funcname)
+ ExplainProperty("JIT-Deform-Outer", NULL, expr->outer_funcname, false, es);
+ else if (expr->flags & EEO_FLAG_DEFORM_OUTER)
+ ExplainProperty("JIT-Deform-Outer", NULL, "false", true, es);
+ else
+ ExplainProperty("JIT-Deform-Outer", NULL, "null", true, es);
+
+ if (expr->inner_funcname)
+ ExplainProperty("JIT-Deform-Inner", NULL, expr->inner_funcname, false, es);
+ else if (expr->flags & EEO_FLAG_DEFORM_INNER)
+ ExplainProperty("JIT-Deform-Inner", NULL, "false", true, es);
+ else
+ ExplainProperty("JIT-Deform-Inner", NULL, "null", true, es);
+ }
}
/*
diff --git a/src/backend/executor/execExpr.c b/src/backend/executor/execExpr.c
index 7e486449eca..9005759cd06 100644
--- a/src/backend/executor/execExpr.c
+++ b/src/backend/executor/execExpr.c
@@ -2458,6 +2458,13 @@ ExecComputeSlotInfo(ExprState *state, ExprEvalStep *op)
if (op->d.fetch.fixed && op->d.fetch.kind == &TTSOpsVirtual)
return false;
+ if (opcode == EEOP_INNER_FETCHSOME)
+ state->flags |= EEO_FLAG_DEFORM_INNER;
+ else if (opcode == EEOP_OUTER_FETCHSOME)
+ state->flags |= EEO_FLAG_DEFORM_OUTER;
+ else if (opcode == EEOP_SCAN_FETCHSOME)
+ state->flags |= EEO_FLAG_DEFORM_SCAN;
+
return true;
}
diff --git a/src/backend/jit/llvm/llvmjit_expr.c b/src/backend/jit/llvm/llvmjit_expr.c
index 4ba8c78cbc9..be8d424c8d0 100644
--- a/src/backend/jit/llvm/llvmjit_expr.c
+++ b/src/backend/jit/llvm/llvmjit_expr.c
@@ -145,6 +145,7 @@ llvm_compile_expr(ExprState *state)
funcname = llvm_expand_funcname(context, "evalexpr");
context->base.instr.created_expr_functions++;
+ state->expr_funcname = funcname;
/* Create the signature and function */
{
@@ -336,6 +337,13 @@ llvm_compile_expr(ExprState *state)
LLVMBuildCall(b, l_jit_deform,
params, lengthof(params), "");
+
+ if (opcode == EEOP_INNER_FETCHSOME)
+ state->inner_funcname = pstrdup(LLVMGetValueName(l_jit_deform));
+ else if (opcode == EEOP_OUTER_FETCHSOME)
+ state->outer_funcname = pstrdup(LLVMGetValueName(l_jit_deform));
+ else
+ state->scan_funcname = pstrdup(LLVMGetValueName(l_jit_deform));
}
else
{
@@ -2462,6 +2470,7 @@ llvm_compile_expr(ExprState *state)
INSTR_TIME_SET_CURRENT(endtime);
INSTR_TIME_ACCUM_DIFF(context->base.instr.generation_counter,
endtime, starttime);
+ state->flags |= EEO_FLAG_JIT_EXPR;
return true;
}
diff --git a/src/include/commands/explain.h b/src/include/commands/explain.h
index 8639891c164..5dbbeb3a3c3 100644
--- a/src/include/commands/explain.h
+++ b/src/include/commands/explain.h
@@ -36,6 +36,7 @@ typedef struct ExplainState
bool timing; /* print detailed node timing */
bool summary; /* print total planning and execution timing */
bool settings; /* print modified settings */
+ bool jit_details; /* print per-expression details about JIT */
ExplainFormat format; /* output format */
/* state for output formatting --- not reset for each new plan tree */
int indent; /* current indentation level */
diff --git a/src/include/executor/execExpr.h b/src/include/executor/execExpr.h
index d21dbead0a2..5ebe50df888 100644
--- a/src/include/executor/execExpr.h
+++ b/src/include/executor/execExpr.h
@@ -26,6 +26,12 @@ struct SubscriptingRefState;
#define EEO_FLAG_INTERPRETER_INITIALIZED (1 << 1)
/* jump-threading is in use */
#define EEO_FLAG_DIRECT_THREADED (1 << 2)
+/* is expression jit compiled */
+#define EEO_FLAG_JIT_EXPR (1 << 3)
+/* does expression require tuple deforming */
+#define EEO_FLAG_DEFORM_INNER (1 << 4)
+#define EEO_FLAG_DEFORM_OUTER (1 << 5)
+#define EEO_FLAG_DEFORM_SCAN (1 << 6)
/* Typical API for out-of-line evaluation subroutines */
typedef void (*ExecEvalSubroutine) (ExprState *state,
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 44f76082e99..d0b290fb342 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -113,6 +113,11 @@ typedef struct ExprState
Datum *innermost_domainval;
bool *innermost_domainnull;
+
+ const char *expr_funcname;
+ const char *outer_funcname;
+ const char *inner_funcname;
+ const char *scan_funcname;
} ExprState;
--
2.23.0.385.gbc12974a89
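For illustration, with the v2-0004 patch applied the new option would be exercised roughly like this (a hypothetical session; the table name is made up, and output is omitted since the exact function names depend on the build):

```sql
-- Force JIT so the per-expression details are populated.
SET jit = on;
SET jit_above_cost = 0;

-- New EXPLAIN option from the patch; shows JIT-Expr and
-- JIT-Deform-{Scan,Outer,Inner} annotations per expression.
EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS)
SELECT sum(q1) FROM int8_tbl;
```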
Attachment: v2-0005-jit-explain-remove-backend-lifetime-module-count-.patch (text/x-diff)
From f5229dfa53ef219641e4c06c48f469ffcc0383c1 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Thu, 26 Sep 2019 14:05:08 -0700
Subject: [PATCH v2 5/8] jit: explain: remove backend lifetime module count
from function name.
Also expand the function name to include which module the function is
in - without that it's harder to analyze which functions were emitted
separately (a performance concern).
Author:
Reviewed-By:
Discussion: https://postgr.es/m/
Backpatch:
---
src/backend/commands/explain.c | 65 +++++++++++++++++++++++++++++-----
src/backend/jit/llvm/llvmjit.c | 18 +++++++---
src/include/jit/llvmjit.h | 5 ++-
3 files changed, 75 insertions(+), 13 deletions(-)
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 3ccb76bdfd1..02455865d9f 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -2228,6 +2228,43 @@ show_expression(Node *node, ExprState *expr, const char *qlabel,
}
}
+/*
+ * To make JIT explain output reproducible, remove the module generation from
+ * function names. That makes it a bit harder to correlate with profiles etc,
+ * but reproducibility is more important.
+ */
+static char *
+jit_funcname_for_display(const char *funcname)
+{
+ int func_counter; /* nth function in query */
+ size_t mod_num; /* nth module in query */
+ size_t mod_generation; /* nth module in backend */
+ int basename_end;
+ int matchcount = 0;
+
+ /*
+ * The pattern we need to match, see llvm_expand_funcname, is
+ * "%s_%zu_%d_mod_%zu". Find the fourth _ from the end, so a _ in the name
+ * is OK.
+ */
+ for (basename_end = strlen(funcname); basename_end >= 0; basename_end--)
+ {
+ if (funcname[basename_end] == '_' && ++matchcount == 4)
+ break;
+ }
+
+ /* couldn't parse, bail out */
+ if (matchcount != 4)
+ return pstrdup(funcname);
+
+ /* couldn't parse, bail out */
+ if (sscanf(funcname + basename_end, "_%zu_%d_mod_%zu",
+ &mod_num, &func_counter, &mod_generation) != 3)
+ return pstrdup(funcname);
+
+ return psprintf("%s_%zu_%d", pnstrdup(funcname, basename_end), mod_num, func_counter);
+}
+
static void
show_jit_expr_details(ExprState *expr, ExplainState *es)
{
@@ -2239,7 +2276,8 @@ show_jit_expr_details(ExprState *expr, ExplainState *es)
if (es->format == EXPLAIN_FORMAT_TEXT)
{
if (expr->flags & EEO_FLAG_JIT_EXPR)
- appendStringInfo(es->str, "JIT-Expr: %s", expr->expr_funcname);
+ appendStringInfo(es->str, "JIT-Expr: %s",
+ jit_funcname_for_display(expr->expr_funcname));
else
appendStringInfoString(es->str, "JIT-Expr: false");
@@ -2250,19 +2288,22 @@ show_jit_expr_details(ExprState *expr, ExplainState *es)
*/
if (expr->scan_funcname)
- appendStringInfo(es->str, ", JIT-Deform-Scan: %s", expr->scan_funcname);
+ appendStringInfo(es->str, ", JIT-Deform-Scan: %s",
+ jit_funcname_for_display(expr->scan_funcname));
else if (expr->flags & EEO_FLAG_JIT_EXPR &&
expr->flags & EEO_FLAG_DEFORM_SCAN)
appendStringInfo(es->str, ", JIT-Deform-Scan: false");
if (expr->outer_funcname)
- appendStringInfo(es->str, ", JIT-Deform-Outer: %s", expr->outer_funcname);
+ appendStringInfo(es->str, ", JIT-Deform-Outer: %s",
+ jit_funcname_for_display(expr->outer_funcname));
else if (expr->flags & EEO_FLAG_JIT_EXPR &&
expr->flags & EEO_FLAG_DEFORM_OUTER)
appendStringInfo(es->str, ", JIT-Deform-Outer: false");
if (expr->inner_funcname)
- appendStringInfo(es->str, ", JIT-Deform-Inner: %s", expr->inner_funcname);
+ appendStringInfo(es->str, ", JIT-Deform-Inner: %s",
+ jit_funcname_for_display(expr->inner_funcname));
else if (expr->flags & EEO_FLAG_JIT_EXPR &&
expr->flags & (EEO_FLAG_DEFORM_INNER))
appendStringInfo(es->str, ", JIT-Deform-Inner: false");
@@ -2270,26 +2311,34 @@ show_jit_expr_details(ExprState *expr, ExplainState *es)
else
{
if (expr->flags & EEO_FLAG_JIT_EXPR)
- ExplainPropertyText("JIT-Expr", expr->expr_funcname, es);
+ ExplainPropertyText("JIT-Expr",
+ jit_funcname_for_display(expr->expr_funcname),
+ es);
else
ExplainPropertyBool("JIT-Expr", false, es);
if (expr->scan_funcname)
- ExplainProperty("JIT-Deform-Scan", NULL, expr->scan_funcname, false, es);
+ ExplainProperty("JIT-Deform-Scan", NULL,
+ jit_funcname_for_display(expr->scan_funcname),
+ false, es);
else if (expr->flags & EEO_FLAG_DEFORM_SCAN)
ExplainProperty("JIT-Deform-Scan", NULL, "false", true, es);
else
ExplainProperty("JIT-Deform-Scan", NULL, "null", true, es);
if (expr->outer_funcname)
- ExplainProperty("JIT-Deform-Outer", NULL, expr->outer_funcname, false, es);
+ ExplainProperty("JIT-Deform-Outer", NULL,
+ jit_funcname_for_display(expr->outer_funcname),
+ false, es);
else if (expr->flags & EEO_FLAG_DEFORM_OUTER)
ExplainProperty("JIT-Deform-Outer", NULL, "false", true, es);
else
ExplainProperty("JIT-Deform-Outer", NULL, "null", true, es);
if (expr->inner_funcname)
- ExplainProperty("JIT-Deform-Inner", NULL, expr->inner_funcname, false, es);
+ ExplainProperty("JIT-Deform-Inner", NULL,
+ jit_funcname_for_display(expr->inner_funcname),
+ false, es);
else if (expr->flags & EEO_FLAG_DEFORM_INNER)
ExplainProperty("JIT-Deform-Inner", NULL, "false", true, es);
else
diff --git a/src/backend/jit/llvm/llvmjit.c b/src/backend/jit/llvm/llvmjit.c
index 5489e118041..177a00f3826 100644
--- a/src/backend/jit/llvm/llvmjit.c
+++ b/src/backend/jit/llvm/llvmjit.c
@@ -227,6 +227,8 @@ llvm_mutable_module(LLVMJitContext *context)
char *
llvm_expand_funcname(struct LLVMJitContext *context, const char *basename)
{
+ char *funcname;
+
Assert(context->module != NULL);
context->base.instr.created_functions++;
@@ -234,11 +236,19 @@ llvm_expand_funcname(struct LLVMJitContext *context, const char *basename)
/*
* Previously we used dots to separate, but turns out some tools, e.g.
* GDB, don't like that and truncate name.
+ *
+ * Append the backend-lifetime module count to the end, so it's easier for
+ * humans and machines to compare the generated function names across
+ * queries; the prefix will be the same from query execution to query
+ * execution.
*/
- return psprintf("%s_%zu_%d",
- basename,
- context->module_generation,
- context->counter++);
+ funcname = psprintf("%s_%zu_%d_mod_%zu",
+ basename,
+ context->base.instr.created_modules - 1,
+ context->counter++,
+ context->module_generation);
+
+ return funcname;
}
/*
diff --git a/src/include/jit/llvmjit.h b/src/include/jit/llvmjit.h
index 6178864b2e6..e45ff99194f 100644
--- a/src/include/jit/llvmjit.h
+++ b/src/include/jit/llvmjit.h
@@ -41,7 +41,10 @@ typedef struct LLVMJitContext
{
JitContext base;
- /* number of modules created */
+ /*
+ * llvm_generation when ->module was created, monotonically increasing
+ * within the lifetime of a backend.
+ */
size_t module_generation;
/* current, "open for write", module */
--
2.23.0.385.gbc12974a89
v2-0006-WIP-explain-Show-per-phase-information-about-aggr.patch (text/x-diff)
From ae28e068bb5af59b2cecd29ddf4cd2cf9d87ca84 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Thu, 26 Sep 2019 14:58:10 -0700
Subject: [PATCH v2 6/8] WIP: explain: Show per-phase information about
aggregates in verbose mode.
Author:
Reviewed-By:
Discussion: https://postgr.es/m/
Backpatch:
---
src/backend/commands/explain.c | 520 +++++++++++++-----
src/backend/executor/execExpr.c | 7 +-
src/backend/executor/nodeAgg.c | 4 +-
src/include/executor/executor.h | 3 +-
src/include/executor/nodeAgg.h | 3 +
src/test/regress/expected/aggregates.out | 32 +-
src/test/regress/expected/groupingsets.out | 329 ++++++-----
src/test/regress/expected/inherit.out | 9 +-
src/test/regress/expected/join.out | 5 +-
src/test/regress/expected/limit.out | 6 +-
.../regress/expected/partition_aggregate.out | 102 ++--
src/test/regress/expected/select_distinct.out | 8 +-
src/test/regress/expected/select_parallel.out | 5 +-
src/test/regress/expected/subselect.out | 5 +-
14 files changed, 679 insertions(+), 359 deletions(-)
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 02455865d9f..2f3bd8a459a 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -84,14 +84,6 @@ static void show_sort_keys(SortState *sortstate, List *ancestors,
ExplainState *es);
static void show_merge_append_keys(MergeAppendState *mstate, List *ancestors,
ExplainState *es);
-static void show_agg_keys(AggState *astate, List *ancestors,
- ExplainState *es);
-static void show_grouping_sets(PlanState *planstate, Agg *agg,
- List *ancestors, ExplainState *es);
-static void show_grouping_set_keys(PlanState *planstate,
- Agg *aggnode, Sort *sortnode,
- List *context, bool useprefix,
- List *ancestors, ExplainState *es);
static void show_group_keys(GroupState *gstate, List *ancestors,
ExplainState *es);
static void show_sort_group_keys(PlanState *planstate, const char *qlabel,
@@ -103,6 +95,7 @@ static void show_sortorder_options(StringInfo buf, Node *sortexpr,
static void show_tablesample(TableSampleClause *tsc, PlanState *planstate,
List *ancestors, ExplainState *es);
static void show_sort_info(SortState *sortstate, ExplainState *es);
+static void show_agg_info(AggState *aggstate, List *ancestors, ExplainState *es);
static void show_hash_info(HashState *hashstate, ExplainState *es);
static void show_tidbitmap_info(BitmapHeapScanState *planstate,
ExplainState *es);
@@ -1872,12 +1865,12 @@ ExplainNode(PlanState *planstate, List *ancestors,
planstate, es);
break;
case T_Agg:
- show_agg_keys(castNode(AggState, planstate), ancestors, es);
show_upper_qual(plan->qual, planstate->qual, "Filter", planstate,
ancestors, es);
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 1,
planstate, es);
+ show_agg_info((AggState *) planstate, ancestors, es);
break;
case T_Group:
show_group_keys(castNode(GroupState, planstate), ancestors, es);
@@ -2430,138 +2423,6 @@ show_merge_append_keys(MergeAppendState *mstate, List *ancestors,
ancestors, es);
}
-/*
- * Show the grouping keys for an Agg node.
- */
-static void
-show_agg_keys(AggState *astate, List *ancestors,
- ExplainState *es)
-{
- Agg *plan = (Agg *) astate->ss.ps.plan;
-
- if (plan->numCols > 0 || plan->groupingSets)
- {
- /* The key columns refer to the tlist of the child plan */
- ancestors = lcons(astate, ancestors);
-
- if (plan->groupingSets)
- show_grouping_sets(outerPlanState(astate), plan, ancestors, es);
- else
- show_sort_group_keys(outerPlanState(astate), "Group Key",
- plan->numCols, plan->grpColIdx,
- NULL, NULL, NULL,
- ancestors, es);
-
- ancestors = list_delete_first(ancestors);
- }
-}
-
-static void
-show_grouping_sets(PlanState *planstate, Agg *agg,
- List *ancestors, ExplainState *es)
-{
- List *context;
- bool useprefix;
- ListCell *lc;
-
- /* Set up deparsing context */
- context = set_deparse_context_planstate(es->deparse_cxt,
- (Node *) planstate,
- ancestors);
- useprefix = (list_length(es->rtable) > 1 || es->verbose);
-
- ExplainOpenGroup("Grouping Sets", "Grouping Sets", false, es);
-
- show_grouping_set_keys(planstate, agg, NULL,
- context, useprefix, ancestors, es);
-
- foreach(lc, agg->chain)
- {
- Agg *aggnode = lfirst(lc);
- Sort *sortnode = (Sort *) aggnode->plan.lefttree;
-
- show_grouping_set_keys(planstate, aggnode, sortnode,
- context, useprefix, ancestors, es);
- }
-
- ExplainCloseGroup("Grouping Sets", "Grouping Sets", false, es);
-}
-
-static void
-show_grouping_set_keys(PlanState *planstate,
- Agg *aggnode, Sort *sortnode,
- List *context, bool useprefix,
- List *ancestors, ExplainState *es)
-{
- Plan *plan = planstate->plan;
- char *exprstr;
- ListCell *lc;
- List *gsets = aggnode->groupingSets;
- AttrNumber *keycols = aggnode->grpColIdx;
- const char *keyname;
- const char *keysetname;
-
- if (aggnode->aggstrategy == AGG_HASHED || aggnode->aggstrategy == AGG_MIXED)
- {
- keyname = "Hash Key";
- keysetname = "Hash Keys";
- }
- else
- {
- keyname = "Group Key";
- keysetname = "Group Keys";
- }
-
- ExplainOpenGroup("Grouping Set", NULL, true, es);
-
- if (sortnode)
- {
- show_sort_group_keys(planstate, "Sort Key",
- sortnode->numCols, sortnode->sortColIdx,
- sortnode->sortOperators, sortnode->collations,
- sortnode->nullsFirst,
- ancestors, es);
- if (es->format == EXPLAIN_FORMAT_TEXT)
- es->indent++;
- }
-
- ExplainOpenGroup(keysetname, keysetname, false, es);
-
- foreach(lc, gsets)
- {
- List *result = NIL;
- ListCell *lc2;
-
- foreach(lc2, (List *) lfirst(lc))
- {
- Index i = lfirst_int(lc2);
- AttrNumber keyresno = keycols[i];
- TargetEntry *target = get_tle_by_resno(plan->targetlist,
- keyresno);
-
- if (!target)
- elog(ERROR, "no tlist entry for key %d", keyresno);
- /* Deparse the expression, showing any top-level cast */
- exprstr = deparse_expression((Node *) target->expr, context,
- useprefix, true);
-
- result = lappend(result, exprstr);
- }
-
- if (!result && es->format == EXPLAIN_FORMAT_TEXT)
- ExplainPropertyText(keyname, "()", es);
- else
- ExplainPropertyListNested(keyname, result, es);
- }
-
- ExplainCloseGroup(keysetname, keysetname, false, es);
-
- if (sortnode && es->format == EXPLAIN_FORMAT_TEXT)
- es->indent--;
-
- ExplainCloseGroup("Grouping Set", NULL, true, es);
-}
-
/*
* Show the grouping keys for a Group node.
*/
@@ -2845,6 +2706,383 @@ show_sort_info(SortState *sortstate, ExplainState *es)
}
}
+/*
+ * Generate an expression-like string describing the computations for a
+ * phase's transition / combiner function.
+ */
+static char *
+exprstr_for_agg_phase(AggState *aggstate, AggStatePerPhase perphase, List *ancestors, ExplainState *es)
+{
+ PlanState *planstate = &aggstate->ss.ps;
+ StringInfoData transbuf;
+ List *context;
+ bool useprefix;
+ bool isCombine = DO_AGGSPLIT_COMBINE(aggstate->aggsplit);
+ ListCell *lc;
+
+ initStringInfo(&transbuf);
+
+ /* Set up deparsing context */
+ context = set_deparse_context_planstate(es->deparse_cxt,
+ (Node *) planstate,
+ ancestors);
+ useprefix = list_length(es->rtable) > 1;
+
+ for (int transno = 0; transno < aggstate->numtrans; transno++)
+ {
+ AggStatePerTrans pertrans = &aggstate->pertrans[transno];
+ int count = 0;
+ bool first;
+
+ if (perphase->uses_sorting)
+ count += Max(perphase->numsets, 1);
+
+ if (perphase->uses_hashing)
+ count += aggstate->num_hashes;
+
+ if (transno != 0)
+ appendStringInfoString(&transbuf, ", ");
+
+ if (pertrans->aggref->aggfilter && !isCombine)
+ {
+ appendStringInfo(&transbuf, "FILTER (%s) && ",
+ deparse_expression((Node *) pertrans->aggref->aggfilter,
+ context, useprefix, false));
+ }
+
+ /*
+ * XXX: should we instead somehow encode this as separate elements in
+ * non-text mode?
+ */
+ /* simplify for text output */
+ if (count > 1 || es->format != EXPLAIN_FORMAT_TEXT)
+ appendStringInfo(&transbuf, "%d * ", count);
+
+ appendStringInfo(&transbuf, "%s(TRANS",
+ get_func_name(pertrans->transfn_oid));
+
+ if (isCombine && pertrans->deserialfn_oid)
+ {
+ first = true;
+ appendStringInfo(&transbuf, ", %s(",
+ get_func_name(pertrans->deserialfn_oid));
+ }
+ else
+ first = false;
+
+ foreach(lc, pertrans->aggref->args)
+ {
+ TargetEntry *tle = lfirst(lc);
+
+ if (!first)
+ appendStringInfoString(&transbuf, ", ");
+
+ first = false;
+ appendStringInfo(&transbuf, "%s",
+ deparse_expression((Node *) tle->expr,
+ context, useprefix, false));
+ }
+
+ if (isCombine && pertrans->deserialfn_oid)
+ appendStringInfoString(&transbuf, ")");
+ appendStringInfoString(&transbuf, ")");
+ }
+
+ return transbuf.data;
+}
+
+static void
+show_agg_group_info(AggState *aggstate, AttrNumber *keycols, int length,
+ ExprState *expr, const char *label,
+ List *context, List *ancestors, ExplainState *es)
+{
+ bool useprefix = (list_length(es->rtable) > 1 || es->verbose);
+ List *result = NIL;
+
+ for (int colno = 0; colno < length; colno++)
+ {
+ char *exprstr;
+ AttrNumber keyresno = keycols[colno];
+ TargetEntry *target = get_tle_by_resno(outerPlanState(aggstate)->plan->targetlist,
+ keyresno);
+
+ if (!target)
+ elog(ERROR, "no tlist entry for key %d", keyresno);
+ /* Deparse the expression, showing any top-level cast */
+ exprstr = deparse_expression((Node *) target->expr, context,
+ useprefix, true);
+
+ result = lappend(result, exprstr);
+ }
+
+ if (es->format == EXPLAIN_FORMAT_TEXT)
+ {
+ ListCell *lc;
+ bool first = true;
+
+ appendStringInfoSpaces(es->str, es->indent * 2);
+
+ if (result != NIL)
+ {
+ appendStringInfo(es->str, "%s: ", label);
+
+ foreach(lc, result)
+ {
+ if (!first)
+ appendStringInfoString(es->str, ", ");
+ appendStringInfoString(es->str, (const char *) lfirst(lc));
+ first = false;
+ }
+ }
+ else
+ appendStringInfo(es->str, "%s", label);
+
+ if (expr && es->jit_details)
+ {
+ appendStringInfoString(es->str, "; ");
+ show_jit_expr_details(expr, es);
+ }
+
+ appendStringInfoChar(es->str, '\n');
+ }
+ else
+ {
+ ExplainOpenGroup("Group", NULL, true, es);
+ ExplainPropertyText("Method", label, es);
+ ExplainPropertyList("Key", result, es);
+ ExplainCloseGroup("Group", NULL, true, es);
+ }
+
+}
+
+/*
+ * Show information about Agg nodes.
+ */
+static void
+show_agg_info(AggState *aggstate, List *ancestors, ExplainState *es)
+{
+ Agg *plan = (Agg *) aggstate->ss.ps.plan;
+
+ if (!plan->groupingSets &&
+ (!es->verbose && !es->jit_details && es->format == EXPLAIN_FORMAT_TEXT))
+ {
+ /* The key columns refer to the tlist of the child plan */
+ ancestors = lcons(aggstate, ancestors);
+ show_sort_group_keys(outerPlanState(aggstate), "Group Key",
+ plan->numCols, plan->grpColIdx,
+ NULL, NULL, NULL,
+ ancestors, es);
+ ancestors = list_delete_first(ancestors);
+
+ return;
+ }
+
+ ExplainOpenGroup("Phases", "Phases", false, es);
+
+ for (int phaseno = aggstate->numphases - 1; phaseno >= 0; phaseno--)
+ {
+ AggStatePerPhase perphase = &aggstate->phases[phaseno];
+ Sort *sortnode = perphase->sortnode;
+ char *exprstr;
+ bool has_zero_length = false;
+ List *context;
+ List *strategy = NIL;
+ char *plain_strategy;
+
+ if (!perphase->evaltrans)
+ continue;
+
+ for (int i = 0; i < perphase->numsets; i++)
+ {
+ if (perphase->gset_lengths[i] == 0)
+ has_zero_length = true;
+ }
+
+ switch (perphase->aggstrategy)
+ {
+ case AGG_PLAIN:
+ strategy = lappend(strategy, "All");
+
+ if (aggstate->aggstrategy == AGG_MIXED && phaseno == 1)
+ strategy = lappend(strategy, "Hash");
+ plain_strategy = "All Group";
+ break;
+ case AGG_SORTED:
+ if (!perphase->sortnode)
+ {
+ strategy = lappend(strategy, "Sorted Input");
+ plain_strategy = "Sorted Input Group";
+ }
+ else
+ {
+ strategy = lappend(strategy, "Sort");
+ plain_strategy = "Sort Group";
+ }
+
+ if (has_zero_length)
+ strategy = lappend(strategy, "All");
+
+ if (aggstate->aggstrategy == AGG_MIXED && phaseno == 1)
+ strategy = lappend(strategy, "Hash");
+
+ break;
+ case AGG_HASHED:
+ strategy = lappend(strategy, "Hash");
+ plain_strategy = "Hash Group";
+ break;
+ case AGG_MIXED:
+ if (has_zero_length)
+ strategy = lappend(strategy, "All");
+ strategy = lappend(strategy, "Hash");
+ plain_strategy = "???";
+ break;
+ }
+
+ exprstr = exprstr_for_agg_phase(aggstate, perphase, ancestors, es);
+
+ ExplainOpenGroup("Phase", NULL, true, es);
+
+ /* The key columns refer to the tlist of the child plan */
+ ancestors = lcons(aggstate, ancestors);
+ context = set_deparse_context_planstate(es->deparse_cxt,
+ (Node *) outerPlanState(aggstate),
+ ancestors);
+
+ if (es->format == EXPLAIN_FORMAT_TEXT)
+ {
+ ListCell *lc;
+ bool first = true;
+
+ /* output phase data */
+ appendStringInfoSpaces(es->str, es->indent * 2);
+ appendStringInfo(es->str, "Phase %d using strategy \"",
+ phaseno);
+
+ foreach(lc, strategy)
+ {
+ if (!first)
+ appendStringInfoString(es->str, " & ");
+ first = false;
+ appendStringInfoString(es->str, (const char *) lfirst(lc));
+ }
+ appendStringInfoString(es->str, "\":\n");
+ es->indent++;
+ }
+ else
+ {
+ ExplainPropertyInteger("Phase-Number", NULL, phaseno, es);
+ ExplainPropertyList("Strategy", strategy, es);
+ }
+
+ if (sortnode)
+ {
+ show_sort_group_keys(outerPlanState(aggstate), "Sort Key",
+ sortnode->numCols, sortnode->sortColIdx,
+ sortnode->sortOperators, sortnode->collations,
+ sortnode->nullsFirst,
+ ancestors, es);
+ }
+
+ if (es->format == EXPLAIN_FORMAT_TEXT)
+ {
+ if (aggstate->numtrans > 0)
+ {
+ appendStringInfoSpaces(es->str, es->indent * 2);
+ appendStringInfo(es->str, "Transition Function: %s",
+ exprstr);
+ if (es->jit_details)
+ {
+ appendStringInfoString(es->str, "; ");
+ show_jit_expr_details(perphase->evaltrans, es);
+ }
+ appendStringInfoString(es->str, "\n");
+ }
+ }
+ else
+ {
+ if (es->jit_details)
+ {
+ ExplainOpenGroup("Transition Function", "Transition Function", true, es);
+ ExplainPropertyText("Expr", exprstr, es);
+ if (es->jit_details && aggstate->numtrans > 0)
+ show_jit_expr_details(perphase->evaltrans, es);
+ ExplainCloseGroup("Transition Function", "Transition Function", true, es);
+ }
+ else
+ ExplainPropertyText("Transition Function", exprstr, es);
+ }
+
+ ExplainOpenGroup("Groups", "Groups", false, es);
+
+ /* output data about each group */
+
+ if (perphase->uses_sorting)
+ {
+ if (perphase->numsets == 0)
+ {
+ int length = perphase->aggnode->numCols;
+ ExprState *expr = NULL;
+
+ if (length > 0)
+ expr = perphase->eqfunctions[perphase->aggnode->numCols - 1];
+
+ show_agg_group_info(aggstate, perphase->aggnode->grpColIdx,
+ length, expr, plain_strategy, context,
+ ancestors, es);
+ }
+
+ for (int sortno = 0; sortno < perphase->numsets; sortno++)
+ {
+ int length = perphase->gset_lengths[sortno];
+ ExprState *expr = NULL;
+ char *sort_strat;
+
+ if (length == 0)
+ sort_strat = "All Group";
+ else if (sortnode)
+ {
+ sort_strat = "Sorted Group";
+ expr = perphase->eqfunctions[length - 1];
+ }
+ else
+ {
+ sort_strat = "Sorted Input Group";
+ expr = perphase->eqfunctions[length - 1];
+ }
+
+ show_agg_group_info(aggstate, perphase->aggnode->grpColIdx, length,
+ expr, sort_strat, context, ancestors, es);
+ }
+ }
+
+ if (perphase->uses_hashing)
+ {
+ for (int hashno = 0; hashno < aggstate->num_hashes; hashno++)
+ {
+ AggStatePerHash perhash = &aggstate->perhash[hashno];
+
+ show_agg_group_info(aggstate, perhash->hashGrpColIdxInput,
+ perhash->numCols,
+ perhash->hashtable->tab_eq_func,
+ "Hash Group", context, ancestors, es);
+ }
+ }
+
+ ancestors = list_delete_first(ancestors);
+
+ ExplainCloseGroup("Groups", "Groups", false, es);
+
+ if (es->format == EXPLAIN_FORMAT_TEXT)
+ es->indent--;
+
+ /* TODO: should really show memory usage here */
+
+ ExplainCloseGroup("Phase", NULL, true, es);
+ }
+
+ ExplainCloseGroup("Phases", "Phases", false, es);
+}
+
/*
* Show information on hash buckets/batches.
*/
diff --git a/src/backend/executor/execExpr.c b/src/backend/executor/execExpr.c
index 9005759cd06..e137be14979 100644
--- a/src/backend/executor/execExpr.c
+++ b/src/backend/executor/execExpr.c
@@ -2933,8 +2933,7 @@ ExecInitCoerceToDomain(ExprEvalStep *scratch, CoerceToDomain *ctest,
* transition for each of the concurrently computed grouping sets.
*/
ExprState *
-ExecBuildAggTrans(AggState *aggstate, AggStatePerPhase phase,
- bool doSort, bool doHash)
+ExecBuildAggTrans(AggState *aggstate, AggStatePerPhase phase)
{
ExprState *state = makeNode(ExprState);
PlanState *parent = &aggstate->ss.ps;
@@ -3160,7 +3159,7 @@ ExecBuildAggTrans(AggState *aggstate, AggStatePerPhase phase,
* applicable.
*/
setoff = 0;
- if (doSort)
+ if (phase->uses_sorting)
{
int processGroupingSets = Max(phase->numsets, 1);
@@ -3172,7 +3171,7 @@ ExecBuildAggTrans(AggState *aggstate, AggStatePerPhase phase,
}
}
- if (doHash)
+ if (phase->uses_hashing)
{
int numHashes = aggstate->num_hashes;
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 58c376aeb74..d447009e002 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -2904,8 +2904,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
else
Assert(false);
- phase->evaltrans = ExecBuildAggTrans(aggstate, phase, dosort, dohash);
+ phase->uses_hashing = dohash;
+ phase->uses_sorting = dosort;
+ phase->evaltrans = ExecBuildAggTrans(aggstate, phase);
}
return aggstate;
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 6298c7c8cad..6e2e7e14bac 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -249,8 +249,7 @@ extern ExprState *ExecInitExprWithParams(Expr *node, ParamListInfo ext_params);
extern ExprState *ExecInitQual(List *qual, PlanState *parent);
extern ExprState *ExecInitCheck(List *qual, PlanState *parent);
extern List *ExecInitExprList(List *nodes, PlanState *parent);
-extern ExprState *ExecBuildAggTrans(AggState *aggstate, struct AggStatePerPhaseData *phase,
- bool doSort, bool doHash);
+extern ExprState *ExecBuildAggTrans(AggState *aggstate, struct AggStatePerPhaseData *phase);
extern ExprState *ExecBuildGroupingEqual(TupleDesc ldesc, TupleDesc rdesc,
const TupleTableSlotOps *lops, const TupleTableSlotOps *rops,
int numCols,
diff --git a/src/include/executor/nodeAgg.h b/src/include/executor/nodeAgg.h
index 68c9e5f5400..4f3e1377cdf 100644
--- a/src/include/executor/nodeAgg.h
+++ b/src/include/executor/nodeAgg.h
@@ -280,6 +280,9 @@ typedef struct AggStatePerPhaseData
Sort *sortnode; /* Sort node for input ordering for phase */
ExprState *evaltrans; /* evaluation of transition functions */
+
+ bool uses_hashing; /* phase uses hashing */
+ bool uses_sorting; /* phase uses sorting */
} AggStatePerPhaseData;
/*
diff --git a/src/test/regress/expected/aggregates.out b/src/test/regress/expected/aggregates.out
index 683bcaedf5f..b3732b68d77 100644
--- a/src/test/regress/expected/aggregates.out
+++ b/src/test/regress/expected/aggregates.out
@@ -504,8 +504,8 @@ from generate_series(1, 3) s1,
lateral (select s2, sum(s1 + s2) sm
from generate_series(1, 3) s2 group by s2) ss
order by 1, 2;
- QUERY PLAN
-------------------------------------------------------------------
+ QUERY PLAN
+-----------------------------------------------------------------------
Sort
Output: s1.s1, s2.s2, (sum((s1.s1 + s2.s2)))
Sort Key: s1.s1, s2.s2
@@ -516,11 +516,13 @@ order by 1, 2;
Function Call: generate_series(1, 3)
-> HashAggregate
Project: s2.s2, sum((s1.s1 + s2.s2))
- Group Key: s2.s2
+ Phase 0 using strategy "Hash":
+ Transition Function: int4_sum(TRANS, (s1.s1 + s2.s2))
+ Hash Group: s2.s2
-> Function Scan on pg_catalog.generate_series s2
Output: s2.s2
Function Call: generate_series(1, 3)
-(14 rows)
+(16 rows)
select s1, s2, sm
from generate_series(1, 3) s1,
@@ -544,8 +546,8 @@ explain (verbose, costs off)
select array(select sum(x+y) s
from generate_series(1,3) y group by y order by s)
from generate_series(1,3) x;
- QUERY PLAN
--------------------------------------------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------
Function Scan on pg_catalog.generate_series x
Project: (SubPlan 1)
Function Call: generate_series(1, 3)
@@ -555,11 +557,13 @@ select array(select sum(x+y) s
Sort Key: (sum((x.x + y.y)))
-> HashAggregate
Project: sum((x.x + y.y)), y.y
- Group Key: y.y
+ Phase 0 using strategy "Hash":
+ Transition Function: int4_sum(TRANS, (x.x + y.y))
+ Hash Group: y.y
-> Function Scan on pg_catalog.generate_series y
Output: y.y
Function Call: generate_series(1, 3)
-(13 rows)
+(15 rows)
select array(select sum(x+y) s
from generate_series(1,3) y group by y order by s)
@@ -2250,18 +2254,24 @@ SET enable_indexonlyscan = off;
-- regr_count(float8, float8) covers int8inc_float8_float8 and aggregates with > 1 arg
EXPLAIN (COSTS OFF, VERBOSE)
SELECT variance(unique1::int4), sum(unique1::int8), regr_count(unique1::float8, unique1::float8) FROM tenk1;
- QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Finalize Aggregate
Project: variance(unique1), sum((unique1)::bigint), regr_count((unique1)::double precision, (unique1)::double precision)
+ Phase 1 using strategy "All":
+ Transition Function: numeric_poly_combine(TRANS, numeric_poly_deserialize((PARTIAL variance(unique1)))), int8_avg_combine(TRANS, int8_avg_deserialize((PARTIAL sum((unique1)::bigint)))), int8pl(TRANS, (PARTIAL regr_count((unique1)::double precision, (unique1)::double precision)))
+ All Group
-> Gather
Output: (PARTIAL variance(unique1)), (PARTIAL sum((unique1)::bigint)), (PARTIAL regr_count((unique1)::double precision, (unique1)::double precision))
Workers Planned: 4
-> Partial Aggregate
Project: PARTIAL variance(unique1), PARTIAL sum((unique1)::bigint), PARTIAL regr_count((unique1)::double precision, (unique1)::double precision)
+ Phase 1 using strategy "All":
+ Transition Function: int4_accum(TRANS, unique1), int8_avg_accum(TRANS, (unique1)::bigint), int8inc_float8_float8(TRANS, (unique1)::double precision, (unique1)::double precision)
+ All Group
-> Parallel Seq Scan on public.tenk1
Output: unique1, unique2, two, four, ten, twenty, hundred, thousand, twothousand, fivethous, tenthous, odd, even, stringu1, stringu2, string4
-(9 rows)
+(15 rows)
SELECT variance(unique1::int4), sum(unique1::int8), regr_count(unique1::float8, unique1::float8) FROM tenk1;
variance | sum | regr_count
diff --git a/src/test/regress/expected/groupingsets.out b/src/test/regress/expected/groupingsets.out
index c1f802c88a7..7bb052c568b 100644
--- a/src/test/regress/expected/groupingsets.out
+++ b/src/test/regress/expected/groupingsets.out
@@ -369,12 +369,13 @@ select g as alias1, g as alias2
QUERY PLAN
------------------------------------------------
GroupAggregate
- Group Key: g, g
- Group Key: g
+ Phase 1 using strategy "Sorted Input":
+ Sorted Input Group: g, g
+ Sorted Input Group: g
-> Sort
Sort Key: g
-> Function Scan on generate_series g
-(6 rows)
+(7 rows)
select g as alias1, g as alias2
from generate_series(1,3) g
@@ -640,15 +641,16 @@ select a, b, sum(v.x)
-- Test reordering of grouping sets
explain (costs off)
select * from gstest1 group by grouping sets((a,b,v),(v)) order by v,b,a;
- QUERY PLAN
-------------------------------------------------------------------------------
+ QUERY PLAN
+------------------------------------------------------------------------------------
GroupAggregate
- Group Key: "*VALUES*".column3, "*VALUES*".column2, "*VALUES*".column1
- Group Key: "*VALUES*".column3
+ Phase 1 using strategy "Sorted Input":
+ Sorted Input Group: "*VALUES*".column3, "*VALUES*".column2, "*VALUES*".column1
+ Sorted Input Group: "*VALUES*".column3
-> Sort
Sort Key: "*VALUES*".column3, "*VALUES*".column2, "*VALUES*".column1
-> Values Scan on "*VALUES*"
-(6 rows)
+(7 rows)
-- Agg level check. This query should error out.
select (select grouping(a,b) from gstest2) from gstest2 group by a,b;
@@ -720,16 +722,18 @@ select a,count(*) from gstest2 group by rollup(a) having a is distinct from 1 or
explain (costs off)
select a,count(*) from gstest2 group by rollup(a) having a is distinct from 1 order by a;
- QUERY PLAN
-----------------------------------
+ QUERY PLAN
+------------------------------------------------
GroupAggregate
- Group Key: a
- Group Key: ()
Filter: (a IS DISTINCT FROM 1)
+ Phase 1 using strategy "Sorted Input & All":
+ Transition Function: 2 * int8inc(TRANS)
+ Sorted Input Group: a
+ All Group
-> Sort
Sort Key: a
-> Seq Scan on gstest2
-(7 rows)
+(9 rows)
select v.c, (select count(*) from gstest2 group by () having v.c)
from (values (false),(true)) v(c) order by v.c;
@@ -749,12 +753,14 @@ explain (costs off)
-> Values Scan on "*VALUES*"
SubPlan 1
-> Aggregate
- Group Key: ()
Filter: "*VALUES*".column1
+ Phase 1 using strategy "All":
+ Transition Function: int8inc(TRANS)
+ All Group
-> Result
One-Time Filter: "*VALUES*".column1
-> Seq Scan on gstest2
-(10 rows)
+(12 rows)
-- HAVING with GROUPING queries
select ten, grouping(ten) from onek
@@ -968,15 +974,17 @@ select a, b, grouping(a,b), sum(v), count(*), max(v)
explain (costs off) select a, b, grouping(a,b), sum(v), count(*), max(v)
from gstest1 group by grouping sets ((a),(b)) order by 3,1,2;
- QUERY PLAN
---------------------------------------------------------------------------------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------
Sort
Sort Key: (GROUPING("*VALUES*".column1, "*VALUES*".column2)), "*VALUES*".column1, "*VALUES*".column2
-> HashAggregate
- Hash Key: "*VALUES*".column1
- Hash Key: "*VALUES*".column2
+ Phase 0 using strategy "Hash":
+ Transition Function: 2 * int4_sum(TRANS, "*VALUES*".column3), 2 * int8inc(TRANS), 2 * int4larger(TRANS, "*VALUES*".column3)
+ Hash Group: "*VALUES*".column1
+ Hash Group: "*VALUES*".column2
-> Values Scan on "*VALUES*"
-(6 rows)
+(8 rows)
select a, b, grouping(a,b), sum(v), count(*), max(v)
from gstest1 group by cube(a,b) order by 3,1,2;
@@ -1002,34 +1010,40 @@ select a, b, grouping(a,b), sum(v), count(*), max(v)
explain (costs off) select a, b, grouping(a,b), sum(v), count(*), max(v)
from gstest1 group by cube(a,b) order by 3,1,2;
- QUERY PLAN
---------------------------------------------------------------------------------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------
Sort
Sort Key: (GROUPING("*VALUES*".column1, "*VALUES*".column2)), "*VALUES*".column1, "*VALUES*".column2
-> MixedAggregate
- Hash Key: "*VALUES*".column1, "*VALUES*".column2
- Hash Key: "*VALUES*".column1
- Hash Key: "*VALUES*".column2
- Group Key: ()
+ Phase 1 using strategy "All & Hash":
+ Transition Function: 4 * int4_sum(TRANS, "*VALUES*".column3), 4 * int8inc(TRANS), 4 * int4larger(TRANS, "*VALUES*".column3)
+ All Group
+ Hash Group: "*VALUES*".column1, "*VALUES*".column2
+ Hash Group: "*VALUES*".column1
+ Hash Group: "*VALUES*".column2
-> Values Scan on "*VALUES*"
-(8 rows)
+(10 rows)
-- shouldn't try and hash
explain (costs off)
select a, b, grouping(a,b), array_agg(v order by v)
from gstest1 group by cube(a,b);
- QUERY PLAN
-----------------------------------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------------
GroupAggregate
- Group Key: "*VALUES*".column1, "*VALUES*".column2
- Group Key: "*VALUES*".column1
- Group Key: ()
- Sort Key: "*VALUES*".column2
- Group Key: "*VALUES*".column2
+ Phase 2 using strategy "Sort":
+ Sort Key: "*VALUES*".column2
+ Transition Function: array_agg_transfn(TRANS, "*VALUES*".column3)
+ Sorted Group: "*VALUES*".column2
+ Phase 1 using strategy "Sorted Input & All":
+ Transition Function: 3 * array_agg_transfn(TRANS, "*VALUES*".column3)
+ Sorted Input Group: "*VALUES*".column1, "*VALUES*".column2
+ Sorted Input Group: "*VALUES*".column1
+ All Group
-> Sort
Sort Key: "*VALUES*".column1, "*VALUES*".column2
-> Values Scan on "*VALUES*"
-(9 rows)
+(13 rows)
-- unsortable cases
select unsortable_col, count(*)
@@ -1065,17 +1079,19 @@ explain (costs off)
count(*), sum(v)
from gstest4 group by grouping sets ((unhashable_col),(unsortable_col))
order by 3,5;
- QUERY PLAN
-------------------------------------------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------------
Sort
Sort Key: (GROUPING(unhashable_col, unsortable_col)), (sum(v))
-> MixedAggregate
- Hash Key: unsortable_col
- Group Key: unhashable_col
+ Phase 1 using strategy "Sorted Input & Hash":
+ Transition Function: 2 * int8inc(TRANS), 2 * int4_sum(TRANS, v)
+ Sorted Input Group: unhashable_col
+ Hash Group: unsortable_col
-> Sort
Sort Key: unhashable_col
-> Seq Scan on gstest4
-(8 rows)
+(10 rows)
select unhashable_col, unsortable_col,
grouping(unhashable_col, unsortable_col),
@@ -1108,17 +1124,19 @@ explain (costs off)
count(*), sum(v)
from gstest4 group by grouping sets ((v,unhashable_col),(v,unsortable_col))
order by 3,5;
- QUERY PLAN
-------------------------------------------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------------
Sort
Sort Key: (GROUPING(unhashable_col, unsortable_col)), (sum(v))
-> MixedAggregate
- Hash Key: v, unsortable_col
- Group Key: v, unhashable_col
+ Phase 1 using strategy "Sorted Input & Hash":
+ Transition Function: 2 * int8inc(TRANS), 2 * int4_sum(TRANS, v)
+ Sorted Input Group: v, unhashable_col
+ Hash Group: v, unsortable_col
-> Sort
Sort Key: v, unhashable_col
-> Seq Scan on gstest4
-(8 rows)
+(10 rows)
-- empty input: first is 0 rows, second 1, third 3 etc.
select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a);
@@ -1128,13 +1146,15 @@ select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a)
explain (costs off)
select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),a);
- QUERY PLAN
---------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------
HashAggregate
- Hash Key: a, b
- Hash Key: a
+ Phase 0 using strategy "Hash":
+ Transition Function: 2 * int4_sum(TRANS, v), 2 * int8inc(TRANS)
+ Hash Group: a, b
+ Hash Group: a
-> Seq Scan on gstest_empty
-(4 rows)
+(6 rows)
select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),());
a | b | sum | count
@@ -1152,15 +1172,17 @@ select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),()
explain (costs off)
select a, b, sum(v), count(*) from gstest_empty group by grouping sets ((a,b),(),(),());
- QUERY PLAN
---------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------
MixedAggregate
- Hash Key: a, b
- Group Key: ()
- Group Key: ()
- Group Key: ()
+ Phase 1 using strategy "All & Hash":
+ Transition Function: 4 * int4_sum(TRANS, v), 4 * int8inc(TRANS)
+ All Group
+ All Group
+ All Group
+ Hash Group: a, b
-> Seq Scan on gstest_empty
-(6 rows)
+(8 rows)
select sum(v), count(*) from gstest_empty group by grouping sets ((),(),());
sum | count
@@ -1172,14 +1194,16 @@ select sum(v), count(*) from gstest_empty group by grouping sets ((),(),());
explain (costs off)
select sum(v), count(*) from gstest_empty group by grouping sets ((),(),());
- QUERY PLAN
---------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------
Aggregate
- Group Key: ()
- Group Key: ()
- Group Key: ()
+ Phase 1 using strategy "All":
+ Transition Function: 3 * int4_sum(TRANS, v), 3 * int8inc(TRANS)
+ All Group
+ All Group
+ All Group
-> Seq Scan on gstest_empty
-(5 rows)
+(7 rows)
-- check that functionally dependent cols are not nulled
select a, d, grouping(a,b,c)
@@ -1197,13 +1221,14 @@ explain (costs off)
select a, d, grouping(a,b,c)
from gstest3
group by grouping sets ((a,b), (a,c));
- QUERY PLAN
----------------------------
+ QUERY PLAN
+----------------------------------
HashAggregate
- Hash Key: a, b
- Hash Key: a, c
+ Phase 0 using strategy "Hash":
+ Hash Group: a, b
+ Hash Group: a, c
-> Seq Scan on gstest3
-(4 rows)
+(5 rows)
-- simple rescan tests
select a, b, sum(v.x)
@@ -1224,17 +1249,19 @@ explain (costs off)
from (values (1),(2)) v(x), gstest_data(v.x)
group by grouping sets (a,b)
order by 3, 1, 2;
- QUERY PLAN
----------------------------------------------------------------------
+ QUERY PLAN
+------------------------------------------------------------------------
Sort
Sort Key: (sum("*VALUES*".column1)), gstest_data.a, gstest_data.b
-> HashAggregate
- Hash Key: gstest_data.a
- Hash Key: gstest_data.b
+ Phase 0 using strategy "Hash":
+ Transition Function: 2 * int4_sum(TRANS, "*VALUES*".column1)
+ Hash Group: gstest_data.a
+ Hash Group: gstest_data.b
-> Nested Loop
-> Values Scan on "*VALUES*"
-> Function Scan on gstest_data
-(8 rows)
+(10 rows)
select *
from (values (1),(2)) v(x),
@@ -1280,16 +1307,18 @@ select a, b, grouping(a,b), sum(v), count(*), max(v)
explain (costs off)
select a, b, grouping(a,b), sum(v), count(*), max(v)
from gstest1 group by grouping sets ((a,b),(a+1,b+1),(a+2,b+2)) order by 3,6;
- QUERY PLAN
--------------------------------------------------------------------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------
Sort
Sort Key: (GROUPING("*VALUES*".column1, "*VALUES*".column2)), (max("*VALUES*".column3))
-> HashAggregate
- Hash Key: "*VALUES*".column1, "*VALUES*".column2
- Hash Key: ("*VALUES*".column1 + 1), ("*VALUES*".column2 + 1)
- Hash Key: ("*VALUES*".column1 + 2), ("*VALUES*".column2 + 2)
+ Phase 0 using strategy "Hash":
+ Transition Function: 3 * int4_sum(TRANS, "*VALUES*".column3), 3 * int8inc(TRANS), 3 * int4larger(TRANS, "*VALUES*".column3)
+ Hash Group: "*VALUES*".column1, "*VALUES*".column2
+ Hash Group: ("*VALUES*".column1 + 1), ("*VALUES*".column2 + 1)
+ Hash Group: ("*VALUES*".column1 + 2), ("*VALUES*".column2 + 2)
-> Values Scan on "*VALUES*"
-(7 rows)
+(9 rows)
select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
from gstest2 group by cube (a,b) order by rsum, a, b;
@@ -1308,20 +1337,22 @@ select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
explain (costs off)
select a, b, sum(c), sum(sum(c)) over (order by a,b) as rsum
from gstest2 group by cube (a,b) order by rsum, a, b;
- QUERY PLAN
----------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------
Sort
Sort Key: (sum((sum(c))) OVER (?)), a, b
-> WindowAgg
-> Sort
Sort Key: a, b
-> MixedAggregate
- Hash Key: a, b
- Hash Key: a
- Hash Key: b
- Group Key: ()
+ Phase 1 using strategy "All & Hash":
+ Transition Function: 4 * int4_sum(TRANS, c)
+ All Group
+ Hash Group: a, b
+ Hash Group: a
+ Hash Group: b
-> Seq Scan on gstest2
-(11 rows)
+(13 rows)
select a, b, sum(v.x)
from (values (1),(2)) v(x), gstest_data(v.x)
@@ -1346,19 +1377,21 @@ explain (costs off)
select a, b, sum(v.x)
from (values (1),(2)) v(x), gstest_data(v.x)
group by cube (a,b) order by a,b;
- QUERY PLAN
-------------------------------------------------
+ QUERY PLAN
+------------------------------------------------------------------------
Sort
Sort Key: gstest_data.a, gstest_data.b
-> MixedAggregate
- Hash Key: gstest_data.a, gstest_data.b
- Hash Key: gstest_data.a
- Hash Key: gstest_data.b
- Group Key: ()
+ Phase 1 using strategy "All & Hash":
+ Transition Function: 4 * int4_sum(TRANS, "*VALUES*".column1)
+ All Group
+ Hash Group: gstest_data.a, gstest_data.b
+ Hash Group: gstest_data.a
+ Hash Group: gstest_data.b
-> Nested Loop
-> Values Scan on "*VALUES*"
-> Function Scan on gstest_data
-(10 rows)
+(12 rows)
-- Verify that we correctly handle the child node returning a
-- non-minimal slot, which happens if the input is pre-sorted,
@@ -1366,19 +1399,23 @@ explain (costs off)
BEGIN;
SET LOCAL enable_hashagg = false;
EXPLAIN (COSTS OFF) SELECT a, b, count(*), max(a), max(b) FROM gstest3 GROUP BY GROUPING SETS(a, b,()) ORDER BY a, b;
- QUERY PLAN
----------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------------------------------------------
Sort
Sort Key: a, b
-> GroupAggregate
- Group Key: a
- Group Key: ()
- Sort Key: b
- Group Key: b
+ Phase 2 using strategy "Sort":
+ Sort Key: b
+ Transition Function: int8inc(TRANS), int4larger(TRANS, a), int4larger(TRANS, b)
+ Sorted Group: b
+ Phase 1 using strategy "Sorted Input & All":
+ Transition Function: 2 * int8inc(TRANS), 2 * int4larger(TRANS, a), 2 * int4larger(TRANS, b)
+ Sorted Input Group: a
+ All Group
-> Sort
Sort Key: a
-> Seq Scan on gstest3
-(10 rows)
+(14 rows)
SELECT a, b, count(*), max(a), max(b) FROM gstest3 GROUP BY GROUPING SETS(a, b,()) ORDER BY a, b;
a | b | count | max | max
@@ -1392,17 +1429,21 @@ SELECT a, b, count(*), max(a), max(b) FROM gstest3 GROUP BY GROUPING SETS(a, b,(
SET LOCAL enable_seqscan = false;
EXPLAIN (COSTS OFF) SELECT a, b, count(*), max(a), max(b) FROM gstest3 GROUP BY GROUPING SETS(a, b,()) ORDER BY a, b;
- QUERY PLAN
-------------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------------------------------------------
Sort
Sort Key: a, b
-> GroupAggregate
- Group Key: a
- Group Key: ()
- Sort Key: b
- Group Key: b
+ Phase 2 using strategy "Sort":
+ Sort Key: b
+ Transition Function: int8inc(TRANS), int4larger(TRANS, a), int4larger(TRANS, b)
+ Sorted Group: b
+ Phase 1 using strategy "Sorted Input & All":
+ Transition Function: 2 * int8inc(TRANS), 2 * int4larger(TRANS, a), 2 * int4larger(TRANS, b)
+ Sorted Input Group: a
+ All Group
-> Index Scan using gstest3_pkey on gstest3
-(8 rows)
+(12 rows)
SELECT a, b, count(*), max(a), max(b) FROM gstest3 GROUP BY GROUPING SETS(a, b,()) ORDER BY a, b;
a | b | count | max | max
@@ -1549,22 +1590,28 @@ explain (costs off)
count(hundred), count(thousand), count(twothousand),
count(*)
from tenk1 group by grouping sets (unique1,twothousand,thousand,hundred,ten,four,two);
- QUERY PLAN
--------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
MixedAggregate
- Hash Key: two
- Hash Key: four
- Hash Key: ten
- Hash Key: hundred
- Group Key: unique1
- Sort Key: twothousand
- Group Key: twothousand
- Sort Key: thousand
- Group Key: thousand
+ Phase 3 using strategy "Sort":
+ Sort Key: thousand
+ Transition Function: int8inc_any(TRANS, two), int8inc_any(TRANS, four), int8inc_any(TRANS, ten), int8inc_any(TRANS, hundred), int8inc_any(TRANS, thousand), int8inc_any(TRANS, twothousand), int8inc(TRANS)
+ Sorted Group: thousand
+ Phase 2 using strategy "Sort":
+ Sort Key: twothousand
+ Transition Function: int8inc_any(TRANS, two), int8inc_any(TRANS, four), int8inc_any(TRANS, ten), int8inc_any(TRANS, hundred), int8inc_any(TRANS, thousand), int8inc_any(TRANS, twothousand), int8inc(TRANS)
+ Sorted Group: twothousand
+ Phase 1 using strategy "Sorted Input & Hash":
+ Transition Function: 5 * int8inc_any(TRANS, two), 5 * int8inc_any(TRANS, four), 5 * int8inc_any(TRANS, ten), 5 * int8inc_any(TRANS, hundred), 5 * int8inc_any(TRANS, thousand), 5 * int8inc_any(TRANS, twothousand), 5 * int8inc(TRANS)
+ Sorted Input Group: unique1
+ Hash Group: two
+ Hash Group: four
+ Hash Group: ten
+ Hash Group: hundred
-> Sort
Sort Key: unique1
-> Seq Scan on tenk1
-(13 rows)
+(19 rows)
explain (costs off)
select unique1,
@@ -1572,18 +1619,20 @@ explain (costs off)
count(hundred), count(thousand), count(twothousand),
count(*)
from tenk1 group by grouping sets (unique1,hundred,ten,four,two);
- QUERY PLAN
--------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
MixedAggregate
- Hash Key: two
- Hash Key: four
- Hash Key: ten
- Hash Key: hundred
- Group Key: unique1
+ Phase 1 using strategy "Sorted Input & Hash":
+ Transition Function: 5 * int8inc_any(TRANS, two), 5 * int8inc_any(TRANS, four), 5 * int8inc_any(TRANS, ten), 5 * int8inc_any(TRANS, hundred), 5 * int8inc_any(TRANS, thousand), 5 * int8inc_any(TRANS, twothousand), 5 * int8inc(TRANS)
+ Sorted Input Group: unique1
+ Hash Group: two
+ Hash Group: four
+ Hash Group: ten
+ Hash Group: hundred
-> Sort
Sort Key: unique1
-> Seq Scan on tenk1
-(9 rows)
+(11 rows)
set work_mem = '384kB';
explain (costs off)
@@ -1592,21 +1641,25 @@ explain (costs off)
count(hundred), count(thousand), count(twothousand),
count(*)
from tenk1 group by grouping sets (unique1,twothousand,thousand,hundred,ten,four,two);
- QUERY PLAN
--------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
MixedAggregate
- Hash Key: two
- Hash Key: four
- Hash Key: ten
- Hash Key: hundred
- Hash Key: thousand
- Group Key: unique1
- Sort Key: twothousand
- Group Key: twothousand
+ Phase 2 using strategy "Sort":
+ Sort Key: twothousand
+ Transition Function: int8inc_any(TRANS, two), int8inc_any(TRANS, four), int8inc_any(TRANS, ten), int8inc_any(TRANS, hundred), int8inc_any(TRANS, thousand), int8inc_any(TRANS, twothousand), int8inc(TRANS)
+ Sorted Group: twothousand
+ Phase 1 using strategy "Sorted Input & Hash":
+ Transition Function: 6 * int8inc_any(TRANS, two), 6 * int8inc_any(TRANS, four), 6 * int8inc_any(TRANS, ten), 6 * int8inc_any(TRANS, hundred), 6 * int8inc_any(TRANS, thousand), 6 * int8inc_any(TRANS, twothousand), 6 * int8inc(TRANS)
+ Sorted Input Group: unique1
+ Hash Group: two
+ Hash Group: four
+ Hash Group: ten
+ Hash Group: hundred
+ Hash Group: thousand
-> Sort
Sort Key: unique1
-> Seq Scan on tenk1
-(12 rows)
+(16 rows)
-- check collation-sensitive matching between grouping expressions
-- (similar to a check for aggregates, but there are additional code
diff --git a/src/test/regress/expected/inherit.out b/src/test/regress/expected/inherit.out
index 4b8351839a8..48d16bcee55 100644
--- a/src/test/regress/expected/inherit.out
+++ b/src/test/regress/expected/inherit.out
@@ -1435,10 +1435,13 @@ select * from matest0 order by 1-id;
(6 rows)
explain (verbose, costs off) select min(1-id) from matest0;
- QUERY PLAN
-----------------------------------------
+ QUERY PLAN
+---------------------------------------------------------------
Aggregate
Project: min((1 - matest0.id))
+ Phase 1 using strategy "All":
+ Transition Function: int4smaller(TRANS, (1 - matest0.id))
+ All Group
-> Append
-> Seq Scan on public.matest0
Project: matest0.id
@@ -1448,7 +1451,7 @@ explain (verbose, costs off) select min(1-id) from matest0;
Project: matest2.id
-> Seq Scan on public.matest3
Project: matest3.id
-(11 rows)
+(14 rows)
select min(1-id) from matest0;
min
diff --git a/src/test/regress/expected/join.out b/src/test/regress/expected/join.out
index 7f319a79938..1ddc4423888 100644
--- a/src/test/regress/expected/join.out
+++ b/src/test/regress/expected/join.out
@@ -6172,7 +6172,8 @@ where exists (select 1 from tenk1 t3
Hash Cond: (t3.thousand = t1.unique1)
-> HashAggregate
Project: t3.thousand, t3.tenthous
- Group Key: t3.thousand, t3.tenthous
+ Phase 0 using strategy "Hash":
+ Hash Group: t3.thousand, t3.tenthous
-> Index Only Scan using tenk1_thous_tenthous on public.tenk1 t3
Output: t3.thousand, t3.tenthous
-> Hash
@@ -6183,7 +6184,7 @@ where exists (select 1 from tenk1 t3
-> Index Only Scan using tenk1_hundred on public.tenk1 t2
Output: t2.hundred
Index Cond: (t2.hundred = t3.tenthous)
-(18 rows)
+(19 rows)
-- ... unless it actually is unique
create table j3 as select unique1, tenthous from onek;
diff --git a/src/test/regress/expected/limit.out b/src/test/regress/expected/limit.out
index 5b247e74b77..f9124feb866 100644
--- a/src/test/regress/expected/limit.out
+++ b/src/test/regress/expected/limit.out
@@ -489,10 +489,12 @@ select sum(tenthous) as s1, sum(tenthous) + random()*0 as s2
Output: (sum(tenthous)), (((sum(tenthous))::double precision + (random() * '0'::double precision))), thousand
-> GroupAggregate
Project: sum(tenthous), ((sum(tenthous))::double precision + (random() * '0'::double precision)), thousand
- Group Key: tenk1.thousand
+ Phase 1 using strategy "Sorted Input":
+ Transition Function: int4_sum(TRANS, tenthous)
+ Sorted Input Group: tenk1.thousand
-> Index Only Scan using tenk1_thous_tenthous on public.tenk1
Output: thousand, tenthous
-(7 rows)
+(9 rows)
select sum(tenthous) as s1, sum(tenthous) + random()*0 as s2
from tenk1 group by thousand order by thousand limit 3;
diff --git a/src/test/regress/expected/partition_aggregate.out b/src/test/regress/expected/partition_aggregate.out
index 10349ec29c4..ca2c92a406a 100644
--- a/src/test/regress/expected/partition_aggregate.out
+++ b/src/test/regress/expected/partition_aggregate.out
@@ -26,16 +26,16 @@ SELECT c, sum(a), avg(b), count(*), min(a), max(b) FROM pagg_tab GROUP BY c HAVI
Sort Key: pagg_tab_p1.c, (sum(pagg_tab_p1.a)), (avg(pagg_tab_p1.b))
-> Append
-> HashAggregate
- Group Key: pagg_tab_p1.c
Filter: (avg(pagg_tab_p1.d) < '15'::numeric)
+ Group Key: pagg_tab_p1.c
-> Seq Scan on pagg_tab_p1
-> HashAggregate
- Group Key: pagg_tab_p2.c
Filter: (avg(pagg_tab_p2.d) < '15'::numeric)
+ Group Key: pagg_tab_p2.c
-> Seq Scan on pagg_tab_p2
-> HashAggregate
- Group Key: pagg_tab_p3.c
Filter: (avg(pagg_tab_p3.d) < '15'::numeric)
+ Group Key: pagg_tab_p3.c
-> Seq Scan on pagg_tab_p3
(15 rows)
@@ -58,8 +58,8 @@ SELECT a, sum(b), avg(b), count(*), min(a), max(b) FROM pagg_tab GROUP BY a HAVI
Sort
Sort Key: pagg_tab_p1.a, (sum(pagg_tab_p1.b)), (avg(pagg_tab_p1.b))
-> Finalize HashAggregate
- Group Key: pagg_tab_p1.a
Filter: (avg(pagg_tab_p1.d) < '15'::numeric)
+ Group Key: pagg_tab_p1.a
-> Append
-> Partial HashAggregate
Group Key: pagg_tab_p1.a
@@ -180,20 +180,20 @@ SELECT c, sum(a), avg(b), count(*) FROM pagg_tab GROUP BY 1 HAVING avg(d) < 15 O
Sort Key: pagg_tab_p1.c, (sum(pagg_tab_p1.a)), (avg(pagg_tab_p1.b))
-> Append
-> GroupAggregate
- Group Key: pagg_tab_p1.c
Filter: (avg(pagg_tab_p1.d) < '15'::numeric)
+ Group Key: pagg_tab_p1.c
-> Sort
Sort Key: pagg_tab_p1.c
-> Seq Scan on pagg_tab_p1
-> GroupAggregate
- Group Key: pagg_tab_p2.c
Filter: (avg(pagg_tab_p2.d) < '15'::numeric)
+ Group Key: pagg_tab_p2.c
-> Sort
Sort Key: pagg_tab_p2.c
-> Seq Scan on pagg_tab_p2
-> GroupAggregate
- Group Key: pagg_tab_p3.c
Filter: (avg(pagg_tab_p3.d) < '15'::numeric)
+ Group Key: pagg_tab_p3.c
-> Sort
Sort Key: pagg_tab_p3.c
-> Seq Scan on pagg_tab_p3
@@ -218,8 +218,8 @@ SELECT a, sum(b), avg(b), count(*) FROM pagg_tab GROUP BY 1 HAVING avg(d) < 15 O
Sort
Sort Key: pagg_tab_p1.a, (sum(pagg_tab_p1.b)), (avg(pagg_tab_p1.b))
-> Finalize GroupAggregate
- Group Key: pagg_tab_p1.a
Filter: (avg(pagg_tab_p1.d) < '15'::numeric)
+ Group Key: pagg_tab_p1.a
-> Merge Append
Sort Key: pagg_tab_p1.a
-> Partial GroupAggregate
@@ -335,18 +335,20 @@ RESET enable_hashagg;
-- ROLLUP, partitionwise aggregation does not apply
EXPLAIN (COSTS OFF)
SELECT c, sum(a) FROM pagg_tab GROUP BY rollup(c) ORDER BY 1, 2;
- QUERY PLAN
--------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------
Sort
Sort Key: pagg_tab_p1.c, (sum(pagg_tab_p1.a))
-> MixedAggregate
- Hash Key: pagg_tab_p1.c
- Group Key: ()
+ Phase 1 using strategy "All & Hash":
+ Transition Function: 2 * int4_sum(TRANS, pagg_tab_p1.a)
+ All Group
+ Hash Group: pagg_tab_p1.c
-> Append
-> Seq Scan on pagg_tab_p1
-> Seq Scan on pagg_tab_p2
-> Seq Scan on pagg_tab_p3
-(9 rows)
+(11 rows)
-- ORDERED SET within the aggregate.
-- Full aggregation; since all the rows that belong to the same group come
@@ -522,8 +524,8 @@ SELECT t1.y, sum(t1.x), count(*) FROM pagg_tab1 t1, pagg_tab2 t2 WHERE t1.x = t2
Sort
Sort Key: t1.y, (sum(t1.x)), (count(*))
-> Finalize GroupAggregate
- Group Key: t1.y
Filter: (avg(t1.x) > '10'::numeric)
+ Group Key: t1.y
-> Merge Append
Sort Key: t1.y
-> Partial GroupAggregate
@@ -830,8 +832,8 @@ SELECT a, sum(b), avg(c), count(*) FROM pagg_tab_m GROUP BY a HAVING avg(c) < 22
Sort
Sort Key: pagg_tab_m_p1.a, (sum(pagg_tab_m_p1.b)), (avg(pagg_tab_m_p1.c))
-> Finalize HashAggregate
- Group Key: pagg_tab_m_p1.a
Filter: (avg(pagg_tab_m_p1.c) < '22'::numeric)
+ Group Key: pagg_tab_m_p1.a
-> Append
-> Partial HashAggregate
Group Key: pagg_tab_m_p1.a
@@ -864,16 +866,16 @@ SELECT a, sum(b), avg(c), count(*) FROM pagg_tab_m GROUP BY a, (a+b)/2 HAVING su
Sort Key: pagg_tab_m_p1.a, (sum(pagg_tab_m_p1.b)), (avg(pagg_tab_m_p1.c))
-> Append
-> HashAggregate
- Group Key: pagg_tab_m_p1.a, ((pagg_tab_m_p1.a + pagg_tab_m_p1.b) / 2)
Filter: (sum(pagg_tab_m_p1.b) < 50)
+ Group Key: pagg_tab_m_p1.a, ((pagg_tab_m_p1.a + pagg_tab_m_p1.b) / 2)
-> Seq Scan on pagg_tab_m_p1
-> HashAggregate
- Group Key: pagg_tab_m_p2.a, ((pagg_tab_m_p2.a + pagg_tab_m_p2.b) / 2)
Filter: (sum(pagg_tab_m_p2.b) < 50)
+ Group Key: pagg_tab_m_p2.a, ((pagg_tab_m_p2.a + pagg_tab_m_p2.b) / 2)
-> Seq Scan on pagg_tab_m_p2
-> HashAggregate
- Group Key: pagg_tab_m_p3.a, ((pagg_tab_m_p3.a + pagg_tab_m_p3.b) / 2)
Filter: (sum(pagg_tab_m_p3.b) < 50)
+ Group Key: pagg_tab_m_p3.a, ((pagg_tab_m_p3.a + pagg_tab_m_p3.b) / 2)
-> Seq Scan on pagg_tab_m_p3
(15 rows)
@@ -897,16 +899,16 @@ SELECT a, c, sum(b), avg(c), count(*) FROM pagg_tab_m GROUP BY (a+b)/2, 2, 1 HAV
Sort Key: pagg_tab_m_p1.a, pagg_tab_m_p1.c, (sum(pagg_tab_m_p1.b))
-> Append
-> HashAggregate
- Group Key: ((pagg_tab_m_p1.a + pagg_tab_m_p1.b) / 2), pagg_tab_m_p1.c, pagg_tab_m_p1.a
Filter: ((sum(pagg_tab_m_p1.b) = 50) AND (avg(pagg_tab_m_p1.c) > '25'::numeric))
+ Group Key: ((pagg_tab_m_p1.a + pagg_tab_m_p1.b) / 2), pagg_tab_m_p1.c, pagg_tab_m_p1.a
-> Seq Scan on pagg_tab_m_p1
-> HashAggregate
- Group Key: ((pagg_tab_m_p2.a + pagg_tab_m_p2.b) / 2), pagg_tab_m_p2.c, pagg_tab_m_p2.a
Filter: ((sum(pagg_tab_m_p2.b) = 50) AND (avg(pagg_tab_m_p2.c) > '25'::numeric))
+ Group Key: ((pagg_tab_m_p2.a + pagg_tab_m_p2.b) / 2), pagg_tab_m_p2.c, pagg_tab_m_p2.a
-> Seq Scan on pagg_tab_m_p2
-> HashAggregate
- Group Key: ((pagg_tab_m_p3.a + pagg_tab_m_p3.b) / 2), pagg_tab_m_p3.c, pagg_tab_m_p3.a
Filter: ((sum(pagg_tab_m_p3.b) = 50) AND (avg(pagg_tab_m_p3.c) > '25'::numeric))
+ Group Key: ((pagg_tab_m_p3.a + pagg_tab_m_p3.b) / 2), pagg_tab_m_p3.c, pagg_tab_m_p3.a
-> Seq Scan on pagg_tab_m_p3
(15 rows)
@@ -951,24 +953,24 @@ SELECT a, sum(b), array_agg(distinct c), count(*) FROM pagg_tab_ml GROUP BY a HA
Workers Planned: 2
-> Parallel Append
-> GroupAggregate
- Group Key: pagg_tab_ml_p2_s1.a
Filter: (avg(pagg_tab_ml_p2_s1.b) < '3'::numeric)
+ Group Key: pagg_tab_ml_p2_s1.a
-> Sort
Sort Key: pagg_tab_ml_p2_s1.a
-> Append
-> Seq Scan on pagg_tab_ml_p2_s1
-> Seq Scan on pagg_tab_ml_p2_s2
-> GroupAggregate
- Group Key: pagg_tab_ml_p3_s1.a
Filter: (avg(pagg_tab_ml_p3_s1.b) < '3'::numeric)
+ Group Key: pagg_tab_ml_p3_s1.a
-> Sort
Sort Key: pagg_tab_ml_p3_s1.a
-> Append
-> Seq Scan on pagg_tab_ml_p3_s1
-> Seq Scan on pagg_tab_ml_p3_s2
-> GroupAggregate
- Group Key: pagg_tab_ml_p1.a
Filter: (avg(pagg_tab_ml_p1.b) < '3'::numeric)
+ Group Key: pagg_tab_ml_p1.a
-> Sort
Sort Key: pagg_tab_ml_p1.a
-> Seq Scan on pagg_tab_ml_p1
@@ -997,24 +999,24 @@ SELECT a, sum(b), array_agg(distinct c), count(*) FROM pagg_tab_ml GROUP BY a HA
Workers Planned: 2
-> Parallel Append
-> GroupAggregate
- Group Key: pagg_tab_ml_p2_s1.a
Filter: (avg(pagg_tab_ml_p2_s1.b) < '3'::numeric)
+ Group Key: pagg_tab_ml_p2_s1.a
-> Sort
Sort Key: pagg_tab_ml_p2_s1.a
-> Append
-> Seq Scan on pagg_tab_ml_p2_s1
-> Seq Scan on pagg_tab_ml_p2_s2
-> GroupAggregate
- Group Key: pagg_tab_ml_p3_s1.a
Filter: (avg(pagg_tab_ml_p3_s1.b) < '3'::numeric)
+ Group Key: pagg_tab_ml_p3_s1.a
-> Sort
Sort Key: pagg_tab_ml_p3_s1.a
-> Append
-> Seq Scan on pagg_tab_ml_p3_s1
-> Seq Scan on pagg_tab_ml_p3_s2
-> GroupAggregate
- Group Key: pagg_tab_ml_p1.a
Filter: (avg(pagg_tab_ml_p1.b) < '3'::numeric)
+ Group Key: pagg_tab_ml_p1.a
-> Sort
Sort Key: pagg_tab_ml_p1.a
-> Seq Scan on pagg_tab_ml_p1
@@ -1031,12 +1033,12 @@ SELECT a, sum(b), count(*) FROM pagg_tab_ml GROUP BY a HAVING avg(b) < 3 ORDER B
Sort Key: pagg_tab_ml_p1.a, (sum(pagg_tab_ml_p1.b)), (count(*))
-> Append
-> HashAggregate
- Group Key: pagg_tab_ml_p1.a
Filter: (avg(pagg_tab_ml_p1.b) < '3'::numeric)
+ Group Key: pagg_tab_ml_p1.a
-> Seq Scan on pagg_tab_ml_p1
-> Finalize GroupAggregate
- Group Key: pagg_tab_ml_p2_s1.a
Filter: (avg(pagg_tab_ml_p2_s1.b) < '3'::numeric)
+ Group Key: pagg_tab_ml_p2_s1.a
-> Sort
Sort Key: pagg_tab_ml_p2_s1.a
-> Append
@@ -1047,8 +1049,8 @@ SELECT a, sum(b), count(*) FROM pagg_tab_ml GROUP BY a HAVING avg(b) < 3 ORDER B
Group Key: pagg_tab_ml_p2_s2.a
-> Seq Scan on pagg_tab_ml_p2_s2
-> Finalize GroupAggregate
- Group Key: pagg_tab_ml_p3_s1.a
Filter: (avg(pagg_tab_ml_p3_s1.b) < '3'::numeric)
+ Group Key: pagg_tab_ml_p3_s1.a
-> Sort
Sort Key: pagg_tab_ml_p3_s1.a
-> Append
@@ -1123,24 +1125,24 @@ SELECT a, sum(b), count(*) FROM pagg_tab_ml GROUP BY a, b, c HAVING avg(b) > 7 O
Sort Key: pagg_tab_ml_p1.a, (sum(pagg_tab_ml_p1.b)), (count(*))
-> Append
-> HashAggregate
- Group Key: pagg_tab_ml_p1.a, pagg_tab_ml_p1.b, pagg_tab_ml_p1.c
Filter: (avg(pagg_tab_ml_p1.b) > '7'::numeric)
+ Group Key: pagg_tab_ml_p1.a, pagg_tab_ml_p1.b, pagg_tab_ml_p1.c
-> Seq Scan on pagg_tab_ml_p1
-> HashAggregate
- Group Key: pagg_tab_ml_p2_s1.a, pagg_tab_ml_p2_s1.b, pagg_tab_ml_p2_s1.c
Filter: (avg(pagg_tab_ml_p2_s1.b) > '7'::numeric)
+ Group Key: pagg_tab_ml_p2_s1.a, pagg_tab_ml_p2_s1.b, pagg_tab_ml_p2_s1.c
-> Seq Scan on pagg_tab_ml_p2_s1
-> HashAggregate
- Group Key: pagg_tab_ml_p2_s2.a, pagg_tab_ml_p2_s2.b, pagg_tab_ml_p2_s2.c
Filter: (avg(pagg_tab_ml_p2_s2.b) > '7'::numeric)
+ Group Key: pagg_tab_ml_p2_s2.a, pagg_tab_ml_p2_s2.b, pagg_tab_ml_p2_s2.c
-> Seq Scan on pagg_tab_ml_p2_s2
-> HashAggregate
- Group Key: pagg_tab_ml_p3_s1.a, pagg_tab_ml_p3_s1.b, pagg_tab_ml_p3_s1.c
Filter: (avg(pagg_tab_ml_p3_s1.b) > '7'::numeric)
+ Group Key: pagg_tab_ml_p3_s1.a, pagg_tab_ml_p3_s1.b, pagg_tab_ml_p3_s1.c
-> Seq Scan on pagg_tab_ml_p3_s1
-> HashAggregate
- Group Key: pagg_tab_ml_p3_s2.a, pagg_tab_ml_p3_s2.b, pagg_tab_ml_p3_s2.c
Filter: (avg(pagg_tab_ml_p3_s2.b) > '7'::numeric)
+ Group Key: pagg_tab_ml_p3_s2.a, pagg_tab_ml_p3_s2.b, pagg_tab_ml_p3_s2.c
-> Seq Scan on pagg_tab_ml_p3_s2
(23 rows)
@@ -1175,8 +1177,8 @@ SELECT a, sum(b), count(*) FROM pagg_tab_ml GROUP BY a HAVING avg(b) < 3 ORDER B
Sort Key: pagg_tab_ml_p1.a, (sum(pagg_tab_ml_p1.b)), (count(*))
-> Append
-> Finalize GroupAggregate
- Group Key: pagg_tab_ml_p1.a
Filter: (avg(pagg_tab_ml_p1.b) < '3'::numeric)
+ Group Key: pagg_tab_ml_p1.a
-> Gather Merge
Workers Planned: 2
-> Sort
@@ -1185,8 +1187,8 @@ SELECT a, sum(b), count(*) FROM pagg_tab_ml GROUP BY a HAVING avg(b) < 3 ORDER B
Group Key: pagg_tab_ml_p1.a
-> Parallel Seq Scan on pagg_tab_ml_p1
-> Finalize GroupAggregate
- Group Key: pagg_tab_ml_p2_s1.a
Filter: (avg(pagg_tab_ml_p2_s1.b) < '3'::numeric)
+ Group Key: pagg_tab_ml_p2_s1.a
-> Gather Merge
Workers Planned: 2
-> Sort
@@ -1199,8 +1201,8 @@ SELECT a, sum(b), count(*) FROM pagg_tab_ml GROUP BY a HAVING avg(b) < 3 ORDER B
Group Key: pagg_tab_ml_p2_s2.a
-> Parallel Seq Scan on pagg_tab_ml_p2_s2
-> Finalize GroupAggregate
- Group Key: pagg_tab_ml_p3_s1.a
Filter: (avg(pagg_tab_ml_p3_s1.b) < '3'::numeric)
+ Group Key: pagg_tab_ml_p3_s1.a
-> Gather Merge
Workers Planned: 2
-> Sort
@@ -1281,24 +1283,24 @@ SELECT a, sum(b), count(*) FROM pagg_tab_ml GROUP BY a, b, c HAVING avg(b) > 7 O
Sort Key: pagg_tab_ml_p1.a, (sum(pagg_tab_ml_p1.b)), (count(*))
-> Parallel Append
-> HashAggregate
- Group Key: pagg_tab_ml_p1.a, pagg_tab_ml_p1.b, pagg_tab_ml_p1.c
Filter: (avg(pagg_tab_ml_p1.b) > '7'::numeric)
+ Group Key: pagg_tab_ml_p1.a, pagg_tab_ml_p1.b, pagg_tab_ml_p1.c
-> Seq Scan on pagg_tab_ml_p1
-> HashAggregate
- Group Key: pagg_tab_ml_p2_s1.a, pagg_tab_ml_p2_s1.b, pagg_tab_ml_p2_s1.c
Filter: (avg(pagg_tab_ml_p2_s1.b) > '7'::numeric)
+ Group Key: pagg_tab_ml_p2_s1.a, pagg_tab_ml_p2_s1.b, pagg_tab_ml_p2_s1.c
-> Seq Scan on pagg_tab_ml_p2_s1
-> HashAggregate
- Group Key: pagg_tab_ml_p2_s2.a, pagg_tab_ml_p2_s2.b, pagg_tab_ml_p2_s2.c
Filter: (avg(pagg_tab_ml_p2_s2.b) > '7'::numeric)
+ Group Key: pagg_tab_ml_p2_s2.a, pagg_tab_ml_p2_s2.b, pagg_tab_ml_p2_s2.c
-> Seq Scan on pagg_tab_ml_p2_s2
-> HashAggregate
- Group Key: pagg_tab_ml_p3_s1.a, pagg_tab_ml_p3_s1.b, pagg_tab_ml_p3_s1.c
Filter: (avg(pagg_tab_ml_p3_s1.b) > '7'::numeric)
+ Group Key: pagg_tab_ml_p3_s1.a, pagg_tab_ml_p3_s1.b, pagg_tab_ml_p3_s1.c
-> Seq Scan on pagg_tab_ml_p3_s1
-> HashAggregate
- Group Key: pagg_tab_ml_p3_s2.a, pagg_tab_ml_p3_s2.b, pagg_tab_ml_p3_s2.c
Filter: (avg(pagg_tab_ml_p3_s2.b) > '7'::numeric)
+ Group Key: pagg_tab_ml_p3_s2.a, pagg_tab_ml_p3_s2.b, pagg_tab_ml_p3_s2.c
-> Seq Scan on pagg_tab_ml_p3_s2
(25 rows)
@@ -1342,8 +1344,8 @@ SELECT x, sum(y), avg(y), count(*) FROM pagg_tab_para GROUP BY x HAVING avg(y) <
Sort
Sort Key: pagg_tab_para_p1.x, (sum(pagg_tab_para_p1.y)), (avg(pagg_tab_para_p1.y))
-> Finalize GroupAggregate
- Group Key: pagg_tab_para_p1.x
Filter: (avg(pagg_tab_para_p1.y) < '7'::numeric)
+ Group Key: pagg_tab_para_p1.x
-> Gather Merge
Workers Planned: 2
-> Sort
@@ -1379,8 +1381,8 @@ SELECT y, sum(x), avg(x), count(*) FROM pagg_tab_para GROUP BY y HAVING avg(x) <
Sort
Sort Key: pagg_tab_para_p1.y, (sum(pagg_tab_para_p1.x)), (avg(pagg_tab_para_p1.x))
-> Finalize GroupAggregate
- Group Key: pagg_tab_para_p1.y
Filter: (avg(pagg_tab_para_p1.x) < '12'::numeric)
+ Group Key: pagg_tab_para_p1.y
-> Gather Merge
Workers Planned: 2
-> Sort
@@ -1417,8 +1419,8 @@ SELECT x, sum(y), avg(y), count(*) FROM pagg_tab_para GROUP BY x HAVING avg(y) <
Sort
Sort Key: pagg_tab_para_p1.x, (sum(pagg_tab_para_p1.y)), (avg(pagg_tab_para_p1.y))
-> Finalize GroupAggregate
- Group Key: pagg_tab_para_p1.x
Filter: (avg(pagg_tab_para_p1.y) < '7'::numeric)
+ Group Key: pagg_tab_para_p1.x
-> Gather Merge
Workers Planned: 2
-> Sort
@@ -1451,8 +1453,8 @@ SELECT x, sum(y), avg(y), count(*) FROM pagg_tab_para GROUP BY x HAVING avg(y) <
Sort
Sort Key: pagg_tab_para_p1.x, (sum(pagg_tab_para_p1.y)), (avg(pagg_tab_para_p1.y))
-> Finalize GroupAggregate
- Group Key: pagg_tab_para_p1.x
Filter: (avg(pagg_tab_para_p1.y) < '7'::numeric)
+ Group Key: pagg_tab_para_p1.x
-> Gather Merge
Workers Planned: 2
-> Sort
@@ -1487,16 +1489,16 @@ SELECT x, sum(y), avg(y), count(*) FROM pagg_tab_para GROUP BY x HAVING avg(y) <
Sort Key: pagg_tab_para_p1.x, (sum(pagg_tab_para_p1.y)), (avg(pagg_tab_para_p1.y))
-> Append
-> HashAggregate
- Group Key: pagg_tab_para_p1.x
Filter: (avg(pagg_tab_para_p1.y) < '7'::numeric)
+ Group Key: pagg_tab_para_p1.x
-> Seq Scan on pagg_tab_para_p1
-> HashAggregate
- Group Key: pagg_tab_para_p2.x
Filter: (avg(pagg_tab_para_p2.y) < '7'::numeric)
+ Group Key: pagg_tab_para_p2.x
-> Seq Scan on pagg_tab_para_p2
-> HashAggregate
- Group Key: pagg_tab_para_p3.x
Filter: (avg(pagg_tab_para_p3.y) < '7'::numeric)
+ Group Key: pagg_tab_para_p3.x
-> Seq Scan on pagg_tab_para_p3
(15 rows)
diff --git a/src/test/regress/expected/select_distinct.out b/src/test/regress/expected/select_distinct.out
index fc93b33ee2b..e8e14292452 100644
--- a/src/test/regress/expected/select_distinct.out
+++ b/src/test/regress/expected/select_distinct.out
@@ -134,12 +134,16 @@ SELECT count(*) FROM
---------------------------------------------------------
Aggregate
Project: count(*)
+ Phase 1 using strategy "All":
+ Transition Function: int8inc(TRANS)
+ All Group
-> HashAggregate
Project: tenk1.two, tenk1.four, tenk1.two
- Group Key: tenk1.two, tenk1.four, tenk1.two
+ Phase 0 using strategy "Hash":
+ Hash Group: tenk1.two, tenk1.four, tenk1.two
-> Seq Scan on public.tenk1
Project: tenk1.two, tenk1.four, tenk1.two
-(7 rows)
+(11 rows)
SELECT count(*) FROM
(SELECT DISTINCT two, four, two FROM tenk1) ss;
diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out
index 3c03b171707..f561e41f6f3 100644
--- a/src/test/regress/expected/select_parallel.out
+++ b/src/test/regress/expected/select_parallel.out
@@ -966,6 +966,9 @@ explain (costs off, verbose)
----------------------------------------------------------------------------------------------
Aggregate
Project: count(*)
+ Phase 1 using strategy "All":
+ Transition Function: int8inc(TRANS)
+ All Group
-> Hash Semi Join
Hash Cond: ((a.unique1 = b.unique1) AND (a.two = (row_number() OVER (?))))
-> Gather
@@ -982,7 +985,7 @@ explain (costs off, verbose)
Workers Planned: 4
-> Parallel Index Only Scan using tenk1_unique1 on public.tenk1 b
Output: b.unique1
-(18 rows)
+(21 rows)
-- LIMIT/OFFSET within sub-selects can't be pushed to workers.
explain (costs off)
diff --git a/src/test/regress/expected/subselect.out b/src/test/regress/expected/subselect.out
index 90fe9fe9802..a51086a0254 100644
--- a/src/test/regress/expected/subselect.out
+++ b/src/test/regress/expected/subselect.out
@@ -979,10 +979,11 @@ select * from int4_tbl o where (f1, f1) in
Output: generate_series(1, 50), i.f1
-> HashAggregate
Project: i.f1
- Group Key: i.f1
+ Phase 0 using strategy "Hash":
+ Hash Group: i.f1
-> Seq Scan on public.int4_tbl i
Output: i.f1
-(19 rows)
+(20 rows)
select * from int4_tbl o where (f1, f1) in
(select f1, generate_series(1,50) / 10 g from int4_tbl i group by f1);
--
2.23.0.385.gbc12974a89
Attachment: v2-0007-WIP-explain-Output-hash-keys-in-verbose-mode.patch (text/x-diff; charset=us-ascii)
From dc26a1027f61cf9b45a2c3129fab222d96c83eda Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Thu, 26 Sep 2019 15:10:58 -0700
Subject: [PATCH v2 7/8] WIP: explain: Output hash keys in verbose mode.
Author:
Reviewed-By:
Discussion: https://postgr.es/m/
Backpatch:
---
src/backend/commands/explain.c | 29 ++++++-
src/test/regress/expected/join.out | 82 +++++++++++++++----
src/test/regress/expected/join_hash.out | 12 ++-
src/test/regress/expected/plpgsql.out | 2 +
src/test/regress/expected/select_parallel.out | 6 +-
5 files changed, 109 insertions(+), 22 deletions(-)
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 2f3bd8a459a..1f613d31376 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -96,7 +96,7 @@ static void show_tablesample(TableSampleClause *tsc, PlanState *planstate,
List *ancestors, ExplainState *es);
static void show_sort_info(SortState *sortstate, ExplainState *es);
static void show_agg_info(AggState *aggstate, List *ancestors, ExplainState *es);
-static void show_hash_info(HashState *hashstate, ExplainState *es);
+static void show_hash_info(HashState *hashstate, List *ancestors, ExplainState *es);
static void show_tidbitmap_info(BitmapHeapScanState *planstate,
ExplainState *es);
static void show_instrumentation_count(const char *qlabel, int which,
@@ -1863,6 +1863,17 @@ ExplainNode(PlanState *planstate, List *ancestors,
if (plan->qual)
show_instrumentation_count("Rows Removed by Filter", 2,
planstate, es);
+ if (es->verbose)
+ {
+ ListCell *lc1, *lc2;
+
+ forboth(lc1, ((HashJoin *) plan)->hashkeys,
+ lc2, ((HashJoinState *) planstate)->hj_OuterHashKeys)
+ {
+ show_expression(lfirst(lc1), lfirst(lc2), "Outer Hash Key",
+ planstate, ancestors, true, es);
+ }
+ }
break;
case T_Agg:
show_upper_qual(plan->qual, planstate->qual, "Filter", planstate,
@@ -1903,7 +1914,7 @@ ExplainNode(PlanState *planstate, List *ancestors,
es);
break;
case T_Hash:
- show_hash_info(castNode(HashState, planstate), es);
+ show_hash_info(castNode(HashState, planstate), ancestors, es);
break;
default:
break;
@@ -3087,7 +3098,7 @@ show_agg_info(AggState *aggstate, List *ancestors, ExplainState *es)
* Show information on hash buckets/batches.
*/
static void
-show_hash_info(HashState *hashstate, ExplainState *es)
+show_hash_info(HashState *hashstate, List *ancestors, ExplainState *es)
{
HashInstrumentation hinstrument = {0};
@@ -3184,6 +3195,18 @@ show_hash_info(HashState *hashstate, ExplainState *es)
spacePeakKb);
}
}
+
+ if (es->verbose)
+ {
+ ListCell *lc1, *lc2;
+
+ forboth(lc1, ((Hash *) hashstate->ps.plan)->hashkeys,
+ lc2, hashstate->hashkeys)
+ {
+ show_expression(lfirst(lc1), (ExprState *) lfirst(lc2), "Hash Key",
+ &hashstate->ps, ancestors, true, es);
+ }
+ }
}
/*
diff --git a/src/test/regress/expected/join.out b/src/test/regress/expected/join.out
index 1ddc4423888..2ba48596622 100644
--- a/src/test/regress/expected/join.out
+++ b/src/test/regress/expected/join.out
@@ -3792,6 +3792,7 @@ select t1.* from
Hash Left Join
Project: t1.f1
Hash Cond: (i8.q2 = i4.f1)
+ Outer Hash Key: i8.q2
-> Nested Loop Left Join
Project: t1.f1, i8.q2
Join Filter: (t1.f1 = '***'::text)
@@ -3802,24 +3803,29 @@ select t1.* from
-> Hash Right Join
Project: i8.q2
Hash Cond: ((NULL::integer) = i8b1.q2)
+ Outer Hash Key: (NULL::integer)
-> Hash Join
Project: i8.q2, (NULL::integer)
Hash Cond: (i8.q1 = i8b2.q1)
+ Outer Hash Key: i8.q1
-> Seq Scan on public.int8_tbl i8
Output: i8.q1, i8.q2
-> Hash
Output: i8b2.q1, (NULL::integer)
+ Hash Key: i8b2.q1
-> Seq Scan on public.int8_tbl i8b2
Project: i8b2.q1, NULL::integer
-> Hash
Output: i8b1.q2
+ Hash Key: i8b1.q2
-> Seq Scan on public.int8_tbl i8b1
Project: i8b1.q2
-> Hash
Output: i4.f1
+ Hash Key: i4.f1
-> Seq Scan on public.int4_tbl i4
Output: i4.f1
-(30 rows)
+(36 rows)
select t1.* from
text_tbl t1
@@ -3853,6 +3859,7 @@ select t1.* from
Hash Left Join
Project: t1.f1
Hash Cond: (i8.q2 = i4.f1)
+ Outer Hash Key: i8.q2
-> Nested Loop Left Join
Project: t1.f1, i8.q2
Join Filter: (t1.f1 = '***'::text)
@@ -3863,9 +3870,11 @@ select t1.* from
-> Hash Right Join
Project: i8.q2
Hash Cond: ((NULL::integer) = i8b1.q2)
+ Outer Hash Key: (NULL::integer)
-> Hash Right Join
Project: i8.q2, (NULL::integer)
Hash Cond: (i8b2.q1 = i8.q1)
+ Outer Hash Key: i8b2.q1
-> Nested Loop
Project: i8b2.q1, NULL::integer
-> Seq Scan on public.int8_tbl i8b2
@@ -3874,17 +3883,20 @@ select t1.* from
-> Seq Scan on public.int4_tbl i4b2
-> Hash
Output: i8.q1, i8.q2
+ Hash Key: i8.q1
-> Seq Scan on public.int8_tbl i8
Output: i8.q1, i8.q2
-> Hash
Output: i8b1.q2
+ Hash Key: i8b1.q2
-> Seq Scan on public.int8_tbl i8b1
Project: i8b1.q2
-> Hash
Output: i4.f1
+ Hash Key: i4.f1
-> Seq Scan on public.int4_tbl i4
Output: i4.f1
-(34 rows)
+(40 rows)
select t1.* from
text_tbl t1
@@ -3919,6 +3931,7 @@ select t1.* from
Hash Left Join
Project: t1.f1
Hash Cond: (i8.q2 = i4.f1)
+ Outer Hash Key: i8.q2
-> Nested Loop Left Join
Project: t1.f1, i8.q2
Join Filter: (t1.f1 = '***'::text)
@@ -3929,31 +3942,38 @@ select t1.* from
-> Hash Right Join
Project: i8.q2
Hash Cond: ((NULL::integer) = i8b1.q2)
+ Outer Hash Key: (NULL::integer)
-> Hash Right Join
Project: i8.q2, (NULL::integer)
Hash Cond: (i8b2.q1 = i8.q1)
+ Outer Hash Key: i8b2.q1
-> Hash Join
Project: i8b2.q1, NULL::integer
Hash Cond: (i8b2.q1 = i4b2.f1)
+ Outer Hash Key: i8b2.q1
-> Seq Scan on public.int8_tbl i8b2
Output: i8b2.q1, i8b2.q2
-> Hash
Output: i4b2.f1
+ Hash Key: i4b2.f1
-> Seq Scan on public.int4_tbl i4b2
Output: i4b2.f1
-> Hash
Output: i8.q1, i8.q2
+ Hash Key: i8.q1
-> Seq Scan on public.int8_tbl i8
Output: i8.q1, i8.q2
-> Hash
Output: i8b1.q2
+ Hash Key: i8b1.q2
-> Seq Scan on public.int8_tbl i8b1
Project: i8b1.q2
-> Hash
Output: i4.f1
+ Hash Key: i4.f1
-> Seq Scan on public.int4_tbl i4
Output: i4.f1
-(37 rows)
+(45 rows)
select t1.* from
text_tbl t1
@@ -4177,6 +4197,7 @@ where ss1.c2 = 0;
-> Hash Join
Project: i41.f1, i42.f1, i8.q1, i8.q2, i43.f1, 42
Hash Cond: (i41.f1 = i42.f1)
+ Outer Hash Key: i41.f1
-> Nested Loop
Project: i8.q1, i8.q2, i43.f1, i41.f1
-> Nested Loop
@@ -4191,13 +4212,14 @@ where ss1.c2 = 0;
Output: i41.f1
-> Hash
Output: i42.f1
+ Hash Key: i42.f1
-> Seq Scan on public.int4_tbl i42
Output: i42.f1
-> Limit
Output: (i41.f1), (i8.q1), (i8.q2), (i42.f1), (i43.f1), ((42))
-> Seq Scan on public.text_tbl
Project: i41.f1, i8.q1, i8.q2, i42.f1, i43.f1, (42)
-(25 rows)
+(27 rows)
select ss2.* from
int4_tbl i41
@@ -5259,13 +5281,15 @@ select * from int4_tbl i left join
Hash Left Join
Project: i.f1, j.f1
Hash Cond: (i.f1 = j.f1)
+ Outer Hash Key: i.f1
-> Seq Scan on public.int4_tbl i
Output: i.f1
-> Hash
Output: j.f1
+ Hash Key: j.f1
-> Seq Scan on public.int2_tbl j
Output: j.f1
-(9 rows)
+(11 rows)
select * from int4_tbl i left join
lateral (select * from int2_tbl j where i.f1 = j.f1) k on true;
@@ -5317,14 +5341,16 @@ select * from int4_tbl a,
-> Hash Left Join
Project: b.f1, c.q1, c.q2
Hash Cond: (b.f1 = c.q1)
+ Outer Hash Key: b.f1
-> Seq Scan on public.int4_tbl b
Output: b.f1
-> Hash
Output: c.q1, c.q2
+ Hash Key: c.q1
-> Seq Scan on public.int8_tbl c
Output: c.q1, c.q2
Filter: (a.f1 = c.q2)
-(14 rows)
+(16 rows)
select * from int4_tbl a,
lateral (
@@ -5449,26 +5475,30 @@ select * from
-> Hash Right Join
Project: c.q1, c.q2, a.q1, a.q2, b.q1, d.q1, (COALESCE(b.q2, '42'::bigint)), (COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2))
Hash Cond: (d.q1 = c.q2)
+ Outer Hash Key: d.q1
-> Nested Loop
Project: a.q1, a.q2, b.q1, d.q1, (COALESCE(b.q2, '42'::bigint)), (COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2))
-> Hash Left Join
Project: a.q1, a.q2, b.q1, (COALESCE(b.q2, '42'::bigint))
Hash Cond: (a.q2 = b.q1)
+ Outer Hash Key: a.q2
-> Seq Scan on public.int8_tbl a
Output: a.q1, a.q2
-> Hash
Output: b.q1, (COALESCE(b.q2, '42'::bigint))
+ Hash Key: b.q1
-> Seq Scan on public.int8_tbl b
Project: b.q1, COALESCE(b.q2, '42'::bigint)
-> Seq Scan on public.int8_tbl d
Project: d.q1, COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2)
-> Hash
Output: c.q1, c.q2
+ Hash Key: c.q2
-> Seq Scan on public.int8_tbl c
Output: c.q1, c.q2
-> Result
Project: (COALESCE((COALESCE(b.q2, '42'::bigint)), d.q2))
-(24 rows)
+(28 rows)
-- case that breaks the old ph_may_need optimization
explain (verbose, costs off)
@@ -5490,11 +5520,13 @@ select c.*,a.*,ss1.q1,ss2.q1,ss3.* from
-> Hash Right Join
Project: c.q1, c.q2, a.q1, a.q2, b.q1, d.q1, (COALESCE((COALESCE(b.q2, (b2.f1)::bigint)), d.q2))
Hash Cond: (d.q1 = c.q2)
+ Outer Hash Key: d.q1
-> Nested Loop
Project: a.q1, a.q2, b.q1, d.q1, (COALESCE((COALESCE(b.q2, (b2.f1)::bigint)), d.q2))
-> Hash Right Join
Project: a.q1, a.q2, b.q1, (COALESCE(b.q2, (b2.f1)::bigint))
Hash Cond: (b.q1 = a.q2)
+ Outer Hash Key: b.q1
-> Nested Loop
Project: b.q1, COALESCE(b.q2, (b2.f1)::bigint)
Join Filter: (b.q1 < b2.f1)
@@ -5506,19 +5538,21 @@ select c.*,a.*,ss1.q1,ss2.q1,ss3.* from
Output: b2.f1
-> Hash
Output: a.q1, a.q2
+ Hash Key: a.q2
-> Seq Scan on public.int8_tbl a
Output: a.q1, a.q2
-> Seq Scan on public.int8_tbl d
Project: d.q1, COALESCE((COALESCE(b.q2, (b2.f1)::bigint)), d.q2)
-> Hash
Output: c.q1, c.q2
+ Hash Key: c.q2
-> Seq Scan on public.int8_tbl c
Output: c.q1, c.q2
-> Materialize
Output: i.f1
-> Seq Scan on public.int4_tbl i
Output: i.f1
-(34 rows)
+(38 rows)
-- check processing of postponed quals (bug #9041)
explain (verbose, costs off)
@@ -5791,10 +5825,12 @@ select t1.b, ss.phv from join_ut1 t1 left join lateral
-> Hash Join
Project: t2.a, LEAST(t1.a, t2.a, t3.a)
Hash Cond: (t3.b = t2.a)
+ Outer Hash Key: t3.b
-> Seq Scan on public.join_ut1 t3
Output: t3.a, t3.b, t3.c
-> Hash
Output: t2.a
+ Hash Key: t2.a
-> Append
-> Seq Scan on public.join_pt1p1p1 t2
Project: t2.a
@@ -5802,7 +5838,7 @@ select t1.b, ss.phv from join_ut1 t1 left join lateral
-> Seq Scan on public.join_pt1p2 t2_1
Project: t2_1.a
Filter: (t1.a = t2_1.a)
-(21 rows)
+(23 rows)
select t1.b, ss.phv from join_ut1 t1 left join lateral
(select t2.a as t2a, t3.a t3a, least(t1.a, t2.a, t3.a) phv
@@ -5872,13 +5908,15 @@ select * from j1 inner join j2 on j1.id = j2.id;
Project: j1.id, j2.id
Inner Unique: true
Hash Cond: (j1.id = j2.id)
+ Outer Hash Key: j1.id
-> Seq Scan on public.j1
Output: j1.id
-> Hash
Output: j2.id
+ Hash Key: j2.id
-> Seq Scan on public.j2
Output: j2.id
-(10 rows)
+(12 rows)
-- ensure join is not unique when not an equi-join
explain (verbose, costs off)
@@ -5905,13 +5943,15 @@ select * from j1 inner join j3 on j1.id = j3.id;
Project: j1.id, j3.id
Inner Unique: true
Hash Cond: (j3.id = j1.id)
+ Outer Hash Key: j3.id
-> Seq Scan on public.j3
Output: j3.id
-> Hash
Output: j1.id
+ Hash Key: j1.id
-> Seq Scan on public.j1
Output: j1.id
-(10 rows)
+(12 rows)
-- ensure left join is marked as unique
explain (verbose, costs off)
@@ -5922,13 +5962,15 @@ select * from j1 left join j2 on j1.id = j2.id;
Project: j1.id, j2.id
Inner Unique: true
Hash Cond: (j1.id = j2.id)
+ Outer Hash Key: j1.id
-> Seq Scan on public.j1
Output: j1.id
-> Hash
Output: j2.id
+ Hash Key: j2.id
-> Seq Scan on public.j2
Output: j2.id
-(10 rows)
+(12 rows)
-- ensure right join is marked as unique
explain (verbose, costs off)
@@ -5939,13 +5981,15 @@ select * from j1 right join j2 on j1.id = j2.id;
Project: j1.id, j2.id
Inner Unique: true
Hash Cond: (j2.id = j1.id)
+ Outer Hash Key: j2.id
-> Seq Scan on public.j2
Output: j2.id
-> Hash
Output: j1.id
+ Hash Key: j1.id
-> Seq Scan on public.j1
Output: j1.id
-(10 rows)
+(12 rows)
-- ensure full join is marked as unique
explain (verbose, costs off)
@@ -5956,13 +6000,15 @@ select * from j1 full join j2 on j1.id = j2.id;
Project: j1.id, j2.id
Inner Unique: true
Hash Cond: (j1.id = j2.id)
+ Outer Hash Key: j1.id
-> Seq Scan on public.j1
Output: j1.id
-> Hash
Output: j2.id
+ Hash Key: j2.id
-> Seq Scan on public.j2
Output: j2.id
-(10 rows)
+(12 rows)
-- a clauseless (cross) join can't be unique
explain (verbose, costs off)
@@ -5988,13 +6034,15 @@ select * from j1 natural join j2;
Project: j1.id
Inner Unique: true
Hash Cond: (j1.id = j2.id)
+ Outer Hash Key: j1.id
-> Seq Scan on public.j1
Output: j1.id
-> Hash
Output: j2.id
+ Hash Key: j2.id
-> Seq Scan on public.j2
Output: j2.id
-(10 rows)
+(12 rows)
-- ensure a distinct clause allows the inner to become unique
explain (verbose, costs off)
@@ -6170,6 +6218,7 @@ where exists (select 1 from tenk1 t3
-> Hash Join
Project: t1.unique1, t3.tenthous
Hash Cond: (t3.thousand = t1.unique1)
+ Outer Hash Key: t3.thousand
-> HashAggregate
Project: t3.thousand, t3.tenthous
Phase 0 using strategy "Hash":
@@ -6178,13 +6227,14 @@ where exists (select 1 from tenk1 t3
Output: t3.thousand, t3.tenthous
-> Hash
Output: t1.unique1
+ Hash Key: t1.unique1
-> Index Only Scan using onek_unique1 on public.onek t1
Output: t1.unique1
Index Cond: (t1.unique1 < 1)
-> Index Only Scan using tenk1_hundred on public.tenk1 t2
Output: t2.hundred
Index Cond: (t2.hundred = t3.tenthous)
-(19 rows)
+(21 rows)
-- ... unless it actually is unique
create table j3 as select unique1, tenthous from onek;
diff --git a/src/test/regress/expected/join_hash.out b/src/test/regress/expected/join_hash.out
index 4e405ebbd76..379b3b1566e 100644
--- a/src/test/regress/expected/join_hash.out
+++ b/src/test/regress/expected/join_hash.out
@@ -919,6 +919,8 @@ WHERE
Project: hjtest_1.a, hjtest_2.a, (hjtest_1.tableoid)::regclass, (hjtest_2.tableoid)::regclass
Hash Cond: ((hjtest_1.id = (SubPlan 1)) AND ((SubPlan 2) = (SubPlan 3)))
Join Filter: (hjtest_1.a <> hjtest_2.b)
+ Outer Hash Key: hjtest_1.id
+ Outer Hash Key: (SubPlan 2)
-> Seq Scan on public.hjtest_1
Project: hjtest_1.a, hjtest_1.tableoid, hjtest_1.id, hjtest_1.b
Filter: ((SubPlan 4) < 50)
@@ -927,6 +929,8 @@ WHERE
Project: (hjtest_1.b * 5)
-> Hash
Output: hjtest_2.a, hjtest_2.tableoid, hjtest_2.id, hjtest_2.c, hjtest_2.b
+ Hash Key: (SubPlan 1)
+ Hash Key: (SubPlan 3)
-> Seq Scan on public.hjtest_2
Project: hjtest_2.a, hjtest_2.tableoid, hjtest_2.id, hjtest_2.c, hjtest_2.b
Filter: ((SubPlan 5) < 55)
@@ -943,7 +947,7 @@ WHERE
SubPlan 2
-> Result
Project: (hjtest_1.b * 5)
-(28 rows)
+(32 rows)
SELECT hjtest_1.a a1, hjtest_2.a a2,hjtest_1.tableoid::regclass t1, hjtest_2.tableoid::regclass t2
FROM hjtest_1, hjtest_2
@@ -973,6 +977,8 @@ WHERE
Project: hjtest_1.a, hjtest_2.a, (hjtest_1.tableoid)::regclass, (hjtest_2.tableoid)::regclass
Hash Cond: (((SubPlan 1) = hjtest_1.id) AND ((SubPlan 3) = (SubPlan 2)))
Join Filter: (hjtest_1.a <> hjtest_2.b)
+ Outer Hash Key: (SubPlan 1)
+ Outer Hash Key: (SubPlan 3)
-> Seq Scan on public.hjtest_2
Project: hjtest_2.a, hjtest_2.tableoid, hjtest_2.id, hjtest_2.c, hjtest_2.b
Filter: ((SubPlan 5) < 55)
@@ -981,6 +987,8 @@ WHERE
Project: (hjtest_2.c * 5)
-> Hash
Output: hjtest_1.a, hjtest_1.tableoid, hjtest_1.id, hjtest_1.b
+ Hash Key: hjtest_1.id
+ Hash Key: (SubPlan 2)
-> Seq Scan on public.hjtest_1
Project: hjtest_1.a, hjtest_1.tableoid, hjtest_1.id, hjtest_1.b
Filter: ((SubPlan 4) < 50)
@@ -997,7 +1005,7 @@ WHERE
SubPlan 3
-> Result
Project: (hjtest_2.c * 5)
-(28 rows)
+(32 rows)
SELECT hjtest_1.a a1, hjtest_2.a a2,hjtest_1.tableoid::regclass t1, hjtest_2.tableoid::regclass t2
FROM hjtest_2, hjtest_1
diff --git a/src/test/regress/expected/plpgsql.out b/src/test/regress/expected/plpgsql.out
index 92421090755..9d06f8467b2 100644
--- a/src/test/regress/expected/plpgsql.out
+++ b/src/test/regress/expected/plpgsql.out
@@ -5209,10 +5209,12 @@ UPDATE transition_table_base
INFO: Hash Full Join
Project: COALESCE(ot.id, nt.id), ot.val, nt.val
Hash Cond: (ot.id = nt.id)
+ Outer Hash Key: ot.id
-> Named Tuplestore Scan
Output: ot.id, ot.val
-> Hash
Output: nt.id, nt.val
+ Hash Key: nt.id
-> Named Tuplestore Scan
Output: nt.id, nt.val
diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out
index f561e41f6f3..76be808c127 100644
--- a/src/test/regress/expected/select_parallel.out
+++ b/src/test/regress/expected/select_parallel.out
@@ -971,6 +971,8 @@ explain (costs off, verbose)
All Group
-> Hash Semi Join
Hash Cond: ((a.unique1 = b.unique1) AND (a.two = (row_number() OVER (?))))
+ Outer Hash Key: a.unique1
+ Outer Hash Key: a.two
-> Gather
Output: a.unique1, a.two
Workers Planned: 4
@@ -978,6 +980,8 @@ explain (costs off, verbose)
Project: a.unique1, a.two
-> Hash
Output: b.unique1, (row_number() OVER (?))
+ Hash Key: b.unique1
+ Hash Key: (row_number() OVER (?))
-> WindowAgg
Project: b.unique1, row_number() OVER (?)
-> Gather
@@ -985,7 +989,7 @@ explain (costs off, verbose)
Workers Planned: 4
-> Parallel Index Only Scan using tenk1_unique1 on public.tenk1 b
Output: b.unique1
-(21 rows)
+(25 rows)
-- LIMIT/OFFSET within sub-selects can't be pushed to workers.
explain (costs off)
--
2.23.0.385.gbc12974a89
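To exercise the v2-0007 behaviour without running the full regression suite, a session like the following should suffice. This is a hypothetical example (table names made up), assuming the patch applies cleanly:

```sql
-- Any hash join will do; VERBOSE is what enables the new hash key lines.
CREATE TEMP TABLE t1(id int);
CREATE TEMP TABLE t2(id int);
EXPLAIN (VERBOSE, COSTS OFF)
SELECT * FROM t1 JOIN t2 USING (id);
-- With the patch applied, the plan should additionally contain
--   Outer Hash Key: t1.id   (under the Hash Join node)
--   Hash Key: t2.id         (under the Hash node)
```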
Attachment: v2-0008-jit-Add-tests.patch (text/x-diff; charset=us-ascii)
From 6887712ec32d01984135056ec5828ef0a442e552 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Mon, 28 Oct 2019 17:01:42 -0700
Subject: [PATCH v2 8/8] jit: Add tests.
Author:
Reviewed-By:
Discussion: https://postgr.es/m/
Backpatch:
---
src/test/regress/expected/jit.out | 491 ++++++++++++++++++++++++++++
src/test/regress/expected/jit_0.out | 5 +
src/test/regress/parallel_schedule | 2 +-
src/test/regress/sql/jit.sql | 166 ++++++++++
4 files changed, 663 insertions(+), 1 deletion(-)
create mode 100644 src/test/regress/expected/jit.out
create mode 100644 src/test/regress/expected/jit_0.out
create mode 100644 src/test/regress/sql/jit.sql
diff --git a/src/test/regress/expected/jit.out b/src/test/regress/expected/jit.out
new file mode 100644
index 00000000000..b1981df9fed
--- /dev/null
+++ b/src/test/regress/expected/jit.out
@@ -0,0 +1,491 @@
+/* skip test if JIT is not available */
+SELECT NOT (pg_jit_available() AND current_setting('jit')::bool)
+ AS skip_test \gset
+\if :skip_test
+\quit
+\endif
+-- start with a known baseline
+set jit_expressions = true;
+set jit_tuple_deforming = true;
+-- to reliably test, despite costs varying between platforms
+set jit_above_cost = 0;
+-- to make the bulk of the test cheaper
+set jit_optimize_above_cost = -1;
+set jit_inline_above_cost = -1;
+CREATE TABLE jittest_simple(id serial primary key, data text);
+INSERT INTO jittest_simple(data) VALUES('row1');
+INSERT INTO jittest_simple(data) VALUES('row2');
+-- verify that a simple relation-less query can be JITed
+BEGIN;
+SET LOCAL jit = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT txid_current() = txid_current();
+ QUERY PLAN
+---------------------------------------------------------------
+ Result
+ Project: (txid_current() = txid_current()); JIT-Expr: false
+(2 rows)
+
+SELECT txid_current() = txid_current();
+ ?column?
+----------
+ t
+(1 row)
+
+COMMIT;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT txid_current() = txid_current();
+ QUERY PLAN
+---------------------------------------------------------------------------------
+ Result
+ Project: (txid_current() = txid_current()); JIT-Expr: evalexpr_0_0
+ JIT:
+ Functions: 1 (1 for expression evaluation)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(5 rows)
+
+SELECT txid_current() = txid_current();
+ ?column?
+----------
+ t
+(1 row)
+
+-- verify that tuple deforming for a plain seqscan is JITed when projecting
+BEGIN;
+SET LOCAL jit = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT data FROM jittest_simple;
+ QUERY PLAN
+-----------------------------------
+ Seq Scan on public.jittest_simple
+ Project: data; JIT-Expr: false
+(2 rows)
+
+SELECT data FROM jittest_simple;
+ data
+------
+ row1
+ row2
+(2 rows)
+
+COMMIT;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT data FROM jittest_simple;
+ QUERY PLAN
+---------------------------------------------------------------------------------
+ Seq Scan on public.jittest_simple
+ Project: data; JIT-Expr: evalexpr_0_0, JIT-Deform-Scan: deform_0_1
+ JIT:
+ Functions: 2 (1 for expression evaluation, 1 for tuple deforming)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(5 rows)
+
+SELECT data FROM jittest_simple;
+ data
+------
+ row1
+ row2
+(2 rows)
+
+-- unfortunately, the physical tlist optimization may currently prevent
+-- JITed tuple deforming from taking effect
+BEGIN;
+SET LOCAL jit = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT * FROM jittest_simple;
+ QUERY PLAN
+-----------------------------------
+ Seq Scan on public.jittest_simple
+ Output: id, data
+(2 rows)
+
+SELECT * FROM jittest_simple;
+ id | data
+----+------
+ 1 | row1
+ 2 | row2
+(2 rows)
+
+COMMIT;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT * FROM jittest_simple;
+ QUERY PLAN
+-----------------------------------
+ Seq Scan on public.jittest_simple
+ Output: id, data
+(2 rows)
+
+SELECT * FROM jittest_simple;
+ id | data
+----+------
+ 1 | row1
+ 2 | row2
+(2 rows)
+
+-- check that tuple deforming on wide tables works
+BEGIN;
+SET LOCAL jit_tuple_deforming = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT firstc, lastc FROM extra_wide_table;
+ QUERY PLAN
+----------------------------------------------------------------------------------
+ Seq Scan on public.extra_wide_table
+ Project: firstc, lastc; JIT-Expr: evalexpr_0_0, JIT-Deform-Scan: false
+ JIT:
+ Functions: 1 (1 for expression evaluation)
+ Options: Inlining false, Optimization false, Expressions true, Deforming false
+(5 rows)
+
+SELECT firstc, lastc FROM extra_wide_table;
+ firstc | lastc
+-----------+----------
+ first col | last col
+(1 row)
+
+COMMIT;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT firstc, lastc FROM extra_wide_table;
+ QUERY PLAN
+---------------------------------------------------------------------------------
+ Seq Scan on public.extra_wide_table
+ Project: firstc, lastc; JIT-Expr: evalexpr_0_0, JIT-Deform-Scan: deform_0_1
+ JIT:
+ Functions: 2 (1 for expression evaluation, 1 for tuple deforming)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(5 rows)
+
+SELECT firstc, lastc FROM extra_wide_table;
+ firstc | lastc
+-----------+----------
+ first col | last col
+(1 row)
+
+-----
+-- test costing
+-----
+-- don't perform JIT compilation unless worthwhile
+BEGIN;
+SET LOCAL jit_above_cost = 8000000000;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+ QUERY PLAN
+--------------------------------------
+ Seq Scan on public.jittest_simple
+ Project: tableoid; JIT-Expr: false
+(2 rows)
+
+SET LOCAL enable_seqscan = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+ QUERY PLAN
+---------------------------------------------------------------------------------
+ Seq Scan on public.jittest_simple
+ Project: tableoid; JIT-Expr: evalexpr_0_0
+ JIT:
+ Functions: 1 (1 for expression evaluation)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(5 rows)
+
+COMMIT;
+-- optimize once expensive enough
+BEGIN;
+SET LOCAL jit_above_cost = 0;
+SET LOCAL jit_optimize_above_cost = 8000000000;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+ QUERY PLAN
+---------------------------------------------------------------------------------
+ Seq Scan on public.jittest_simple
+ Project: tableoid; JIT-Expr: evalexpr_0_0
+ JIT:
+ Functions: 1 (1 for expression evaluation)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(5 rows)
+
+SET LOCAL enable_seqscan = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+ QUERY PLAN
+--------------------------------------------------------------------------------
+ Seq Scan on public.jittest_simple
+ Project: tableoid; JIT-Expr: evalexpr_0_0
+ JIT:
+ Functions: 1 (1 for expression evaluation)
+ Options: Inlining false, Optimization true, Expressions true, Deforming true
+(5 rows)
+
+COMMIT;
+-- behave sanely if optimization cost is below general JIT costs
+BEGIN;
+SET LOCAL jit_above_cost = 8000000000;
+SET LOCAL jit_optimize_above_cost = 0;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+ QUERY PLAN
+--------------------------------------
+ Seq Scan on public.jittest_simple
+ Project: tableoid; JIT-Expr: false
+(2 rows)
+
+SET LOCAL enable_seqscan = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+ QUERY PLAN
+--------------------------------------------------------------------------------
+ Seq Scan on public.jittest_simple
+ Project: tableoid; JIT-Expr: evalexpr_0_0
+ JIT:
+ Functions: 1 (1 for expression evaluation)
+ Options: Inlining false, Optimization true, Expressions true, Deforming true
+(5 rows)
+
+COMMIT;
+-- perform inlining once expensive enough
+BEGIN;
+SET LOCAL jit_above_cost = 0;
+SET LOCAL jit_inline_above_cost = 8000000000;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+ QUERY PLAN
+---------------------------------------------------------------------------------
+ Seq Scan on public.jittest_simple
+ Project: tableoid; JIT-Expr: evalexpr_0_0
+ JIT:
+ Functions: 1 (1 for expression evaluation)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(5 rows)
+
+SET LOCAL enable_seqscan = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+ QUERY PLAN
+--------------------------------------------------------------------------------
+ Seq Scan on public.jittest_simple
+ Project: tableoid; JIT-Expr: evalexpr_0_0
+ JIT:
+ Functions: 1 (1 for expression evaluation)
+ Options: Inlining true, Optimization false, Expressions true, Deforming true
+(5 rows)
+
+COMMIT;
+-- perform inlining and optimization once expensive enough
+BEGIN;
+SET LOCAL jit_above_cost = 0;
+SET LOCAL jit_inline_above_cost = 8000000000;
+SET LOCAL jit_optimize_above_cost = 8000000000;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+ QUERY PLAN
+---------------------------------------------------------------------------------
+ Seq Scan on public.jittest_simple
+ Project: tableoid; JIT-Expr: evalexpr_0_0
+ JIT:
+ Functions: 1 (1 for expression evaluation)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(5 rows)
+
+SET LOCAL enable_seqscan = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+ QUERY PLAN
+-------------------------------------------------------------------------------
+ Seq Scan on public.jittest_simple
+ Project: tableoid; JIT-Expr: evalexpr_0_0
+ JIT:
+ Functions: 1 (1 for expression evaluation)
+ Options: Inlining true, Optimization true, Expressions true, Deforming true
+(5 rows)
+
+COMMIT;
+-- check that inner/outer tuple deforming can be inferred for upper nodes, join case
+BEGIN;
+SET LOCAL enable_hashjoin = true;
+SET LOCAL enable_mergejoin = false;
+SET LOCAL enable_nestloop = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT a.data || b.data FROM jittest_simple a JOIN jittest_simple b USING(id);
+ QUERY PLAN
+-------------------------------------------------------------------------------------------------------------------
+ Hash Join
+ Project: (a.data || b.data); JIT-Expr: evalexpr_0_3, JIT-Deform-Outer: deform_0_5, JIT-Deform-Inner: deform_0_4
+ Inner Unique: true
+ Hash Cond: (a.id = b.id); JIT-Expr: evalexpr_0_6, JIT-Deform-Outer: deform_0_8, JIT-Deform-Inner: deform_0_7
+ Outer Hash Key: a.id; JIT-Expr: evalexpr_0_9, JIT-Deform-Outer: deform_0_10
+ -> Seq Scan on public.jittest_simple a
+ Output: a.id, a.data
+ -> Hash
+ Output: b.data, b.id
+ Hash Key: b.id; JIT-Expr: evalexpr_0_2
+ -> Seq Scan on public.jittest_simple b
+ Project: b.data, b.id; JIT-Expr: evalexpr_0_0, JIT-Deform-Scan: deform_0_1
+ JIT:
+ Functions: 11 (5 for expression evaluation, 6 for tuple deforming)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(15 rows)
+
+SELECT a.data || b.data FROM jittest_simple a JOIN jittest_simple b USING(id);
+ ?column?
+----------
+ row1row1
+ row2row2
+(2 rows)
+
+COMMIT;
+BEGIN;
+SET LOCAL enable_hashjoin = false;
+SET LOCAL enable_mergejoin = true;
+SET LOCAL enable_nestloop = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT a.data || b.data FROM jittest_simple a JOIN jittest_simple b USING(id);
+ QUERY PLAN
+-------------------------------------------------------------------------------------------------------------------
+ Merge Join
+ Project: (a.data || b.data); JIT-Expr: evalexpr_0_0, JIT-Deform-Outer: deform_0_2, JIT-Deform-Inner: deform_0_1
+ Inner Unique: true
+ Merge Cond: (a.id = b.id)
+ -> Index Scan using jittest_simple_pkey on public.jittest_simple a
+ Output: a.id, a.data
+ -> Index Scan using jittest_simple_pkey on public.jittest_simple b
+ Output: b.id, b.data
+ JIT:
+ Functions: 7 (3 for expression evaluation, 4 for tuple deforming)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(11 rows)
+
+SELECT a.data || b.data FROM jittest_simple a JOIN jittest_simple b USING(id);
+ ?column?
+----------
+ row1row1
+ row2row2
+(2 rows)
+
+COMMIT;
+BEGIN;
+SET LOCAL enable_hashjoin = false;
+SET LOCAL enable_mergejoin = false;
+SET LOCAL enable_nestloop = true;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT a.data || b.data FROM jittest_simple a JOIN jittest_simple b USING(id);
+ QUERY PLAN
+-------------------------------------------------------------------------------------------------------------------
+ Nested Loop
+ Project: (a.data || b.data); JIT-Expr: evalexpr_0_2, JIT-Deform-Outer: deform_0_4, JIT-Deform-Inner: deform_0_3
+ Inner Unique: true
+ -> Seq Scan on public.jittest_simple a
+ Output: a.id, a.data
+ -> Index Scan using jittest_simple_pkey on public.jittest_simple b
+ Output: b.id, b.data
+ Index Cond: (b.id = a.id); JIT-Expr: evalexpr_0_0, JIT-Deform-Scan: deform_0_1
+ JIT:
+ Functions: 5 (2 for expression evaluation, 3 for tuple deforming)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(11 rows)
+
+SELECT a.data || b.data FROM jittest_simple a JOIN jittest_simple b USING(id);
+ ?column?
+----------
+ row1row1
+ row2row2
+(2 rows)
+
+COMMIT;
+-- check that inner/outer tuple deforming can be inferred for upper nodes, agg case
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT count(*), count(data), string_agg(data, ':') FROM jittest_simple;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Aggregate
+ Project: count(*), count(data), string_agg(data, ':'::text); JIT-Expr: evalexpr_0_0
+ Phase 1 using strategy "All":
+ Transition Function: int8inc(TRANS), int8inc_any(TRANS, data), string_agg_transfn(TRANS, data, ':'::text); JIT-Expr: evalexpr_0_1, JIT-Deform-Outer: deform_0_2
+ All Group
+ -> Seq Scan on public.jittest_simple
+ Output: id, data
+ JIT:
+ Functions: 3 (2 for expression evaluation, 1 for tuple deforming)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(10 rows)
+
+SELECT count(*), count(data), string_agg(data, ':') FROM jittest_simple;
+ count | count | string_agg
+-------+-------+------------
+ 2 | 2 | row1:row2
+(1 row)
+
+-- Check that the hash-table equality function in a hash-aggregate can
+-- be accelerated.
+--
+-- XXX: Unfortunately this is currently broken
+BEGIN;
+SET LOCAL enable_hashagg = true;
+SET LOCAL enable_sort = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT data, string_agg(id::text, ', ') FROM jittest_simple GROUP BY data;
+ QUERY PLAN
+----------------------------------------------------------------------------------------------------------------------------------
+ HashAggregate
+ Project: data, string_agg((id)::text, ', '::text); JIT-Expr: evalexpr_0_0, JIT-Deform-Outer: deform_0_1
+ Phase 0 using strategy "Hash":
+ Transition Function: string_agg_transfn(TRANS, (id)::text, ', '::text); JIT-Expr: evalexpr_0_5, JIT-Deform-Outer: deform_0_6
+ Hash Group: jittest_simple.data; JIT-Expr: evalexpr_0_2, JIT-Deform-Outer: deform_0_4, JIT-Deform-Inner: deform_0_3
+ -> Seq Scan on public.jittest_simple
+ Output: id, data
+ JIT:
+ Functions: 7 (3 for expression evaluation, 4 for tuple deforming)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(10 rows)
+
+SELECT data, string_agg(id::text, ', ') FROM jittest_simple GROUP BY data;
+ data | string_agg
+------+------------
+ row1 | 1
+ row2 | 2
+(2 rows)
+
+END;
+-- Unfortunately for sort based aggregates, the group comparison
+-- function can currently not be JITed
+BEGIN;
+SET LOCAL enable_hashagg = false;
+SET LOCAL enable_sort = true;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT data, string_agg(id::text, ', ') FROM jittest_simple GROUP BY data;
+ QUERY PLAN
+----------------------------------------------------------------------------------------------------------------------------------
+ GroupAggregate
+ Project: data, string_agg((id)::text, ', '::text); JIT-Expr: evalexpr_0_2, JIT-Deform-Outer: deform_0_3
+ Phase 1 using strategy "Sorted Input":
+ Transition Function: string_agg_transfn(TRANS, (id)::text, ', '::text); JIT-Expr: evalexpr_0_5, JIT-Deform-Outer: deform_0_6
+ Sorted Input Group: jittest_simple.data; JIT-Expr: evalexpr_0_4, JIT-Deform-Outer: false, JIT-Deform-Inner: false
+ -> Sort
+ Output: data, id
+ Sort Key: jittest_simple.data
+ -> Seq Scan on public.jittest_simple
+ Project: data, id; JIT-Expr: evalexpr_0_0, JIT-Deform-Scan: deform_0_1
+ JIT:
+ Functions: 7 (4 for expression evaluation, 3 for tuple deforming)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(13 rows)
+
+SELECT data, string_agg(id::text, ', ') FROM jittest_simple GROUP BY data;
+ data | string_agg
+------+------------
+ row1 | 1
+ row2 | 2
+(2 rows)
+
+END;
+-- check that EXPLAIN ANALYZE output is reproducible with the right options
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS, ANALYZE, TIMING OFF, SUMMARY OFF) SELECT tableoid FROM jittest_simple;
+ QUERY PLAN
+---------------------------------------------------------------------------------
+ Seq Scan on public.jittest_simple (actual rows=2 loops=1)
+ Project: tableoid; JIT-Expr: evalexpr_0_0
+ JIT:
+ Functions: 1 (1 for expression evaluation)
+ Options: Inlining false, Optimization false, Expressions true, Deforming true
+(5 rows)
+
+DROP TABLE jittest_simple;
diff --git a/src/test/regress/expected/jit_0.out b/src/test/regress/expected/jit_0.out
new file mode 100644
index 00000000000..9812cb33752
--- /dev/null
+++ b/src/test/regress/expected/jit_0.out
@@ -0,0 +1,5 @@
+/* skip test if JIT is not available */
+SELECT NOT (pg_jit_available() AND current_setting('jit')::bool)
+ AS skip_test \gset
+\if :skip_test
+\quit
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index fc0f14122bb..c1c3dd3af8b 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -78,7 +78,7 @@ test: brin gin gist spgist privileges init_privs security_label collate matview
# ----------
# Another group of parallel tests
# ----------
-test: create_table_like alter_generic alter_operator misc async dbsize misc_functions sysviews tsrf tidscan collate.icu.utf8
+test: create_table_like alter_generic alter_operator misc async dbsize misc_functions sysviews tsrf tidscan collate.icu.utf8 jit
# rules cannot run concurrently with any test that creates
# a view or rule in the public schema
diff --git a/src/test/regress/sql/jit.sql b/src/test/regress/sql/jit.sql
new file mode 100644
index 00000000000..eb617c0ca58
--- /dev/null
+++ b/src/test/regress/sql/jit.sql
@@ -0,0 +1,166 @@
+/* skip test if JIT is not available */
+SELECT NOT (pg_jit_available() AND current_setting('jit')::bool)
+ AS skip_test \gset
+\if :skip_test
+\quit
+\endif
+
+-- start with a known baseline
+set jit_expressions = true;
+set jit_tuple_deforming = true;
+-- to reliably test, despite costs varying between platforms
+set jit_above_cost = 0;
+-- to make the bulk of the test cheaper
+set jit_optimize_above_cost = -1;
+set jit_inline_above_cost = -1;
+
+CREATE TABLE jittest_simple(id serial primary key, data text);
+INSERT INTO jittest_simple(data) VALUES('row1');
+INSERT INTO jittest_simple(data) VALUES('row2');
+
+-- verify that a simple relation-less query can be JITed
+BEGIN;
+SET LOCAL jit = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT txid_current() = txid_current();
+SELECT txid_current() = txid_current();
+COMMIT;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT txid_current() = txid_current();
+SELECT txid_current() = txid_current();
+
+
+-- that tuple deforming for a plain seqscan is JITed when projecting
+BEGIN;
+SET LOCAL jit = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT data FROM jittest_simple;
+SELECT data FROM jittest_simple;
+COMMIT;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT data FROM jittest_simple;
+SELECT data FROM jittest_simple;
+
+-- unfortunately currently the physical tlist optimization may prevent
+-- JITed tuple deforming from taking effect
+BEGIN;
+SET LOCAL jit = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT * FROM jittest_simple;
+SELECT * FROM jittest_simple;
+COMMIT;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT * FROM jittest_simple;
+SELECT * FROM jittest_simple;
+
+-- check that tuple deforming on wide tables works
+BEGIN;
+SET LOCAL jit_tuple_deforming = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT firstc, lastc FROM extra_wide_table;
+SELECT firstc, lastc FROM extra_wide_table;
+COMMIT;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT firstc, lastc FROM extra_wide_table;
+SELECT firstc, lastc FROM extra_wide_table;
+
+-----
+-- test costing
+-----
+
+-- don't perform JIT compilation unless worthwhile
+BEGIN;
+SET LOCAL jit_above_cost = 8000000000;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+SET LOCAL enable_seqscan = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+COMMIT;
+
+-- optimize once expensive enough
+BEGIN;
+SET LOCAL jit_above_cost = 0;
+SET LOCAL jit_optimize_above_cost = 8000000000;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+SET LOCAL enable_seqscan = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+COMMIT;
+
+-- behave sanely if optimization cost is below general JIT costs
+BEGIN;
+SET LOCAL jit_above_cost = 8000000000;
+SET LOCAL jit_optimize_above_cost = 0;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+SET LOCAL enable_seqscan = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+COMMIT;
+
+-- perform inlining once expensive enough
+BEGIN;
+SET LOCAL jit_above_cost = 0;
+SET LOCAL jit_inline_above_cost = 8000000000;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+SET LOCAL enable_seqscan = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+COMMIT;
+
+-- perform inlining once expensive enough
+BEGIN;
+SET LOCAL jit_above_cost = 0;
+SET LOCAL jit_inline_above_cost = 8000000000;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+SET LOCAL enable_seqscan = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+COMMIT;
+
+
+-- perform inlining and optimization once expensive enough
+BEGIN;
+SET LOCAL jit_above_cost = 0;
+SET LOCAL jit_inline_above_cost = 8000000000;
+SET LOCAL jit_optimize_above_cost = 8000000000;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+SET LOCAL enable_seqscan = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT tableoid FROM jittest_simple;
+COMMIT;
+
+-- check that inner/outer tuple deforming can be inferred for upper nodes, join case
+BEGIN;
+SET LOCAL enable_hashjoin = true;
+SET LOCAL enable_mergejoin = false;
+SET LOCAL enable_nestloop = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT a.data || b.data FROM jittest_simple a JOIN jittest_simple b USING(id);
+SELECT a.data || b.data FROM jittest_simple a JOIN jittest_simple b USING(id);
+COMMIT;
+BEGIN;
+SET LOCAL enable_hashjoin = false;
+SET LOCAL enable_mergejoin = true;
+SET LOCAL enable_nestloop = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT a.data || b.data FROM jittest_simple a JOIN jittest_simple b USING(id);
+SELECT a.data || b.data FROM jittest_simple a JOIN jittest_simple b USING(id);
+COMMIT;
+BEGIN;
+SET LOCAL enable_hashjoin = false;
+SET LOCAL enable_mergejoin = false;
+SET LOCAL enable_nestloop = true;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT a.data || b.data FROM jittest_simple a JOIN jittest_simple b USING(id);
+SELECT a.data || b.data FROM jittest_simple a JOIN jittest_simple b USING(id);
+COMMIT;
+
+-- check that inner/outer tuple deforming can be inferred for upper nodes, agg case
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT count(*), count(data), string_agg(data, ':') FROM jittest_simple;
+SELECT count(*), count(data), string_agg(data, ':') FROM jittest_simple;
+
+-- Check that the equality hash-table function in a hash-aggregate can
+-- be accelerated.
+BEGIN;
+SET LOCAL enable_hashagg = true;
+SET LOCAL enable_sort = false;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT data, string_agg(id::text, ', ') FROM jittest_simple GROUP BY data;
+SELECT data, string_agg(id::text, ', ') FROM jittest_simple GROUP BY data;
+END;
+
+-- Unfortunately for sort based aggregates, the group comparison
+-- function can currently not be JITed
+BEGIN;
+SET LOCAL enable_hashagg = false;
+SET LOCAL enable_sort = true;
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS) SELECT data, string_agg(id::text, ', ') FROM jittest_simple GROUP BY data;
+SELECT data, string_agg(id::text, ', ') FROM jittest_simple GROUP BY data;
+END;
+
+-- check that EXPLAIN ANALYZE output is reproducible with the right options
+EXPLAIN (VERBOSE, COSTS OFF, JIT_DETAILS, ANALYZE, TIMING OFF, SUMMARY OFF) SELECT tableoid FROM jittest_simple;
+
+DROP TABLE jittest_simple;
--
2.23.0.385.gbc12974a89
On Mon, Oct 28, 2019 at 5:02 PM Andres Freund <andres@anarazel.de> wrote:
What I dislike about that is that it basically again is introducing
"again"? Am I missing some history here? I'd love to read up on this
if there are mistakes to learn from.
something that requires either pattern matching on key names (i.e. a key
of '(.*) JIT' is one that has information about JIT, and the associated
expression is in key $1), or knowing all the potential keys an
expression could be in.
That still seems less awkward than having to handle a Filter field
that's either scalar or a group. Most current EXPLAIN options just add
additional fields to the structured plan instead of modifying it, no?
If that output is better enough, though, maybe we should just always
make Filter a group and go with the breaking change? If tooling
authors need to treat this case specially anyway, might as well evolve
the format.
Another alternative would be to just remove the 'Output' line when a
node doesn't project - it can't really carry meaning in those cases
anyway?
¯\_(ツ)_/¯
For what it's worth, I certainly wouldn't miss it.
Hi,
On 2019-11-12 13:42:10 -0800, Maciek Sakrejda wrote:
On Mon, Oct 28, 2019 at 5:02 PM Andres Freund <andres@anarazel.de> wrote:
What I dislike about that is that it basically again is introducing
"again"? Am I missing some history here? I'd love to read up on this
if there are mistakes to learn from.
I think I was mostly referring to mistakes we've made for the json etc
key names. By e.g. having expressions as "Function Call", "Table
Function Call", "Filter", "TID Cond", ... a tool that wants to interpret
the output needs awareness of all of these different names, rather than
knowing that everything with a sub-group "Expression" has to be an
expression.
I.e. instead of
"Plan": {
"Node Type": "Seq Scan",
"Parallel Aware": false,
"Relation Name": "pg_class",
"Schema": "pg_catalog",
"Alias": "pg_class",
"Startup Cost": 0.00,
"Total Cost": 17.82,
"Plan Rows": 385,
"Plan Width": 68,
"Output": ["relname", "tableoid"],
"Filter": "(pg_class.relname <> 'foo'::name)"
}
we ought to have gone for
"Plan": {
"Node Type": "Seq Scan",
"Parallel Aware": false,
"Relation Name": "pg_class",
"Schema": "pg_catalog",
"Alias": "pg_class",
"Startup Cost": 0.00,
"Total Cost": 17.82,
"Plan Rows": 385,
"Plan Width": 68,
"Output": ["relname", "tableoid"],
"Filter": {"Expression" : { "text": (pg_class.relname <> 'foo'::name)"}}
}
or something like that. Which'd then make it obvious how to add
information about JIT to each expression.
Whereas the proposal of the separate key name perpetuates the
messiness...
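To make the contrast concrete, here is a rough sketch (in Python, with abbreviated plan dicts; the key names follow the examples in this thread and are illustrative, not an actual proposal) of what a consuming tool has to do under each shape:

```python
# Flat shape: the tool must enumerate every key that can hold an
# expression string, and the list grows with every new node type.
EXPRESSION_KEYS = {"Filter", "Index Cond", "TID Cond", "Join Filter",
                   "Merge Cond", "Hash Cond", "One-Time Filter"}

def expressions_flat(plan):
    return {k: v for k, v in plan.items() if k in EXPRESSION_KEYS}

# Nested shape: anything carrying an "Expression" sub-group is an
# expression, and new fields (JIT info, a normalized form, ...) can be
# added next to "text" without breaking this loop.
def expressions_nested(plan):
    return {k: v["Expression"]["text"] for k, v in plan.items()
            if isinstance(v, dict) and "Expression" in v}

flat = {"Node Type": "Seq Scan",
        "Filter": "(pg_class.relname <> 'foo'::name)"}
nested = {"Node Type": "Seq Scan",
          "Filter": {"Expression": {"text": "(pg_class.relname <> 'foo'::name)"}}}

print(expressions_flat(flat))    # both yield the Filter expression
print(expressions_nested(nested))
```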
something that requires either pattern matching on key names (i.e. a key
of '(.*) JIT' is one that has information about JIT, and the associated
expression is in key $1), or knowing all the potential keys an
expression could be in.
That still seems less awkward than having to handle a Filter field
that's either scalar or a group.
Yea, it's a sucky option :(
Most current EXPLAIN options just add
additional fields to the structured plan instead of modifying it, no?
If that output is better enough, though, maybe we should just always
make Filter a group and go with the breaking change? If tooling
authors need to treat this case specially anyway, might as well evolve
the format.
Yea, maybe that's the right thing to do. Would be nice to have some more
input...
Greetings,
Andres Freund
On Mon, Oct 28, 2019 at 7:21 PM Andres Freund <andres@anarazel.de> wrote:
Because that's the normal way to represent something non-existing for
formats like json? There's a lot of information we show always for !text
format, even if not really applicable to the context (e.g. Triggers for
select statements). I think there's an argument to be made to deviate in
this case, but I don't think it's obvious.
I've consistently been of the view that anyone who thinks that the
FORMAT option should affect what information gets displayed doesn't
understand the meaning of the word "format." And I still feel that
way.
I also think that conditionally renaming "Output" to "Project" is a
super-bad idea. The idea of a format like this is that the "keys" stay
constant and the values change. If you need to tell people more, you
add more keys.
I also think that making the Filter field a group conditionally is a
bad idea, for similar reasons. But making it always be a group doesn't
necessarily seem like a bad idea. I think, though, that you could
handle this in other ways, like by suffixing existing keys. e.g. if
you've got Index-Qual and Filter, just do Index-Qual-JIT and
Filter-JIT and call it good.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
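For what the suffixing approach suggested above implies on the consumer side, a rough, hypothetical sketch: a tool re-associates the JIT info with its base key by pattern matching on key names, essentially the '(.*) JIT' matching described upthread.

```python
import re

# Hypothetical sketch: with suffixed keys like "Filter-JIT", a consumer
# merges the JIT annotation back into the field it belongs to.
def merge_jit_suffix(fields):
    merged = {}
    for key, value in fields.items():
        m = re.fullmatch(r"(.*)-JIT", key)
        if m:
            merged.setdefault(m.group(1), {})["jit"] = value
        else:
            merged.setdefault(key, {})["text"] = value
    return merged

node = {"Filter": "(id > 10)", "Filter-JIT": True}
print(merge_jit_suffix(node))
# {'Filter': {'text': '(id > 10)', 'jit': True}}
```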
Hi,
On 2019-11-13 14:29:07 -0500, Robert Haas wrote:
On Mon, Oct 28, 2019 at 7:21 PM Andres Freund <andres@anarazel.de> wrote:
Because that's the normal way to represent something non-existing for
formats like json? There's a lot of information we show always for !text
format, even if not really applicable to the context (e.g. Triggers for
select statements). I think there's an argument to be made to deviate in
this case, but I don't think it's obvious.
I've consistently been of the view that anyone who thinks that the
FORMAT option should affect what information gets displayed doesn't
understand the meaning of the word "format." And I still feel that
way.
Well, it's not been that way since the format option was added, so ...
I also think that conditionally renaming "Output" to "Project" is a
super-bad idea. The idea of a format like this is that the "keys" stay
constant and the values change. If you need to tell people more, you
add more keys.
Yea, I don't like the compat break either. But I'm not so convinced
that just continuing to collect cruft because of compatibility is worth
it - I just don't see an all that high reliance interest for explain
output.
I think adding a new key is somewhat ok for !text, but for text that
doesn't seem like an easy solution?
I kind of like my idea somewhere downthread, in a reply to Maciek, of
simply not listing "Output" for nodes that don't project. While that's
still a format break, it seems that tools already need to deal with
"Output" not being present?
I also think that making the Filter field a group conditionally is a
bad idea, for similar reasons.
Oh, yea, it's utterly terrible (I called it crappy in my email :)).
But making it always be a group doesn't necessarily seem like a bad
idea. I think, though, that you could handle this in other ways, like
by suffixing existing keys. e.g. if you've got Index-Qual and Filter,
just do Index-Qual-JIT and Filter-JIT and call it good.
Maciek suggested the same. But to me it seems going down that way will
make the format harder and harder to understand? So I think I'd rather
break compat here, and go for a group.
Personally I think the group naming choice for explain makes the
!text outputs much less useful than they could be - we basically force
every tool to understand all possible keys, to make sense of formatted
output. Instead of something like 'Filter: {"Qual":{"text" : "...",
"JIT": ...}' where a tool only needed to understand that everything that
has a "Qual" inside is a filtering expression, everything that has a
"Project" is a projecting type of expression, ... a tool needs to know
about "Inner Cond", "Order By", "Filter", "Recheck Cond", "TID Cond",
"Join Filter", "Merge Cond", "Hash Cond", "One-Time Filter", ...
Greetings,
Andres Freund
On Wed, Nov 13, 2019 at 3:03 PM Andres Freund <andres@anarazel.de> wrote:
Well, it's not been that way since the format option was added, so ...
It was pretty close in the original version, but people keep trying to
be clever.
I also think that conditionally renaming "Output" to "Project" is a
super-bad idea. The idea of a format like this is that the "keys" stay
constant and the values change. If you need to tell people more, you
add more keys.
Yea, I don't like the compat break either. But I'm not so convinced
that just continuing to collect cruft because of compatibility is worth
it - I just don't see an all that high reliance interest for explain
output.
I think adding a new key is somewhat ok for !text, but for text that
doesn't seem like an easy solution?
I kind of like my idea somewhere downthread, in a reply to Maciek, of
simply not listing "Output" for nodes that don't project. While that's
still a format break, it seems that tools already need to deal with
"Output" not being present?
Yes, I think leaving out Output for a node that doesn't Project would
be fine, as long as we're consistent about it.
But making it always be a group doesn't necessarily seem like a bad
idea. I think, though, that you could handle this in other ways, like
by suffixing existing keys. e.g. if you've got Index-Qual and Filter,
just do Index-Qual-JIT and Filter-JIT and call it good.
Maciek suggested the same. But to me it seems going down that way will
make the format harder and harder to understand? So I think I'd rather
break compat here, and go for a group.
Personally, I don't care very much about backward-compatibility, or
about how hard it is for tools to parse. I want it to be possible, but
if it takes a little extra effort, so be it. My main concern is having
the text output look good to human beings, because that is the primary
format and they are the primary consumers.
Personally I think the group naming choice for explain makes the
!text outputs much less useful than they could be - we basically force
every tool to understand all possible keys, to make sense of formatted
output. Instead of something like 'Filter: {"Qual":{"text" : "...",
"JIT": ...}' where a tool only needed to understand that everything that
has a "Qual" inside is a filtering expression, everything that has a
"Project" is a projecting type of expression, ... a tool needs to know
about "Inner Cond", "Order By", "Filter", "Recheck Cond", "TID Cond",
"Join Filter", "Merge Cond", "Hash Cond", "One-Time Filter", ...
It's not that long of a list, and I don't know of a tool that tries to
do something in particular with all of those types of things anyway.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Fri, Nov 15, 2019 at 5:49 AM Robert Haas <robertmhaas@gmail.com> wrote:
Personally, I don't care very much about backward-compatibility, or
about how hard it is for tools to parse. I want it to be possible, but
if it takes a little extra effort, so be it.
I think these are two separate issues. I agree on
backward-compatibility (especially if we can embed a server version in
structured EXPLAIN output to make it easier for tools to track format
differences), but not caring how hard it is for tools to parse? What's
the point of structured formats, then?
My main concern is having
the text output look good to human beings, because that is the primary
format and they are the primary consumers.
Structured output is also for human beings, albeit indirectly. That
text is the primary format may be more of a reflection of the
difficulty of building and integrating EXPLAIN tools than its inherent
superiority (that said, I'll concede it's a concise and elegant format
for what it does). What if psql supported an EXPLAINER like it does
EDITOR?
For what it's worth, after thinking about this a bit, I'd like to see
structured EXPLAIN evolve into a more consistent format, even if it
means breaking changes (and I do think a version specifier at the root
of the plan would make this easier).
Maciek Sakrejda <m.sakrejda@gmail.com> writes:
On Fri, Nov 15, 2019 at 5:49 AM Robert Haas <robertmhaas@gmail.com> wrote:
Personally, I don't care very much about backward-compatibility, or
about how hard it is for tools to parse. I want it to be possible, but
if it takes a little extra effort, so be it.
I think these are two separate issues. I agree on
backward-compatibility (especially if we can embed a server version in
structured EXPLAIN output to make it easier for tools to track format
differences), but not caring how hard it is for tools to parse? What's
the point of structured formats, then?
I'd not been paying any attention to this thread, but Andres just
referenced it in another discussion, so I went back and read it.
Here's my two cents:
* I agree with Robert that conditionally changing "Output" to "Project" is
an absolutely horrid idea. That will break every tool that looks at this
stuff, and it just flies in the face of the design principle that the
output schema should be stable, and it'll be a long term pain-in-the-rear
for regression test back-patching, and it will confuse users much more than
it will help them. The other idea of suppressing "Output" in cases where
no projection is happening might be all right, but only in text format
where we don't worry about schema stability. Another idea perhaps is
to emit "Output: all columns" (in text formats, less sure what to do in
structured formats).
* In the structured formats, I think it should be okay to convert
expression-ish fields from being raw strings to being {Expression}
sub-nodes with the raw string as one field. Aside from making it easy
to inject JIT info, that would also open the door to someday showing
expressions in some more-parse-able format than a string, since other
representations could also be added as new fields. (I have a vague
recollection of wanting a list of all the Vars used in an expression,
for example.)
* Unfortunately that does nothing for the problem of how to show
per-expression JIT info in text format. Maybe we just shouldn't.
I do not think that the readability-vs-usefulness tradeoff is going
to be all that good there, anyway. Certainly for testing purposes
it's going to be more useful to examine portions of a structured output.
* I'm not on board with the idea of adding a version number to the
structured output formats. In the first place, it's too late, since
we didn't leave room for one to begin with. In the second, an overall
version number just isn't very helpful for this sort of problem. If a
tool sees a version number higher than the latest thing it knows, what's
it supposed to do, just fail? In practice it could still extract an awful
lot of info, so that really isn't a desirable answer. It's better if the
data structure is such that a tool can understand that some sub-part of
the data is something it can't interpret, and just ignore that part.
(This is more or less the same design principle that PNG image format
was built on, FWIW.) Adding on fields to an existing node type easily
meets that requirement, as does inventing new sub-node types, and that's
all that we've done so far. But I think that replacing a scalar field
value with a sub-node probably works too (at least for well-written
tools), so the expression change suggested above should be OK.
regards, tom lane
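The forward-compatible reading Tom describes (treat the field as a plain string if that is what it is, otherwise look inside the sub-node and ignore any sibling fields the tool does not know) might look like this in a consumer, assuming the nested {"Expression": {"text": ...}} shape proposed upthread:

```python
def expression_text(value):
    """Read an expression field that may be a plain string (today's
    format) or an {"Expression": {...}} sub-node (proposed format).
    Unknown sibling fields are simply ignored, in the PNG-like spirit
    of extending by adding rather than redefining."""
    if isinstance(value, str):
        return value
    if isinstance(value, dict) and "Expression" in value:
        return value["Expression"].get("text")
    return None  # not an expression field this tool understands

old_style = "(pg_class.relname <> 'foo'::name)"
new_style = {"Expression": {"text": "(pg_class.relname <> 'foo'::name)",
                            "JIT": {"Expr": "evalexpr_0_0"}}}

# A well-written tool gets the same answer from both shapes.
assert expression_text(old_style) == expression_text(new_style)
```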
Hi,
On 2020-01-27 12:15:53 -0500, Tom Lane wrote:
Maciek Sakrejda <m.sakrejda@gmail.com> writes:
On Fri, Nov 15, 2019 at 5:49 AM Robert Haas <robertmhaas@gmail.com> wrote:
Personally, I don't care very much about backward-compatibility, or
about how hard it is for tools to parse. I want it to be possible, but
if it takes a little extra effort, so be it.
I think these are two separate issues. I agree on
backward-compatibility (especially if we can embed a server version in
structured EXPLAIN output to make it easier for tools to track format
differences), but not caring how hard it is for tools to parse? What's
the point of structured formats, then?
I'd not been paying any attention to this thread, but Andres just
referenced it in another discussion, so I went back and read it.
Here's my two cents:
* I agree with Robert that conditionally changing "Output" to "Project" is
an absolutely horrid idea.
Yea, I think I'm convinced on that front. I never liked the idea, and
the opposition has been pretty unanimous...
That will break every tool that looks at this stuff, and it just flies
in the face of the design principle that the output schema should be
stable, and it'll be a long term pain-in-the-rear for regression test
back-patching, and it will confuse users much more than it will help
them. The other idea of suppressing "Output" in cases where no
projection is happening might be all right, but only in text format
where we don't worry about schema stability. Another idea perhaps is
to emit "Output: all columns" (in text formats, less sure what to do
in structured formats).
I think I like the "all columns" idea. Not what I'd do on a green field,
but...
If we were just dealing with the XML format, we could just add a
<Projecting>True/False</Projecting>
to the current
<Output>
<Item>a</Item>
<Item>b</Item>
...
</Output>
and it'd make plenty sense. but for json's
"Output": ["a", "b"]
and yaml's
Output:
- "a"
- "b"
that's not an option as far as I can tell. Not sure what to do about
that.
* In the structured formats, I think it should be okay to convert
expression-ish fields from being raw strings to being {Expression}
sub-nodes with the raw string as one field. Aside from making it easy
to inject JIT info, that would also open the door to someday showing
expressions in some more-parse-able format than a string, since other
representations could also be added as new fields. (I have a vague
recollection of wanting a list of all the Vars used in an expression,
for example.)
Cool. Being extendable seems like a good direction. That's what I
primarily dislike about the various work-arounds for how to associate
information about JIT by a "related" name.
That'd e.g. open the door to have both a normalized and an original
expression in the explain output. Which would be quite valuable for
some monitoring tools.
* Unfortunately that does nothing for the problem of how to show
per-expression JIT info in text format. Maybe we just shouldn't.
I do not think that the readability-vs-usefulness tradeoff is going
to be all that good there, anyway. Certainly for testing purposes
it's going to be more useful to examine portions of a structured output.
I think I can live with that, I don't think it's going to be a very
commonly used option. It's basically useful for regression tests, JIT
improvements, and people that want to see whether they can change their
query / schema to make better use of JIT - the latter category won't be
many, I think.
Since this is going to be a default off option anyway, I don't think
we'd need to be as concerned with compatibility. But even leaving
compatibility aside, it's not that clear how to best attach information
in the current text format, without being confusing.
Greetings,
Andres Freund
On Fri, Nov 15, 2019 at 8:05 PM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:
On Fri, Nov 15, 2019 at 5:49 AM Robert Haas <robertmhaas@gmail.com> wrote:
Personally, I don't care very much about backward-compatibility, or
about how hard it is for tools to parse. I want it to be possible, but
if it takes a little extra effort, so be it.
I think these are two separate issues. I agree on
backward-compatibility (especially if we can embed a server version in
structured EXPLAIN output to make it easier for tools to track format
differences), but not caring how hard it is for tools to parse? What's
the point of structured formats, then?
To make the data easy to parse. :-)
I mean, it's clear that, on the one hand, having a format like JSON
that, as has recently been pointed out elsewhere, is parsable by a
wide variety of tools, is advantageous. However, I don't think it
really matters whether somebody's got to look at a tag called
Flump and match it up with the data in another tag called JIT-Flump,
or whether there's a Flump group that has RegularStuff and JIT tags
inside of it. There's just not much difference in the effort involved.
Being able to parse the JSON or XML using generic code is enough of a
win that the details shouldn't matter that much.
I think if you were going to complain about the limitations of our
current EXPLAIN output format, it'd make a lot more sense to focus on
the way we output expressions. If you want to mechanically parse one
of those expressions and figure out what it's doing - what functions
or operators are involved, and to what they are being applied - you
are probably out of luck altogether, and you are certainly not going
to have an easy time of it. I'm not saying we have to solve that
problem, but I believe it's a much bigger nuisance than the sort of
thing we are talking about here.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Mon, Jan 27, 2020 at 12:41 PM Andres Freund <andres@anarazel.de> wrote:
I do not think that the readability-vs-usefulness tradeoff is going
to be all that good there, anyway. Certainly for testing purposes
it's going to be more useful to examine portions of a structured output.
I think I can live with that, I don't think it's going to be a very
commonly used option. It's basically useful for regression tests, JIT
improvements, and people that want to see whether they can change their
query / schema to make better use of JIT - the latter category won't be
many, I think.
I intensely dislike having information that we can't show in the text
format, or really, that we can't show in every format.
I might be outvoted, but I stand by that position.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes:
I do not think that the readability-vs-usefulness tradeoff is going
to be all that good there, anyway. Certainly for testing purposes
it's going to be more useful to examine portions of a structured output.
I intensely dislike having information that we can't show in the text
format, or really, that we can't show in every format.
Well, if it's relegated to a "jit = detail" option or some such,
the readability objection could be overcome. But I'm still not clear
on how you'd physically wedge it into the output, at least not in a way
that matches up with the proposal that non-text modes handle this stuff
by producing sub-nodes for the existing types of expression fields.
regards, tom lane
On Mon, Jan 27, 2020 at 11:01 AM Robert Haas <robertmhaas@gmail.com> wrote:
On Fri, Nov 15, 2019 at 8:05 PM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:
On Fri, Nov 15, 2019 at 5:49 AM Robert Haas <robertmhaas@gmail.com> wrote:
Personally, I don't care very much about backward-compatibility, or
about how hard it is for tools to parse. I want it to be possible, but
if it takes a little extra effort, so be it.

I think these are two separate issues. I agree on
backward-compatibility (especially if we can embed a server version in
structured EXPLAIN output to make it easier for tools to track format
differences), but not caring how hard it is for tools to parse? What's
the point of structured formats, then?

To make the data easy to parse. :-)
I mean, it's clear that, on the one hand, having a format like JSON
that, as has recently been pointed out elsewhere, is parsable by a
wide variety of tools, is advantageous. However, I don't think it
really matters whether somebody's got to look at a tag called
Flump and match it up with the data in another tag called JIT-Flump,
or whether there's a Flump group that has RegularStuff and JIT tags
inside of it. There's just not much difference in the effort involved.
Being able to parse the JSON or XML using generic code is enough of a
win that the details shouldn't matter that much.
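To illustrate the point about generic parsing: walking the plan tree out of EXPLAIN (FORMAT JSON) output takes only a few lines, regardless of which exact keys each node carries. A minimal sketch (the plan document below is a hand-trimmed, hypothetical example, not real server output):

```python
import json

# Hypothetical, trimmed EXPLAIN (FORMAT JSON) output: a top-level list
# containing a "Plan" object, with child nodes nested under "Plans".
explain_json = '''
[
  {
    "Plan": {
      "Node Type": "Aggregate",
      "Plans": [
        {
          "Node Type": "Seq Scan",
          "Relation Name": "lineitem",
          "Filter": "(l_shipdate <= '1998-09-18'::date)"
        }
      ]
    }
  }
]
'''

def walk(plan, depth=0):
    """Recursively visit a plan tree, yielding (depth, node) pairs."""
    yield depth, plan
    for child in plan.get("Plans", []):
        yield from walk(child, depth + 1)

top = json.loads(explain_json)[0]["Plan"]
for depth, node in walk(top):
    print("  " * depth + node["Node Type"])
```

A tool written this way keeps working whether a node gains a flat "JIT-Flump" key or a nested group; only the code that interprets a specific key needs to know its shape.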
Having a structured EXPLAIN schema that's semantically consistent is
still valuable. At the end of the day, it's humans who are writing the
tools that consume that structured output. Given the sparse structured
EXPLAIN schema documentation, as someone who currently works on
EXPLAIN tooling, I'd prefer a trend toward consistency at the expense
of backward compatibility. (Of course, we should avoid gratuitous
changes.)
But I take back the version number suggestion after reading Tom's
response; that was naïve.
I think if you were going to complain about the limitations of our
current EXPLAIN output format, it'd make a lot more sense to focus on
the way we output expressions.
That would be nice to have, but for what it's worth, my main complaint
would be about documentation (especially around structured formats).
The "Using EXPLAIN" section covers the basics, but understanding what
node types exist, and what fields show up for what nodes and what they
mean--that seems to be a big missing piece (I don't feel entitled to
this documentation; as a structured format consumer, I'm just pointing
out a deficiency). Contrast that with the great wire protocol
documentation. In some ways it's easier to work on native drivers than
on EXPLAIN tooling because the docs are thorough and well organized.
On Mon, Jan 27, 2020 at 4:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Robert Haas <robertmhaas@gmail.com> writes:
I do not think that the readability-vs-usefulness tradeoff is going
to be all that good there, anyway. Certainly for testing purposes
it's going to be more useful to examine portions of a structured output.I intensely dislike having information that we can't show in the text
format, or really, that we can't show in every format.Well, if it's relegated to a "jit = detail" option or some such,
the readability objection could be overcome. But I'm still not clear
on how you'd physically wedge it into the output, at least not in a way
that matches up with the proposal that non-text modes handle this stuff
by producing sub-nodes for the existing types of expression fields.
Well, remember that the text format was the original format. The whole
idea of "groups" was an anachronism that I imposed on the text format
to make it possible to add other formats. It wasn't entirely natural,
because the text format basically indicated nesting by indentation,
and that wasn't going to work for XML or JSON. The text format also
felt free to repeat elements and assume the reader would figure it
out; repeating elements is OK in XML in general, but in JSON it's only
OK if the surrounding context is an array rather than an object.
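The object-vs-array distinction matters in practice: most JSON parsers silently collapse repeated keys in an object, so a text-style repeated element is lossy unless it lives in an array. A quick demonstration with Python's parser:

```python
import json

# Duplicate keys in a JSON object silently collapse: Python's parser
# (like most) keeps only the last value, so the first "Filter" is lost.
dup = json.loads('{"Filter": "a", "Filter": "b"}')
print(dup)  # {'Filter': 'b'}

# An array context preserves repeated entries intact.
arr = json.loads('["a", "b"]')
print(arr)  # ['a', 'b']
```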
Anyway, the point is that I (necessarily) started with whatever we had
and found a way to fit it into a structure. It seems like it ought to
be possible to go the other direction also, and figure out how to make
the structured data look OK as text.
Here's Andres's original example:
"Filter": {
"Expr": "(lineitem.l_shipdate <= '1998-09-18 00:00:00'::timestamp
without time zone)",
"JIT-Expr": "evalexpr_0_2",
"JIT-Deform-Scan": "deform_0_3",
"JIT-Deform-Outer": null,
"JIT-Deform-Inner": null
}
Right now we show:
Filter: (lineitem.l_shipdate <= '1998-09-18 00:00:00'::timestamp
without time zone)
Andres proposed:
Filter: (lineitem.l_shipdate <= '1998-09-18 00:00:00'::timestamp
without time zone); JIT-Expr: evalexpr_0_2, JIT-Deform-Scan:
deform_0_3
That's not ideal because it's all on one line, but that could be changed:
Filter: (lineitem.l_shipdate <= '1998-09-18 00:00:00'::timestamp
without time zone)
JIT-Expr: evalexpr_0_2
JIT-Deform-Scan: deform_0_3
I would propose either including null all the time or omitting it all
the time, so that we would either change the JSON output to...
"Filter": {
"Expr": "(lineitem.l_shipdate <= '1998-09-18 00:00:00'::timestamp
without time zone)",
"JIT-Expr": "evalexpr_0_2",
"JIT-Deform-Scan": "deform_0_3"
}
Or the text output to:
Filter: (lineitem.l_shipdate <= '1998-09-18 00:00:00'::timestamp
without time zone)
JIT-Expr: evalexpr_0_2
JIT-Deform-Scan: deform_0_3
JIT-Deform-Outer: null
JIT-Deform-Inner: null
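For a JSON consumer, the two conventions are close to interchangeable, which supports the view that the exact choice matters less than picking one and sticking to it. A sketch, using the hypothetical key names from the example above:

```python
import json

# Two variants of the proposed "Filter" group: one includes explicit
# nulls for absent JIT functions, the other omits those keys entirely.
with_nulls = json.loads('''{
  "Expr": "(l_shipdate <= '1998-09-18'::date)",
  "JIT-Expr": "evalexpr_0_2",
  "JIT-Deform-Scan": "deform_0_3",
  "JIT-Deform-Outer": null,
  "JIT-Deform-Inner": null
}''')

without_nulls = json.loads('''{
  "Expr": "(l_shipdate <= '1998-09-18'::date)",
  "JIT-Expr": "evalexpr_0_2",
  "JIT-Deform-Scan": "deform_0_3"
}''')

# dict.get() returns None for both an explicit null and a missing key,
# so a tool written this way handles either convention unchanged.
for group in (with_nulls, without_nulls):
    print(group.get("JIT-Deform-Outer"))  # None in both cases
```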
You could argue that this is inconsistent because the JSON format
shows a bunch of keys that are essentially parallel, and this text
format makes the Expr key essentially the primary value and the others
secondary. But since the text format is for human beings, and since
human beings are likely to find the Expr key to be the primary piece
of information, maybe that's totally fine.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company