Gather Merge
Hi hackers,
Attached is the patch to implement Gather Merge. The Gather Merge node
assumes that the results from each worker are ordered with respect to each
other, and does a final merge pass over them, so that we get the top-level
query ordering we want. The final plan for such a query would look
something like this:
Gather Merge
   ->  Sort
         ->  Parallel Seq Scan on foo
               Filter: something
With this we now have a new parallel node which always returns sorted
results, so that any query having a pathkey can build the gather merge
path. Currently, if a query has a pathkey and we want to make it
parallel-aware, the plan is something like this:
Sort
   ->  Gather
         ->  Parallel Seq Scan on foo
               Filter: something
With Gather Merge it is now also possible to have a plan like:
Finalize GroupAggregate
   ->  Gather Merge
         ->  Partial GroupAggregate
               ->  Sort
With Gather Merge, the sort can be pushed below the Gather Merge node.
This is valuable, as it brings very good performance benefits. Consider
the following test:
1) ./db/bin/pgbench postgres -i -F 100 -s 20
2) update pgbench_accounts set filler = 'foo' where aid%10 = 0;
3) vacuum analyze pgbench_accounts;
4) set max_parallel_workers_per_gather = 4;
Without patch:

postgres=# explain analyze select aid, sum(abalance) from pgbench_accounts
where filler like '%foo%' group by aid;
                                                               QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=81696.51..85242.09 rows=202605 width=12) (actual time=1037.212..1162.086 rows=200000 loops=1)
   Group Key: aid
   ->  Sort  (cost=81696.51..82203.02 rows=202605 width=8) (actual time=1037.203..1072.446 rows=200000 loops=1)
         Sort Key: aid
         Sort Method: external sort  Disk: 3520kB
         ->  Seq Scan on pgbench_accounts  (cost=0.00..61066.59 rows=202605 width=8) (actual time=801.398..868.390 rows=200000 loops=1)
               Filter: (filler ~~ '%foo%'::text)
               Rows Removed by Filter: 1800000
 Planning time: 0.133 ms
 Execution time: 1171.656 ms
(10 rows)
With patch:

postgres=# explain analyze select aid, sum(abalance) from pgbench_accounts
where filler like '%foo%' group by aid;
                                                                        QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------
 Finalize GroupAggregate  (cost=47274.13..56644.58 rows=202605 width=12) (actual time=315.457..561.825 rows=200000 loops=1)
   Group Key: aid
   ->  Gather Merge  (cost=47274.13..54365.27 rows=50651 width=0) (actual time=315.451..451.886 rows=200000 loops=1)
         Workers Planned: 4
         Workers Launched: 4
         ->  Partial GroupAggregate  (cost=46274.09..47160.49 rows=50651 width=12) (actual time=306.830..333.908 rows=40000 loops=5)
               Group Key: aid
               ->  Sort  (cost=46274.09..46400.72 rows=50651 width=8) (actual time=306.822..310.800 rows=40000 loops=5)
                     Sort Key: aid
                     Sort Method: quicksort  Memory: 2543kB
                     ->  Parallel Seq Scan on pgbench_accounts  (cost=0.00..42316.15 rows=50651 width=8) (actual time=237.552..255.968 rows=40000 loops=5)
                           Filter: (filler ~~ '%foo%'::text)
                           Rows Removed by Filter: 360000
 Planning time: 0.200 ms
 Execution time: 572.221 ms
(15 rows)
I ran the TPC-H benchmark queries with the patch and found that 7 out of 22
queries end up picking the Gather Merge path. The benchmark numbers below
were taken under the following configuration:
- Scale factor = 10
- max_worker_processes = DEFAULT (8)
- max_parallel_workers_per_gather = 4
- A cold-cache environment was ensured: before every query execution the
  server was stopped and the OS caches were dropped.
- The reported execution times (in ms) are the median of 3 executions.
- power2 machine with 512GB of RAM
- PFA benchmark_machine_info.txt for the benchmark machine's CPU info
Query 4: With GM 7901.480 -> Without GM 9064.776
Query 5: With GM 53452.126 -> Without GM 55059.511
Query 9: With GM 52613.132 -> Without GM 98206.793
Query 15: With GM 68051.058 -> Without GM 68918.378
Query 17: With GM 129236.075 -> Without GM 160451.094
Query 20: With GM 259144.232 -> Without GM 306256.322
Query 21: With GM 153483.497 -> Without GM 168169.916
From these results we can see that queries 9, 17 and 20 are the ones that
show a good performance benefit with Gather Merge. PFA tpch_result.tar.gz
for the explain analyze output for the TPC-H queries (with and without the
patch).

I also ran the TPC-H benchmark queries with different numbers of workers
and found that query 18 also starts picking Gather Merge with workers > 6.
PFA TPCH_GatherMerge.pdf for the detailed benchmark results.
Implementation details:

New Gather Merge node:

The patch introduces a new node type for Gather Merge. The Gather Merge
implementation is mostly similar to what Gather does. The major difference
is that the Gather node has two TupleTableSlots, one for the leader and one
for tuples read from the workers, whereas Gather Merge has a TupleTableSlot
per worker, plus one for the leader. Gather Merge needs to fill every slot,
then build a heap of the tuples and return the lowest one.
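For reference, the steady-state merge step looks roughly like this (a
condensed sketch of gather_merge_getnext() from the attached patch; the
first-call heap build is omitted):

    /* Pull the next tuple from whichever stream we returned from last
     * time, and reinsert that stream's index into the heap. */
    i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
    if (gather_merge_readnext(gm_state, i, true))
        binaryheap_replace_first(gm_state->gm_heap, Int32GetDatum(i));
    else
        (void) binaryheap_remove_first(gm_state->gm_heap); /* stream done */

    if (binaryheap_empty(gm_state->gm_heap))
        return ExecClearTuple(fslot);       /* all streams exhausted */

    /* The heap's top now holds the index of the lowest tuple overall. */
    i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
    return gm_state->gm_slots[i];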
The patch generates the gather merge path from:

a) create_ordered_paths(), for each path in the partial_pathlist. If the
path's pathkeys contain the sort pathkeys, directly create the gather
merge path; if not, create a sort first and then the gather merge path on
top of it.
Example:
explain analyze
select * from pgbench_accounts where filler like '%foo%' order by aid;
b) create_distinct_paths(): when a sort-based implementation of DISTINCT
is possible.
Example:
explain analyze
select distinct * from pgbench_accounts where filler like '%foo%' order by aid;
c) create_grouping_paths(): while generating a complete GroupAgg path,
loop over the partial path list, and if a partial path contains the
group_pathkeys, generate a gather merge path.
Example:
explain analyze
select aid, sum(abalance) from pgbench_accounts where filler like '%foo%' group by aid;
In all the above mentioned cases, the patch gives almost a 2x performance
gain. PFA pgbench_query.out for the explain analyze output for these
queries.
Gather Merge reads a tuple from each queue and then picks the lowest one,
so each refill has to read a tuple from a queue in wait mode. During
testing I found that some queries spent noticeable time reading tuples
from the queues. So the patch introduces a tuple array per worker: once we
have read a tuple in wait mode, we try to read more tuples in nowait mode
and store them in the tuple array. Once one tuple has come through a
queue, there is a good chance that more tuples are already sitting in it,
so we just read them, if any, into the tuple array. With this I found good
performance benefits on some of the complex TPC-H queries.
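In outline, the array-filling loop is (a condensed sketch of
fill_tuple_array() from the attached patch):

    /* After a tuple has arrived, opportunistically drain the queue in
     * nowait mode into this reader's tuple array. */
    for (i = gm_tuple->nTuples; i < MAX_TUPLE_STORE; i++)
    {
        gm_tuple->tuple[i] = gm_readnext_tuple(gm_state, reader,
                                               false,   /* nowait */
                                               &gm_tuple->done);
        if (!HeapTupleIsValid(gm_tuple->tuple[i]))
            break;              /* queue is empty for now; don't block */
        gm_tuple->nTuples++;
    }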
Costing:

Gather Merge merges several pre-sorted input streams using a heap.
Accordingly, the costing for Gather Merge is essentially the combination
of cost_gather and cost_merge_append.
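Concretely, with N input streams and logN = log2(N), cost_gather_merge()
in the attached patch charges:

    comparison_cost = 2.0 * cpu_operator_cost;   /* per tuple comparison */
    startup_cost += comparison_cost * N * logN;  /* build the heap */
    /* pop the top entry plus push its successor, per output tuple */
    run_cost += path->path.rows * comparison_cost * 2.0 * logN;
    run_cost += cpu_operator_cost * path->path.rows;   /* heap management */
    startup_cost += parallel_setup_cost;         /* as in cost_gather */
    run_cost += parallel_tuple_cost * path->path.rows; /* queue traffic */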
Open issues:

- Commit af33039317ddc4a0e38a02e2255c2bf453115fd2 fixed a leak in tqueue.c
by calling gather_readnext() in a per-tuple context. For Gather Merge that
is not possible as-is, because we store tuples in the tuple array and want
a tuple to be freed only once it has passed through the merge algorithm.
One idea is to call gm_readnext_tuple() under the per-tuple context (which
would fix the leak in tqueue.c) and then store a copy of the tuple in the
tuple array; see the sketch after this list.

- Need to find a way to add a simple test for this to the regression
suite.
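A rough sketch of that first idea (hypothetical, not part of the attached
patch; oldcontext and econtext are locals assumed here, with econtext
being the node's per-tuple ExprContext):

    /* Hypothetical fix: do the queue read under the per-tuple context so
     * any leak in tqueue.c is bounded, then copy the tuple into the
     * longer-lived current context before it enters the tuple array. */
    oldcontext = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);
    tup = gm_readnext_tuple(gm_state, reader, false, &gm_tuple->done);
    MemoryContextSwitchTo(oldcontext);

    if (HeapTupleIsValid(tup))
        gm_tuple->tuple[gm_tuple->nTuples++] = heap_copytuple(tup);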
Thanks to my colleague Robert Haas for his help with the design and some
preliminary review of the patch.

Please let me know your thoughts, and thanks for reading.
Regards,
Rushabh Lathia
www.EnterpriseDB.com
Attachments:
gather_merge_v1.patch
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 1247433..cb0299a 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -881,6 +881,9 @@ ExplainNode(PlanState *planstate, List *ancestors,
case T_Gather:
pname = sname = "Gather";
break;
+ case T_GatherMerge:
+ pname = sname = "Gather Merge";
+ break;
case T_IndexScan:
pname = sname = "Index Scan";
break;
@@ -1370,6 +1373,26 @@ ExplainNode(PlanState *planstate, List *ancestors,
ExplainPropertyBool("Single Copy", gather->single_copy, es);
}
break;
+ case T_GatherMerge:
+ {
+ GatherMerge *gm = (GatherMerge *) plan;
+
+ show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ if (plan->qual)
+ show_instrumentation_count("Rows Removed by Filter", 1,
+ planstate, es);
+ ExplainPropertyInteger("Workers Planned",
+ gm->num_workers, es);
+ if (es->analyze)
+ {
+ int nworkers;
+
+ nworkers = ((GatherMergeState *) planstate)->nworkers_launched;
+ ExplainPropertyInteger("Workers Launched",
+ nworkers, es);
+ }
+ }
+ break;
case T_FunctionScan:
if (es->verbose)
{
diff --git a/src/backend/executor/Makefile b/src/backend/executor/Makefile
index 51edd4c..7e2f4e2 100644
--- a/src/backend/executor/Makefile
+++ b/src/backend/executor/Makefile
@@ -19,7 +19,7 @@ OBJS = execAmi.o execCurrent.o execGrouping.o execIndexing.o execJunk.o \
nodeBitmapAnd.o nodeBitmapOr.o \
nodeBitmapHeapscan.o nodeBitmapIndexscan.o nodeCustom.o nodeGather.o \
nodeHash.o nodeHashjoin.o nodeIndexscan.o nodeIndexonlyscan.o \
- nodeLimit.o nodeLockRows.o \
+ nodeLimit.o nodeLockRows.o nodeGatherMerge.o \
nodeMaterial.o nodeMergeAppend.o nodeMergejoin.o nodeModifyTable.o \
nodeNestloop.o nodeFunctionscan.o nodeRecursiveunion.o nodeResult.o \
nodeSamplescan.o nodeSeqscan.o nodeSetOp.o nodeSort.o nodeUnique.o \
diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index 554244f..45b36af 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -101,6 +101,7 @@
#include "executor/nodeModifyTable.h"
#include "executor/nodeNestloop.h"
#include "executor/nodeGather.h"
+#include "executor/nodeGatherMerge.h"
#include "executor/nodeRecursiveunion.h"
#include "executor/nodeResult.h"
#include "executor/nodeSamplescan.h"
@@ -314,6 +315,11 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
estate, eflags);
break;
+ case T_GatherMerge:
+ result = (PlanState *) ExecInitGatherMerge((GatherMerge *) node,
+ estate, eflags);
+ break;
+
case T_Hash:
result = (PlanState *) ExecInitHash((Hash *) node,
estate, eflags);
@@ -515,6 +521,10 @@ ExecProcNode(PlanState *node)
result = ExecGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ result = ExecGatherMerge((GatherMergeState *) node);
+ break;
+
case T_HashState:
result = ExecHash((HashState *) node);
break;
@@ -673,6 +683,10 @@ ExecEndNode(PlanState *node)
ExecEndGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ ExecEndGatherMerge((GatherMergeState *) node);
+ break;
+
case T_IndexScanState:
ExecEndIndexScan((IndexScanState *) node);
break;
@@ -806,6 +820,9 @@ ExecShutdownNode(PlanState *node)
case T_GatherState:
ExecShutdownGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ ExecShutdownGatherMerge((GatherMergeState *) node);
+ break;
default:
break;
}
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
new file mode 100644
index 0000000..fd884a8
--- /dev/null
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -0,0 +1,693 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.c
+ * routines to handle GatherMerge nodes.
+ *
+ * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/backend/executor/nodeGatherMerge.c
+ *
+ *-------------------------------------------------------------------------
+ */
+/* INTERFACE ROUTINES
+ * ExecInitGatherMerge - initialize the MergeAppend node
+ * ExecGatherMerge - retrieve the next tuple from the node
+ * ExecEndGatherMerge - shut down the MergeAppend node
+ * ExecReScanGatherMerge - rescan the MergeAppend node
+ */
+
+#include "postgres.h"
+
+#include "access/relscan.h"
+#include "access/xact.h"
+#include "executor/execdebug.h"
+#include "executor/execParallel.h"
+#include "executor/nodeGatherMerge.h"
+#include "executor/nodeSubplan.h"
+#include "executor/tqueue.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+#include "utils/rel.h"
+#include "lib/binaryheap.h"
+
+/*
+ * Tuple array for each worker
+ */
+typedef struct GMReaderTuple
+{
+ HeapTuple *tuple;
+ int readCounter;
+ int nTuples;
+ bool done;
+} GMReaderTuple;
+
+/* Tuple array size */
+#define MAX_TUPLE_STORE 10
+
+static int32 heap_compare_slots(Datum a, Datum b, void *arg);
+static TupleTableSlot *gather_merge_getnext(GatherMergeState * gm_state);
+static HeapTuple gm_readnext_tuple(GatherMergeState * gm_state, int nreader, bool force, bool *done);
+static void gather_merge_init(GatherMergeState * gm_state);
+static void ExecShutdownGatherMergeWorkers(GatherMergeState * node);
+static bool gather_merge_readnext(GatherMergeState * gm_state, int reader, bool force);
+static void fill_tuple_array(GatherMergeState * gm_state, int reader);
+
+/* ----------------------------------------------------------------
+ * ExecInitGatherMerge
+ * ----------------------------------------------------------------
+ */
+GatherMergeState *
+ExecInitGatherMerge(GatherMerge * node, EState *estate, int eflags)
+{
+ GatherMergeState *gm_state;
+ Plan *outerNode;
+ bool hasoid;
+ TupleDesc tupDesc;
+
+ /* Gather merge node doesn't have innerPlan node. */
+ Assert(innerPlan(node) == NULL);
+
+ /*
+ * create state structure
+ */
+ gm_state = makeNode(GatherMergeState);
+ gm_state->ps.plan = (Plan *) node;
+ gm_state->ps.state = estate;
+
+ /*
+ * Miscellaneous initialization
+ *
+ * create expression context for node
+ */
+ ExecAssignExprContext(estate, &gm_state->ps);
+
+ /*
+ * initialize child expressions
+ */
+ gm_state->ps.targetlist = (List *)
+ ExecInitExpr((Expr *) node->plan.targetlist,
+ (PlanState *) gm_state);
+ gm_state->ps.qual = (List *)
+ ExecInitExpr((Expr *) node->plan.qual,
+ (PlanState *) gm_state);
+
+ /*
+ * tuple table initialization
+ */
+ gm_state->funnel_slot = ExecInitExtraTupleSlot(estate);
+ ExecInitResultTupleSlot(estate, &gm_state->ps);
+
+ /*
+ * now initialize outer plan
+ */
+ outerNode = outerPlan(node);
+ outerPlanState(gm_state) = ExecInitNode(outerNode, estate, eflags);
+
+ gm_state->ps.ps_TupFromTlist = false;
+
+ /*
+ * Initialize result tuple type and projection info.
+ */
+ ExecAssignResultTypeFromTL(&gm_state->ps);
+ ExecAssignProjectionInfo(&gm_state->ps, NULL);
+
+ gm_state->gm_initialized = false;
+
+ /*
+ * initialize sort-key information
+ */
+ if (node->numCols)
+ {
+ int i;
+
+ gm_state->gm_nkeys = node->numCols;
+ gm_state->gm_sortkeys = palloc0(sizeof(SortSupportData) * node->numCols);
+ for (i = 0; i < node->numCols; i++)
+ {
+ SortSupport sortKey = gm_state->gm_sortkeys + i;
+
+ sortKey->ssup_cxt = CurrentMemoryContext;
+ sortKey->ssup_collation = node->collations[i];
+ sortKey->ssup_nulls_first = node->nullsFirst[i];
+ sortKey->ssup_attno = node->sortColIdx[i];
+
+ /*
+ * We don't perform abbreviated key conversion here, for the same
+ * reasons that it isn't used in MergeAppend
+ */
+ sortKey->abbreviate = false;
+
+ PrepareSortSupportFromOrderingOp(node->sortOperators[i], sortKey);
+ }
+ }
+
+ /*
+ * Initialize funnel slot to same tuple descriptor as outer plan.
+ */
+ if (!ExecContextForcesOids(&gm_state->ps, &hasoid))
+ hasoid = false;
+ tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid);
+ ExecSetSlotDescriptor(gm_state->funnel_slot, tupDesc);
+
+ return gm_state;
+}
+
+/* ----------------------------------------------------------------
+ * ExecGatherMerge(node)
+ *
+ * Scans the relation via multiple workers and returns
+ * the next qualifying tuple.
+ * ----------------------------------------------------------------
+ */
+TupleTableSlot *
+ExecGatherMerge(GatherMergeState * node)
+{
+ TupleTableSlot *fslot = node->funnel_slot;
+ int i;
+ TupleTableSlot *slot;
+ TupleTableSlot *resultSlot;
+ ExprDoneCond isDone;
+ ExprContext *econtext;
+
+ /*
+ * Initialize the parallel context and workers on first execution. We do
+ * this on first execution rather than during node initialization, as it
+ * needs to allocate a large dynamic segment, so it is better to do it
+ * only if it is really needed.
+ */
+ if (!node->initialized)
+ {
+ EState *estate = node->ps.state;
+ GatherMerge *gm = (GatherMerge *) node->ps.plan;
+
+ /*
+ * Sometimes we might have to run without parallelism; but if parallel
+ * mode is active then we can try to fire up some workers.
+ */
+ if (gm->num_workers > 0 && IsInParallelMode())
+ {
+ ParallelContext *pcxt;
+ bool got_any_worker = false;
+
+ /* Initialize the workers required to execute Gather node. */
+ if (!node->pei)
+ node->pei = ExecInitParallelPlan(node->ps.lefttree,
+ estate,
+ gm->num_workers);
+
+ /*
+ * Register backend workers. We might not get as many as we
+ * requested, or indeed any at all.
+ */
+ pcxt = node->pei->pcxt;
+ LaunchParallelWorkers(pcxt);
+ node->nworkers_launched = pcxt->nworkers_launched;
+
+ /* Set up tuple queue readers to read the results. */
+ if (pcxt->nworkers_launched > 0)
+ {
+ node->nreaders = 0;
+ node->reader =
+ palloc(pcxt->nworkers_launched * sizeof(TupleQueueReader *));
+
+ Assert(gm->numCols);
+
+ for (i = 0; i < pcxt->nworkers_launched; ++i)
+ {
+ if (pcxt->worker[i].bgwhandle == NULL)
+ continue;
+
+ shm_mq_set_handle(node->pei->tqueue[i],
+ pcxt->worker[i].bgwhandle);
+ node->reader[node->nreaders] =
+ CreateTupleQueueReader(node->pei->tqueue[i],
+ fslot->tts_tupleDescriptor);
+ node->nreaders++;
+ got_any_worker = true;
+ }
+ }
+
+ /* No workers? Then never mind. */
+ if (!got_any_worker ||
+ node->nreaders < 2)
+ {
+ ExecShutdownGatherMergeWorkers(node);
+ node->nreaders = 0;
+ }
+ }
+
+ /* always allow the leader to participate in gather merge */
+ node->need_to_scan_locally = true;
+ node->initialized = true;
+ }
+
+ /*
+ * Check to see if we're still projecting out tuples from a previous scan
+ * tuple (because there is a function-returning-set in the projection
+ * expressions). If so, try to project another one.
+ */
+ if (node->ps.ps_TupFromTlist)
+ {
+ resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+ if (isDone == ExprMultipleResult)
+ return resultSlot;
+ /* Done with that source tuple... */
+ node->ps.ps_TupFromTlist = false;
+ }
+
+ /*
+ * Reset per-tuple memory context to free any expression evaluation
+ * storage allocated in the previous tuple cycle. Note we can't do this
+ * until we're done projecting.
+ */
+ econtext = node->ps.ps_ExprContext;
+ ResetExprContext(econtext);
+
+ /* Get and return the next tuple, projecting if necessary. */
+ for (;;)
+ {
+ /*
+ * Get next tuple, either from one of our workers, or by running the
+ * plan ourselves.
+ */
+ slot = gather_merge_getnext(node);
+ if (TupIsNull(slot))
+ return NULL;
+
+ /*
+ * form the result tuple using ExecProject(), and return it --- unless
+ * the projection produces an empty set, in which case we must loop
+ * back around for another tuple
+ */
+ econtext->ecxt_outertuple = slot;
+ resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+
+ if (isDone != ExprEndResult)
+ {
+ node->ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
+ return resultSlot;
+ }
+ }
+
+ return slot;
+}
+
+/* ----------------------------------------------------------------
+ * ExecEndGatherMerge
+ *
+ * frees any storage allocated through C routines.
+ * ----------------------------------------------------------------
+ */
+void
+ExecEndGatherMerge(GatherMergeState * node)
+{
+ ExecShutdownGatherMerge(node);
+ ExecFreeExprContext(&node->ps);
+ ExecClearTuple(node->ps.ps_ResultTupleSlot);
+ ExecEndNode(outerPlanState(node));
+}
+
+/* ----------------------------------------------------------------
+ * ExecShutdownGatherMerge
+ *
+ * Destroy the setup for parallel workers including parallel context.
+ * Collect all the stats after workers are stopped, else some work
+ * done by workers won't be accounted.
+ * ----------------------------------------------------------------
+ */
+void
+ExecShutdownGatherMerge(GatherMergeState * node)
+{
+ ExecShutdownGatherMergeWorkers(node);
+
+ /* Now destroy the parallel context. */
+ if (node->pei != NULL)
+ {
+ ExecParallelCleanup(node->pei);
+ node->pei = NULL;
+ }
+}
+
+/* ----------------------------------------------------------------
+ * ExecReScanGatherMerge
+ *
+ * Re-initialize the workers and rescans a relation via them.
+ * ----------------------------------------------------------------
+ */
+void
+ExecReScanGatherMerge(GatherMergeState * node)
+{
+ /*
+ * Re-initialize the parallel workers to perform rescan of relation. We
+ * want to gracefully shutdown all the workers so that they should be able
+ * to propagate any error or other information to master backend before
+ * dying. Parallel context will be reused for rescan.
+ */
+ ExecShutdownGatherMergeWorkers(node);
+
+ node->initialized = false;
+
+ if (node->pei)
+ ExecParallelReinitialize(node->pei);
+
+ ExecReScan(node->ps.lefttree);
+}
+
+/* ----------------------------------------------------------------
+ * ExecShutdownGatherMergeWorkers
+ *
+ * Destroy the parallel workers. Collect all the stats after
+ * workers are stopped, else some work done by workers won't be
+ * accounted.
+ * ----------------------------------------------------------------
+ */
+static void
+ExecShutdownGatherMergeWorkers(GatherMergeState * node)
+{
+ /* Shut down tuple queue readers before shutting down workers. */
+ if (node->reader != NULL)
+ {
+ int i;
+
+ for (i = 0; i < node->nreaders; ++i)
+ if (node->reader[i])
+ DestroyTupleQueueReader(node->reader[i]);
+
+ pfree(node->reader);
+ node->reader = NULL;
+ }
+
+ /* Now shut down the workers. */
+ if (node->pei != NULL)
+ ExecParallelFinish(node->pei);
+}
+
+/*
+ * Initialize the Gather merge tuple read.
+ *
+ * Pull at least a single tuple from each worker + leader and set up the heap.
+ */
+static void
+gather_merge_init(GatherMergeState * gm_state)
+{
+ TupleTableSlot *fslot = gm_state->funnel_slot;
+ int nreaders = gm_state->nreaders;
+ bool initialize = true;
+ int i;
+
+ /*
+ * Allocate gm_slots for the number of workers + one more slot for the leader.
+ * Last slot is always for leader. Leader always calls ExecProcNode() to
+ * read the tuple which will return the TupleTableSlot. Later it will
+ * directly get assigned to gm_slot. So just initialize leader gm_slot
+ * with NULL. For other slots below code will call
+ * ExecInitExtraTupleSlot() which will do the initialization of worker
+ * slots.
+ */
+ gm_state->gm_slots =
+ palloc((gm_state->nreaders + 1) * sizeof(TupleTableSlot *));
+ gm_state->gm_slots[gm_state->nreaders] = NULL;
+
+ /* Initialize the tuple slot and tuple array for each worker */
+ gm_state->gm_tuple = (GMReaderTuple *) palloc0(sizeof(GMReaderTuple) * (gm_state->nreaders));
+ for (i = 0; i < gm_state->nreaders; i++)
+ {
+ /* Allocate the tuple array with MAX_TUPLE_STORE size */
+ gm_state->gm_tuple[i].tuple = (HeapTuple *) palloc0(sizeof(HeapTuple) * MAX_TUPLE_STORE);
+
+ /* Initialize slot for worker */
+ gm_state->gm_slots[i] = ExecInitExtraTupleSlot(gm_state->ps.state);
+ ExecSetSlotDescriptor(gm_state->gm_slots[i],
+ fslot->tts_tupleDescriptor);
+ }
+
+ /* Allocate the resources for the sort */
+ gm_state->gm_heap = binaryheap_allocate(gm_state->nreaders + 1, heap_compare_slots, gm_state);
+
+ /*
+ * First try to read a tuple for each worker (including the leader) in
+ * nowait mode, so that we initialize the read from each worker as well
+ * as the leader. After this, if any participant was unable to produce a
+ * tuple, re-read, this time in wait mode. For a participant that already
+ * produced a tuple in the earlier loop, just fill its tuple array if
+ * more tuples are available.
+ */
+reread:
+ for (i = 0; i < nreaders + 1; i++)
+ {
+ if (TupIsNull(gm_state->gm_slots[i]) ||
+ gm_state->gm_slots[i]->tts_isempty)
+ {
+ if (gather_merge_readnext(gm_state, i, initialize ? false : true))
+ {
+ binaryheap_add_unordered(gm_state->gm_heap,
+ Int32GetDatum(i));
+ }
+ }
+ else
+ fill_tuple_array(gm_state, i);
+ }
+ initialize = false;
+
+ for (i = 0; i < nreaders; i++)
+ if (TupIsNull(gm_state->gm_slots[i]) || gm_state->gm_slots[i]->tts_isempty)
+ goto reread;
+
+ binaryheap_build(gm_state->gm_heap);
+ gm_state->gm_initialized = true;
+}
+
+/*
+ * Read the next tuple for gather merge.
+ *
+ * Function fetch the sorted tuple out of the heap.
+ */
+static TupleTableSlot *
+gather_merge_getnext(GatherMergeState * gm_state)
+{
+ TupleTableSlot *fslot = gm_state->funnel_slot;
+ int i;
+
+ /*
+ * First time through: pull the first tuple from each participant, and set
+ * up the heap.
+ */
+ if (gm_state->gm_initialized == false)
+ gather_merge_init(gm_state);
+ else
+ {
+ /*
+ * Otherwise, pull the next tuple from whichever participant we
+ * returned from last time, and reinsert the index into the heap,
+ * because it might now compare differently against the existing
+ * elements of the heap.
+ */
+ i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+
+ if (gather_merge_readnext(gm_state, i, true))
+ binaryheap_replace_first(gm_state->gm_heap, Int32GetDatum(i));
+ else
+ (void) binaryheap_remove_first(gm_state->gm_heap);
+ }
+
+ if (binaryheap_empty(gm_state->gm_heap))
+ {
+ /* All the queues are exhausted, and so is the heap */
+ return ExecClearTuple(fslot);
+ }
+ else
+ {
+ i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+ return gm_state->gm_slots[i];
+ }
+
+ return ExecClearTuple(fslot);
+}
+
+/*
+ * Read the tuple for given reader into nowait mode, and fill the tuple array.
+ */
+static void
+fill_tuple_array(GatherMergeState * gm_state, int reader)
+{
+ GMReaderTuple *gm_tuple = &gm_state->gm_tuple[reader];
+ int i;
+
+ /* Last slot is for leader and we don't build tuple array for leader */
+ if (reader == gm_state->nreaders)
+ return;
+
+ /*
+ * We are here because we have already read all the tuples from the tuple
+ * array, so reset the counters to zero.
+ */
+ if (gm_tuple->nTuples == gm_tuple->readCounter)
+ gm_tuple->nTuples = gm_tuple->readCounter = 0;
+
+ /* Tuple array is already full? */
+ if (gm_tuple->nTuples == MAX_TUPLE_STORE)
+ return;
+
+ for (i = gm_tuple->nTuples; i < MAX_TUPLE_STORE; i++)
+ {
+ gm_tuple->tuple[i] = gm_readnext_tuple(gm_state,
+ reader,
+ false,
+ &gm_tuple->done);
+ if (!HeapTupleIsValid(gm_tuple->tuple[i]))
+ break;
+ gm_tuple->nTuples++;
+ }
+}
+
+/*
+ * Attempt to read a tuple for the given reader and store it in the reader's
+ * tuple slot.
+ *
+ * If the worker's tuple array contains any tuples, just return one from the
+ * array. Otherwise, read a tuple from the queue and also attempt to fill
+ * the tuple array.
+ *
+ * When force is true, the tuple is read in wait mode. For gather merge we
+ * need to refill the slot from which we returned the previous tuple, which
+ * requires reading the tuple in wait mode. During the initialization phase
+ * we instead try to read tuples in nowait mode, as we want to initialize
+ * all the readers. Refer to gather_merge_init() for more details.
+ *
+ * Returns true if a tuple was found for the reader, otherwise false.
+ */
+static bool
+gather_merge_readnext(GatherMergeState * gm_state, int reader, bool force)
+{
+ HeapTuple tup = NULL;
+
+ /* Are we here for the leader? */
+ if (gm_state->nreaders == reader)
+ {
+ if (gm_state->need_to_scan_locally)
+ {
+ PlanState *outerPlan = outerPlanState(gm_state);
+ TupleTableSlot *outerTupleSlot;
+
+ outerTupleSlot = ExecProcNode(outerPlan);
+
+ if (!TupIsNull(outerTupleSlot))
+ {
+ gm_state->gm_slots[reader] = outerTupleSlot;
+ return true;
+ }
+ gm_state->need_to_scan_locally = false;
+ }
+ return false;
+ }
+ /* Does tuple array have any avaiable tuples? */
+ else if (gm_state->gm_tuple[reader].nTuples >
+ gm_state->gm_tuple[reader].readCounter)
+ {
+ GMReaderTuple *gm_tuple = &gm_state->gm_tuple[reader];
+
+ tup = gm_tuple->tuple[gm_tuple->readCounter++];
+ }
+ /* reader exhausted? */
+ else if (gm_state->gm_tuple[reader].done)
+ {
+ DestroyTupleQueueReader(gm_state->reader[reader]);
+ gm_state->reader[reader] = NULL;
+ return false;
+ }
+ else
+ {
+ tup = gm_readnext_tuple(gm_state, reader, force, NULL);
+
+ /*
+ * try to read more tuple into nowait mode and store it into the tuple
+ * array.
+ */
+ if (HeapTupleIsValid(tup))
+ fill_tuple_array(gm_state, reader);
+ else
+ return false;
+ }
+
+ Assert(HeapTupleIsValid(tup));
+
+ /* Build the TupleTableSlot for the given tuple */
+ ExecStoreTuple(tup, /* tuple to store */
+ gm_state->gm_slots[reader], /* slot in which to store the
+ * tuple */
+ InvalidBuffer, /* buffer associated with this tuple */
+ true); /* pfree this pointer if not from heap */
+
+ return true;
+}
+
+/*
+ * Attempt to read a tuple from given reader.
+ */
+static HeapTuple
+gm_readnext_tuple(GatherMergeState * gm_state, int nreader, bool force, bool *done)
+{
+ TupleQueueReader *reader;
+ HeapTuple tup = NULL;
+
+ if (done != NULL)
+ *done = false;
+
+ /* Check for async events, particularly messages from workers. */
+ CHECK_FOR_INTERRUPTS();
+
+ /* Attempt to read a tuple. */
+ reader = gm_state->reader[nreader];
+ tup = TupleQueueReaderNext(reader, force ? false : true, done);
+
+ return tup;
+}
+
+/*
+ * We have one slot for each item in the heap array. We use SlotNumber
+ * to store slot indexes. This doesn't actually provide any formal
+ * type-safety, but it makes the code more self-documenting.
+ */
+typedef int32 SlotNumber;
+
+/*
+ * Compare the tuples in the two given slots.
+ */
+static int32
+heap_compare_slots(Datum a, Datum b, void *arg)
+{
+ GatherMergeState *node = (GatherMergeState *) arg;
+ SlotNumber slot1 = DatumGetInt32(a);
+ SlotNumber slot2 = DatumGetInt32(b);
+
+ TupleTableSlot *s1 = node->gm_slots[slot1];
+ TupleTableSlot *s2 = node->gm_slots[slot2];
+ int nkey;
+
+ Assert(!TupIsNull(s1));
+ Assert(!TupIsNull(s2));
+
+ for (nkey = 0; nkey < node->gm_nkeys; nkey++)
+ {
+ SortSupport sortKey = node->gm_sortkeys + nkey;
+ AttrNumber attno = sortKey->ssup_attno;
+ Datum datum1,
+ datum2;
+ bool isNull1,
+ isNull2;
+ int compare;
+
+ datum1 = slot_getattr(s1, attno, &isNull1);
+ datum2 = slot_getattr(s2, attno, &isNull2);
+
+ compare = ApplySortComparator(datum1, isNull1,
+ datum2, isNull2,
+ sortKey);
+ if (compare != 0)
+ return -compare;
+ }
+ return 0;
+}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 71714bc..8b92c1a 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -341,6 +341,31 @@ _copyGather(const Gather *from)
return newnode;
}
+/*
+ * _copyGatherMerge
+ */
+static GatherMerge *
+_copyGatherMerge(const GatherMerge *from)
+{
+ GatherMerge *newnode = makeNode(GatherMerge);
+
+ /*
+ * copy node superclass fields
+ */
+ CopyPlanFields((const Plan *) from, (Plan *) newnode);
+
+ /*
+ * copy remainder of node
+ */
+ COPY_SCALAR_FIELD(num_workers);
+ COPY_SCALAR_FIELD(numCols);
+ COPY_POINTER_FIELD(sortColIdx, from->numCols * sizeof(AttrNumber));
+ COPY_POINTER_FIELD(sortOperators, from->numCols * sizeof(Oid));
+ COPY_POINTER_FIELD(collations, from->numCols * sizeof(Oid));
+ COPY_POINTER_FIELD(nullsFirst, from->numCols * sizeof(bool));
+
+ return newnode;
+}
/*
* CopyScanFields
@@ -4343,6 +4368,9 @@ copyObject(const void *from)
case T_Gather:
retval = _copyGather(from);
break;
+ case T_GatherMerge:
+ retval = _copyGatherMerge(from);
+ break;
case T_SeqScan:
retval = _copySeqScan(from);
break;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index ae86954..5dea0f7 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -447,6 +447,35 @@ _outGather(StringInfo str, const Gather *node)
}
static void
+_outGatherMerge(StringInfo str, const GatherMerge *node)
+{
+ int i;
+
+ WRITE_NODE_TYPE("GATHERMERGE");
+
+ _outPlanInfo(str, (const Plan *) node);
+
+ WRITE_INT_FIELD(num_workers);
+ WRITE_INT_FIELD(numCols);
+
+ appendStringInfoString(str, " :sortColIdx");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %d", node->sortColIdx[i]);
+
+ appendStringInfoString(str, " :sortOperators");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %u", node->sortOperators[i]);
+
+ appendStringInfoString(str, " :collations");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %u", node->collations[i]);
+
+ appendStringInfoString(str, " :nullsFirst");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %s", booltostr(node->nullsFirst[i]));
+}
+
+static void
_outScan(StringInfo str, const Scan *node)
{
WRITE_NODE_TYPE("SCAN");
@@ -1964,6 +1993,18 @@ _outLimitPath(StringInfo str, const LimitPath *node)
}
static void
+_outGatherMergePath(StringInfo str, const GatherMergePath *node)
+{
+ WRITE_NODE_TYPE("GATHERMERGEPATH");
+
+ _outPathInfo(str, (const Path *) node);
+
+ WRITE_NODE_FIELD(subpath);
+ WRITE_INT_FIELD(num_workers);
+ WRITE_BOOL_FIELD(single_copy);
+}
+
+static void
_outNestPath(StringInfo str, const NestPath *node)
{
WRITE_NODE_TYPE("NESTPATH");
@@ -3322,6 +3363,9 @@ outNode(StringInfo str, const void *obj)
case T_Gather:
_outGather(str, obj);
break;
+ case T_GatherMerge:
+ _outGatherMerge(str, obj);
+ break;
case T_Scan:
_outScan(str, obj);
break;
@@ -3649,6 +3693,9 @@ outNode(StringInfo str, const void *obj)
case T_LimitPath:
_outLimitPath(str, obj);
break;
+ case T_GatherMergePath:
+ _outGatherMergePath(str, obj);
+ break;
case T_NestPath:
_outNestPath(str, obj);
break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 917e6c8..77a452e 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -2075,6 +2075,26 @@ _readGather(void)
}
/*
+ * _readGatherMerge
+ */
+static GatherMerge *
+_readGatherMerge(void)
+{
+ READ_LOCALS(GatherMerge);
+
+ ReadCommonPlan(&local_node->plan);
+
+ READ_INT_FIELD(num_workers);
+ READ_INT_FIELD(numCols);
+ READ_ATTRNUMBER_ARRAY(sortColIdx, local_node->numCols);
+ READ_OID_ARRAY(sortOperators, local_node->numCols);
+ READ_OID_ARRAY(collations, local_node->numCols);
+ READ_BOOL_ARRAY(nullsFirst, local_node->numCols);
+
+ READ_DONE();
+}
+
+/*
* _readHash
*/
static Hash *
@@ -2477,6 +2497,8 @@ parseNodeString(void)
return_value = _readUnique();
else if (MATCH("GATHER", 6))
return_value = _readGather();
+ else if (MATCH("GATHERMERGE", 11))
+ return_value = _readGatherMerge();
else if (MATCH("HASH", 4))
return_value = _readHash();
else if (MATCH("SETOP", 5))
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index 2a49639..5dbb83e 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -126,6 +126,7 @@ bool enable_nestloop = true;
bool enable_material = true;
bool enable_mergejoin = true;
bool enable_hashjoin = true;
+bool enable_gathermerge = true;
typedef struct
{
@@ -391,6 +392,70 @@ cost_gather(GatherPath *path, PlannerInfo *root,
}
/*
+ * cost_gather_merge
+ * Determines and returns the cost of gather merge path.
+ *
+ * GatherMerge merges several pre-sorted input streams, using a heap that at
+ * any given instant holds the next tuple from each stream. If there are N
+ * streams, we need about N*log2(N) tuple comparisons to construct the heap at
+ * startup, and then for each output tuple, about log2(N) comparisons to delete
+ * the top heap entry and another log2(N) comparisons to insert its successor
+ * from the same stream.
+ *
+ * The heap is never spilled to disk, since we assume N is not very large. So
+ * this is much simpler than cost_sort.
+ */
+void
+cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+ RelOptInfo *rel, ParamPathInfo *param_info,
+ Cost input_startup_cost, Cost input_total_cost)
+{
+ Cost startup_cost = 0;
+ Cost run_cost = 0;
+ Cost comparison_cost;
+ double N;
+ double logN;
+
+ /* Mark the path with the correct row estimate */
+ if (param_info)
+ path->path.rows = param_info->ppi_rows;
+ else
+ path->path.rows = path->subpath->rows;
+
+ if (!enable_gathermerge)
+ startup_cost += disable_cost;
+
+ /*
+ * Avoid log(0)...
+ */
+ N = (path->num_workers < 2) ? 2.0 : (double) path->num_workers;
+ logN = LOG2(N);
+
+ /* Assumed cost per tuple comparison */
+ comparison_cost = 2.0 * cpu_operator_cost;
+
+ /* Heap creation cost */
+ startup_cost += comparison_cost * N * logN;
+
+ /* Per-tuple heap maintenance cost */
+ run_cost += path->path.rows * comparison_cost * 2.0 * logN;
+
+ /* small cost for heap management, like cost_merge_append */
+ run_cost += cpu_operator_cost * path->path.rows;
+
+ /*
+ * Parallel setup and communication cost. Gather Merge requires tuples
+ * to be read in wait mode from each worker, so allow some extra cost
+ * for that.
+ */
+ startup_cost += parallel_setup_cost;
+ run_cost += parallel_tuple_cost * path->path.rows;
+
+ path->path.startup_cost = startup_cost + input_startup_cost;
+ path->path.total_cost = (startup_cost + run_cost + input_total_cost);
+}
+
+/*
* cost_index
* Determines and returns the cost of scanning a relation using an index.
*
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 47158f6..96bed2e 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -270,6 +270,11 @@ static ModifyTable *make_modifytable(PlannerInfo *root,
List *resultRelations, List *subplans,
List *withCheckOptionLists, List *returningLists,
List *rowMarks, OnConflictExpr *onconflict, int epqParam);
+static GatherMerge *create_gather_merge_plan(PlannerInfo *root,
+ GatherMergePath *best_path);
+static GatherMerge *make_gather_merge(List *qptlist, List *qpqual,
+ int nworkers, bool single_copy,
+ Plan *subplan);
/*
@@ -463,6 +468,10 @@ create_plan_recurse(PlannerInfo *root, Path *best_path, int flags)
(LimitPath *) best_path,
flags);
break;
+ case T_GatherMerge:
+ plan = (Plan *) create_gather_merge_plan(root,
+ (GatherMergePath *) best_path);
+ break;
default:
elog(ERROR, "unrecognized node type: %d",
(int) best_path->pathtype);
@@ -2246,6 +2255,90 @@ create_limit_plan(PlannerInfo *root, LimitPath *best_path, int flags)
return plan;
}
+/*
+ * create_gather_merge_plan
+ *
+ * Create a Gather merge plan for 'best_path' and (recursively)
+ * plans for its subpaths.
+ */
+static GatherMerge *
+create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
+{
+ GatherMerge *gm_plan;
+ Plan *subplan;
+ List *pathkeys = best_path->path.pathkeys;
+ int numsortkeys;
+ AttrNumber *sortColIdx;
+ Oid *sortOperators;
+ Oid *collations;
+ bool *nullsFirst;
+
+ subplan = create_plan_recurse(root, best_path->subpath, CP_EXACT_TLIST);
+
+ gm_plan = make_gather_merge(subplan->targetlist,
+ NIL,
+ best_path->num_workers,
+ best_path->single_copy,
+ subplan);
+
+ copy_generic_path_info(&gm_plan->plan, &best_path->path);
+
+ if (pathkeys)
+ {
+ /* Compute sort column info, and adjust GatherMerge tlist as needed */
+ (void) prepare_sort_from_pathkeys(&gm_plan->plan, pathkeys,
+ best_path->path.parent->relids,
+ NULL,
+ true,
+ &gm_plan->numCols,
+ &gm_plan->sortColIdx,
+ &gm_plan->sortOperators,
+ &gm_plan->collations,
+ &gm_plan->nullsFirst);
+
+
+ /* Compute sort column info, and adjust subplan's tlist as needed */
+ subplan = prepare_sort_from_pathkeys(subplan, pathkeys,
+ best_path->subpath->parent->relids,
+ gm_plan->sortColIdx,
+ false,
+ &numsortkeys,
+ &sortColIdx,
+ &sortOperators,
+ &collations,
+ &nullsFirst);
+
+ /*
+ * Check that we got the same sort key information. We just Assert
+ * that the sortops match, since those depend only on the pathkeys;
+ * but it seems like a good idea to check the sort column numbers
+ * explicitly, to ensure the tlists really do match up.
+ */
+ Assert(numsortkeys == gm_plan->numCols);
+ if (memcmp(sortColIdx, gm_plan->sortColIdx,
+ numsortkeys * sizeof(AttrNumber)) != 0)
+ elog(ERROR, "GatherMerge child's targetlist doesn't match GatherMerge");
+ Assert(memcmp(sortOperators, gm_plan->sortOperators,
+ numsortkeys * sizeof(Oid)) == 0);
+ Assert(memcmp(collations, gm_plan->collations,
+ numsortkeys * sizeof(Oid)) == 0);
+ Assert(memcmp(nullsFirst, gm_plan->nullsFirst,
+ numsortkeys * sizeof(bool)) == 0);
+
+ /* Now, insert a Sort node if subplan isn't sufficiently ordered */
+ if (!pathkeys_contained_in(pathkeys, best_path->subpath->pathkeys))
+ subplan = (Plan *) make_sort(subplan, numsortkeys,
+ sortColIdx, sortOperators,
+ collations, nullsFirst);
+
+ gm_plan->plan.lefttree = subplan;
+ }
+
+ /* use parallel mode for parallel plans. */
+ root->glob->parallelModeNeeded = true;
+
+ return gm_plan;
+}
/*****************************************************************************
*
@@ -5902,6 +5995,26 @@ make_gather(List *qptlist,
return node;
}
+static GatherMerge *
+make_gather_merge(List *qptlist,
+ List *qpqual,
+ int nworkers,
+ bool single_copy,
+ Plan *subplan)
+{
+ GatherMerge *node = makeNode(GatherMerge);
+ Plan *plan = &node->plan;
+
+ /* cost should be inserted by caller */
+ plan->targetlist = qptlist;
+ plan->qual = qpqual;
+ plan->lefttree = subplan;
+ plan->righttree = NULL;
+ node->num_workers = nworkers;
+
+ return node;
+}
+
/*
* distinctList is a list of SortGroupClauses, identifying the targetlist
* items that should be considered by the SetOp filter. The input path must
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index f657ffc..7339f03 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -3719,14 +3719,59 @@ create_grouping_paths(PlannerInfo *root,
/*
* Now generate a complete GroupAgg Path atop of the cheapest partial
- * path. We need only bother with the cheapest path here, as the
- * output of Gather is never sorted.
+ * path. We generate a Gather path based on the cheapest partial path,
+ * and a GatherMerge path for each partial path that is properly sorted.
*/
if (grouped_rel->partial_pathlist)
{
Path *path = (Path *) linitial(grouped_rel->partial_pathlist);
double total_groups = path->rows * path->parallel_workers;
+ /*
+ * GatherMerge is always sorted, so if there is GROUP BY clause,
+ * try to generate GatherMerge path for each partial path.
+ */
+ if (parse->groupClause)
+ {
+ foreach(lc, grouped_rel->partial_pathlist)
+ {
+ Path *gmpath = (Path *) lfirst(lc);
+
+ if (!pathkeys_contained_in(root->group_pathkeys, gmpath->pathkeys))
+ continue;
+
+ /* create gather merge path */
+ gmpath = (Path *) create_gather_merge_path(root,
+ grouped_rel,
+ gmpath,
+ NULL,
+ root->group_pathkeys,
+ NULL);
+
+ if (parse->hasAggs)
+ add_path(grouped_rel, (Path *)
+ create_agg_path(root,
+ grouped_rel,
+ gmpath,
+ target,
+ parse->groupClause ? AGG_SORTED : AGG_PLAIN,
+ AGGSPLIT_FINAL_DESERIAL,
+ parse->groupClause,
+ (List *) parse->havingQual,
+ &agg_final_costs,
+ dNumGroups));
+ else
+ add_path(grouped_rel, (Path *)
+ create_group_path(root,
+ grouped_rel,
+ gmpath,
+ target,
+ parse->groupClause,
+ (List *) parse->havingQual,
+ dNumGroups));
+ }
+ }
+
path = (Path *) create_gather_path(root,
grouped_rel,
path,
@@ -3864,6 +3909,12 @@ create_grouping_paths(PlannerInfo *root,
/* Now choose the best path(s) */
set_cheapest(grouped_rel);
+ /*
+ * Partial pathlist generated for grouped relation are no further useful,
+ * so just reset it with null.
+ */
+ grouped_rel->partial_pathlist = NIL;
+
return grouped_rel;
}
@@ -4160,6 +4211,36 @@ create_distinct_paths(PlannerInfo *root,
}
}
+ /*
+ * Generate GatherMerge path for each partial path.
+ */
+ foreach(lc, input_rel->partial_pathlist)
+ {
+ Path *path = (Path *) lfirst(lc);
+
+ if (!pathkeys_contained_in(needed_pathkeys, path->pathkeys))
+ {
+ path = (Path *) create_sort_path(root, distinct_rel,
+ path,
+ needed_pathkeys,
+ -1.0);
+ }
+
+ /* create gather merge path */
+ path = (Path *) create_gather_merge_path(root,
+ distinct_rel,
+ path,
+ NULL,
+ needed_pathkeys,
+ NULL);
+ add_path(distinct_rel, (Path *)
+ create_upper_unique_path(root,
+ distinct_rel,
+ path,
+ list_length(root->distinct_pathkeys),
+ numDistinctRows));
+ }
+
/* For explicit-sort case, always use the more rigorous clause */
if (list_length(root->distinct_pathkeys) <
list_length(root->sort_pathkeys))
@@ -4304,6 +4385,39 @@ create_ordered_paths(PlannerInfo *root,
ordered_rel->useridiscurrent = input_rel->useridiscurrent;
ordered_rel->fdwroutine = input_rel->fdwroutine;
+ foreach(lc, input_rel->partial_pathlist)
+ {
+ Path *path = (Path *) lfirst(lc);
+ bool is_sorted;
+
+ is_sorted = pathkeys_contained_in(root->sort_pathkeys,
+ path->pathkeys);
+ if (!is_sorted)
+ {
+ /* An explicit sort here can take advantage of LIMIT */
+ path = (Path *) create_sort_path(root,
+ ordered_rel,
+ path,
+ root->sort_pathkeys,
+ limit_tuples);
+ }
+
+ /* create gather merge path */
+ path = (Path *) create_gather_merge_path(root,
+ ordered_rel,
+ path,
+ target,
+ root->sort_pathkeys,
+ NULL);
+
+ /* Add projection step if needed */
+ if (path->pathtarget != target)
+ path = apply_projection_to_path(root, ordered_rel,
+ path, target);
+
+ add_path(ordered_rel, path);
+ }
+
foreach(lc, input_rel->pathlist)
{
Path *path = (Path *) lfirst(lc);
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index d10a983..d14db7d 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -605,6 +605,7 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
break;
case T_Gather:
+ case T_GatherMerge:
set_upper_references(root, plan, rtoffset);
break;
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 263ba45..760f519 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2682,6 +2682,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
case T_Sort:
case T_Unique:
case T_Gather:
+ case T_GatherMerge:
case T_SetOp:
case T_Group:
break;
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index abb7507..f83cd77 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1630,6 +1630,66 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
}
/*
+ * create_gather_merge_path
+ *
+ * Creates a path corresponding to a gather merge scan, returning
+ * the pathnode.
+ */
+GatherMergePath *
+create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
+ PathTarget *target, List *pathkeys,
+ Relids required_outer)
+{
+ GatherMergePath *pathnode = makeNode(GatherMergePath);
+ Cost input_startup_cost = 0;
+ Cost input_total_cost = 0;
+
+ Assert(subpath->parallel_safe);
+ Assert(pathkeys);
+
+ pathnode->path.pathtype = T_GatherMerge;
+ pathnode->path.parent = rel;
+ pathnode->path.param_info = get_baserel_parampathinfo(root, rel,
+ required_outer);
+ pathnode->path.parallel_aware = false;
+
+ pathnode->subpath = subpath;
+ pathnode->num_workers = subpath->parallel_workers;
+ pathnode->path.pathkeys = pathkeys;
+ pathnode->path.pathtarget = target ? target : rel->reltarget;
+ pathnode->path.rows += subpath->rows;
+
+ if (pathkeys_contained_in(pathkeys, subpath->pathkeys))
+ {
+ /* Subpath is adequately ordered, we won't need to sort it */
+ input_startup_cost += subpath->startup_cost;
+ input_total_cost += subpath->total_cost;
+ }
+ else
+ {
+ /* We'll need to insert a Sort node, so include cost for that */
+ Path sort_path; /* dummy for result of cost_sort */
+
+ cost_sort(&sort_path,
+ root,
+ pathkeys,
+ subpath->total_cost,
+ subpath->rows,
+ subpath->pathtarget->width,
+ 0.0,
+ work_mem,
+ 0 /* FIXME: pathnode->limit_tuples*/);
+ input_startup_cost += sort_path.startup_cost;
+ input_total_cost += sort_path.total_cost;
+ }
+
+ cost_gather_merge(pathnode, root, rel, pathnode->path.param_info,
+ input_startup_cost, input_total_cost);
+
+ return pathnode;
+}
+
+/*
* translate_sub_tlist - get subquery column numbers represented by tlist
*
* The given targetlist usually contains only Vars referencing the given relid.
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 622279b..502f17d 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -894,6 +894,15 @@ static struct config_bool ConfigureNamesBool[] =
true,
NULL, NULL, NULL
},
+ {
+ {"enable_gathermerge", PGC_USERSET, QUERY_TUNING_METHOD,
+ gettext_noop("Enables the planner's use of gather merge plans."),
+ NULL
+ },
+ &enable_gathermerge,
+ true,
+ NULL, NULL, NULL
+ },
{
{"geqo", PGC_USERSET, QUERY_TUNING_GEQO,
diff --git a/src/include/executor/nodeGatherMerge.h b/src/include/executor/nodeGatherMerge.h
new file mode 100644
index 0000000..58dcebf
--- /dev/null
+++ b/src/include/executor/nodeGatherMerge.h
@@ -0,0 +1,27 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.h
+ * prototypes for nodeGatherMerge.c
+ *
+ *
+ * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/executor/nodeGatherMerge.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef NODEGATHERMERGE_H
+#define NODEGATHERMERGE_H
+
+#include "nodes/execnodes.h"
+
+extern GatherMergeState *ExecInitGatherMerge(GatherMerge * node,
+ EState *estate,
+ int eflags);
+extern TupleTableSlot *ExecGatherMerge(GatherMergeState * node);
+extern void ExecEndGatherMerge(GatherMergeState * node);
+extern void ExecReScanGatherMerge(GatherMergeState * node);
+extern void ExecShutdownGatherMerge(GatherMergeState * node);
+
+#endif /* NODEGATHERMERGE_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 4fa3661..54d929f 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -1963,6 +1963,33 @@ typedef struct GatherState
} GatherState;
/* ----------------
+ * GatherMergeState information
+ *
+ * Gather Merge nodes launch 1 or more parallel workers, run a sorted
+ * subplan in those workers, and merge the results preserving their sort
+ * order.
+ * ----------------
+ */
+struct GMReaderTuple;
+
+typedef struct GatherMergeState
+{
+ PlanState ps; /* its first field is NodeTag */
+ bool initialized;
+ struct ParallelExecutorInfo *pei;
+ int nreaders;
+ int nworkers_launched;
+ struct TupleQueueReader **reader;
+ TupleTableSlot *funnel_slot;
+ TupleTableSlot **gm_slots;
+ struct binaryheap *gm_heap; /* binary heap of slot indices */
+ bool gm_initialized; /* gather merge initialized? */
+ bool need_to_scan_locally;
+ int gm_nkeys;
+ SortSupport gm_sortkeys; /* array of length gm_nkeys */
+ struct GMReaderTuple *gm_tuple; /* array of length nreaders + leader */
+} GatherMergeState;
+
+/* ----------------
* HashState information
* ----------------
*/
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 88297bb..edfb917 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -75,6 +75,7 @@ typedef enum NodeTag
T_WindowAgg,
T_Unique,
T_Gather,
+ T_GatherMerge,
T_Hash,
T_SetOp,
T_LockRows,
@@ -123,6 +124,7 @@ typedef enum NodeTag
T_WindowAggState,
T_UniqueState,
T_GatherState,
+ T_GatherMergeState,
T_HashState,
T_SetOpState,
T_LockRowsState,
@@ -244,6 +246,7 @@ typedef enum NodeTag
T_MaterialPath,
T_UniquePath,
T_GatherPath,
+ T_GatherMergePath,
T_ProjectionPath,
T_SortPath,
T_GroupPath,
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index e2fbc7d..ec319bf 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -765,6 +765,22 @@ typedef struct Gather
bool invisible; /* suppress EXPLAIN display (for testing)? */
} Gather;
+/* ------------
+ * gather merge node
+ * ------------
+ */
+typedef struct GatherMerge
+{
+ Plan plan;
+ int num_workers;
+ /* remaining fields are just like the sort-key info in struct Sort */
+ int numCols; /* number of sort-key columns */
+ AttrNumber *sortColIdx; /* their indexes in the target list */
+ Oid *sortOperators; /* OIDs of operators to sort them by */
+ Oid *collations; /* OIDs of collations */
+ bool *nullsFirst; /* NULLS FIRST/LAST directions */
+} GatherMerge;
+
/* ----------------
* hash build node
*
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index 3a1255a..dfaca79 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -1194,6 +1194,19 @@ typedef struct GatherPath
} GatherPath;
/*
+ * GatherMergePath runs several copies of a plan in parallel and
+ * collects the results. FIXME: comments
+ */
+typedef struct GatherMergePath
+{
+ Path path;
+ Path *subpath; /* path for each worker */
+ int num_workers; /* number of workers sought to help */
+ bool single_copy; /* path must not be executed >1x */
+} GatherMergePath;
+
+
+/*
* All join-type paths share these fields.
*/
diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h
index 2a4df2f..cd48cc4 100644
--- a/src/include/optimizer/cost.h
+++ b/src/include/optimizer/cost.h
@@ -66,6 +66,7 @@ extern bool enable_nestloop;
extern bool enable_material;
extern bool enable_mergejoin;
extern bool enable_hashjoin;
+extern bool enable_gathermerge;
extern int constraint_exclusion;
extern double clamp_row_est(double nrows);
@@ -198,5 +199,8 @@ extern Selectivity clause_selectivity(PlannerInfo *root,
int varRelid,
JoinType jointype,
SpecialJoinInfo *sjinfo);
+extern void cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+ RelOptInfo *rel, ParamPathInfo *param_info,
+ Cost input_startup_cost, Cost input_total_cost);
#endif /* COST_H */
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index 71d9154..3dbe9fc 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -267,5 +267,10 @@ extern ParamPathInfo *get_joinrel_parampathinfo(PlannerInfo *root,
List **restrict_clauses);
extern ParamPathInfo *get_appendrel_parampathinfo(RelOptInfo *appendrel,
Relids required_outer);
+extern GatherMergePath *create_gather_merge_path(PlannerInfo *root,
+ RelOptInfo *rel, Path *subpath,
+ PathTarget *target,
+ List *pathkeys,
+ Relids required_outer);
#endif /* PATHNODE_H */
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
index f06cfa4..5c547e2 100644
--- a/src/test/regress/expected/rangefuncs.out
+++ b/src/test/regress/expected/rangefuncs.out
@@ -2,6 +2,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
name | setting
----------------------+---------
enable_bitmapscan | on
+ enable_gathermerge | on
enable_hashagg | on
enable_hashjoin | on
enable_indexonlyscan | on
@@ -12,7 +13,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
enable_seqscan | on
enable_sort | on
enable_tidscan | on
-(11 rows)
+(12 rows)
CREATE TABLE foo2(fooid int, f2 int);
INSERT INTO foo2 VALUES(1, 11);
On Wed, Oct 5, 2016 at 11:35 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:
Hi hackers,
Attached is the patch to implement Gather Merge.
Couple of review comments:
1.
ExecGatherMerge()
{
..
+ /* No workers? Then never mind. */
+ if (!got_any_worker ||
+     node->nreaders < 2)
+ {
+     ExecShutdownGatherMergeWorkers(node);
+     node->nreaders = 0;
+ }
}

Are you planning to restrict the use of gather merge based on the number
of workers? If there is a valid reason, then I think the comments should
be updated for the same.
2.
+ExecGatherMerge(GatherMergeState * node)
{
..
+ if (!node->initialized)
+ {
+     EState *estate = node->ps.state;
+     GatherMerge *gm = (GatherMerge *) node->ps.plan;
+
+     /*
+      * Sometimes we might have to run without parallelism; but if parallel
+      * mode is active then we can try to fire up some workers.
+      */
+     if (gm->num_workers > 0 && IsInParallelMode())
+     {
+         ParallelContext *pcxt;
+         bool got_any_worker = false;
+
+         /* Initialize the workers required to execute Gather node. */
+         if (!node->pei)
+             node->pei = ExecInitParallelPlan(node->ps.lefttree,
+                                              estate,
+                                              gm->num_workers);
..
}

There is a lot of common code between ExecGatherMerge and ExecGather.
Do you think it makes sense to have a common function to avoid the
duplicity?

I see there are small discrepancies between the two, like I don't see
the use of the single_copy flag, as it is present in the Gather node.
3.
+gather_merge_readnext(GatherMergeState * gm_state, int reader, bool force)
{
..
+ tup = gm_readnext_tuple(gm_state, reader, force, NULL);
+
+ /*
+
* try to read more tuple into nowait mode and store it into the tuple
+ * array.
+
*/
+ if (HeapTupleIsValid(tup))
+ fill_tuple_array(gm_state, reader);
How is the tuple read above stored in the array? In any case, the
interface seems slightly awkward to me. Basically, I think what you are
trying to do here is: after reading the first tuple in wait mode, you fill
the array by reading more tuples. So, can't we push the reading of this
first tuple into that function and name it form_tuple_array()?
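Something like this untested sketch is what I have in mind
(form_tuple_array is a hypothetical name; GMReaderTuple, MAX_TUPLE_STORE
and gm_readnext_tuple are from your patch):

/*
 * Read the first tuple for 'reader' (waiting for it if 'force' is set),
 * then opportunistically fill the tuple array with more tuples read in
 * no-wait mode.
 */
static HeapTuple
form_tuple_array(GatherMergeState *gm_state, int reader, bool force)
{
	GMReaderTuple *gm_tuple = &gm_state->gm_tuple[reader];
	HeapTuple	tup;

	tup = gm_readnext_tuple(gm_state, reader, force, &gm_tuple->done);

	/* Got one?  Then try to buffer a few more without blocking. */
	if (HeapTupleIsValid(tup))
	{
		while (gm_tuple->nTuples < MAX_TUPLE_STORE)
		{
			HeapTuple	next = gm_readnext_tuple(gm_state, reader,
												 false, &gm_tuple->done);

			if (!HeapTupleIsValid(next))
				break;
			gm_tuple->tuple[gm_tuple->nTuples++] = next;
		}
	}

	return tup;
}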
4.
+create_gather_merge_path(..)
{
..
+ 0 /* FIXME: pathnode->limit_tuples*/);
What exactly do you want to fix in the above code?
5.
+/* Tuple array size */
+#define MAX_TUPLE_STORE 10
Have you tried other values of MAX_TUPLE_STORE? If yes, what were the
results? I think it is better to add a comment explaining why this array
size is best for performance.
6.
+/* INTERFACE ROUTINES
+ *		ExecInitGatherMerge		- initialize the MergeAppend node
+ *		ExecGatherMerge			- retrieve the next tuple from the node
+ *		ExecEndGatherMerge		- shut down the MergeAppend node
+ *		ExecReScanGatherMerge	- rescan the MergeAppend node

typo. /MergeAppend/GatherMerge
7.
+static TupleTableSlot *gather_merge_getnext(GatherMergeState * gm_state);
+static HeapTuple gm_readnext_tuple(GatherMergeState * gm_state, int nreader,
+				   bool force, bool *done);
The formatting near GatherMergeState doesn't seem appropriate. I think
you need to add GatherMergeState to typedefs.list and then run pgindent
again.
8.
+	/*
+	 * Initialize funnel slot to same tuple descriptor as outer plan.
+	 */
+	if (!ExecContextForcesOids(&gm_state->ps, &hasoid))
I think in the above comment you mean the GatherMerge slot.
9.
+ /* Does tuple array have any avaiable tuples? */
/avaiable/available
Open Issue:
- Commit af33039317ddc4a0e38a02e2255c2bf453115fd2 fixed the leak in
tqueue.c by calling gather_readnext() in a per-tuple context. For gather
merge that is not possible, as we store tuples into the tuple array and
want a tuple to be freed only once it has passed through the merge sort
algorithm. One idea is to also call gm_readnext_tuple() under the
per-tuple context (which will fix the leak in tqueue.c) and then store a
copy of the tuple into the tuple array.
Won't an extra copy per tuple impact performance? Was the fix in the
mentioned commit for record or composite types? If so, does GatherMerge
support such types, and if it does, does it provide any benefit over
Gather?
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
On Mon, Oct 17, 2016 at 4:56 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
+ node->nreaders < 2)
...
I see there are small discrepancies between the two, e.g. I don't see
the use of the single_copy flag, which is present in the Gather node.
single_copy doesn't make sense for GatherMerge, because the whole
point is to merge a bunch of individually-sorted output streams into a
single stream. If you have only one stream of tuples, you don't need
to merge anything: you could have just used Gather for that.
It does, however, make sense to merge one worker's output with the
leader's output.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Thanks, Amit, for reviewing this patch.
On Mon, Oct 17, 2016 at 2:26 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:
On Wed, Oct 5, 2016 at 11:35 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

Hi hackers,
Attached is the patch to implement Gather Merge.
Couple of review comments:
1.
ExecGatherMerge()
{
..
+	/* No workers? Then never mind. */
+	if (!got_any_worker || node->nreaders < 2)
+	{
+		ExecShutdownGatherMergeWorkers(node);
+		node->nreaders = 0;
+	}
}

Are you planning to restrict the use of gather merge based on the number
of workers? If there is a valid reason, then I think the comments should
be updated accordingly.
Thanks for catching this. This is left over from the earlier design of
the patch. With the current design we don't have any limitation on the
number of workers. I did performance testing with
max_parallel_workers_per_gather set to 1 and didn't notice any performance
degradation, so I removed this limitation in the attached patch.
2.
+ExecGatherMerge(GatherMergeState * node)
{
..
+	if (!node->initialized)
+	{
+		EState	   *estate = node->ps.state;
+		GatherMerge *gm = (GatherMerge *) node->ps.plan;
+
+		/*
+		 * Sometimes we might have to run without parallelism; but if parallel
+		 * mode is active then we can try to fire up some workers.
+		 */
+		if (gm->num_workers > 0 && IsInParallelMode())
+		{
+			ParallelContext *pcxt;
+			bool		got_any_worker = false;
+
+			/* Initialize the workers required to execute Gather node. */
+			if (!node->pei)
+				node->pei = ExecInitParallelPlan(node->ps.lefttree,
+												 estate,
+												 gm->num_workers);
..
}

There is a lot of common code between ExecGatherMerge and ExecGather.
Do you think it makes sense to have a common function to avoid the
duplication? I see there are small discrepancies between the two, e.g. I
don't see the use of the single_copy flag, which is present in the Gather
node.
Yes, I too thought about centralizing some things between ExecGather and
ExecGatherMerge, but that code is really not something that is fixed yet,
and I thought it might change, particularly for Gather Merge. And as
explained by Robert, single_copy doesn't make sense for Gather Merge. I
will still look into this to see if something can be centralized.
3.
+gather_merge_readnext(GatherMergeState * gm_state, int reader, bool force)
{
..
+	tup = gm_readnext_tuple(gm_state, reader, force, NULL);
+
+	/*
+	 * try to read more tuple into nowait mode and store it into the tuple
+	 * array.
+	 */
+	if (HeapTupleIsValid(tup))
+		fill_tuple_array(gm_state, reader);

How is the tuple read above stored in the array? In any case, the
interface seems slightly awkward to me. Basically, I think what you are
trying to do here is: after reading the first tuple in wait mode, you fill
the array by reading more tuples. So, can't we push the reading of this
first tuple into that function and name it form_tuple_array()?
Yes, you are right. First it tries to read a tuple in wait mode, and once
it finds a tuple it tries to fill the tuple array (which basically reads
tuples in no-wait mode). The reason I keep them separate is that while
initializing the gather merge, if we are unable to read a tuple from all
the workers, then while re-reading we again try to fill the tuple array
for any worker that has already produced at least a single tuple (see
gather_merge_init() for more details). Also, I thought filling the tuple
array (which basically reads tuples in no-wait mode) and reading a tuple
(in wait mode) are two separate tasks, and keeping them in separate
functions makes the code clearer. If you have any suggestion for the
function name (fill_tuple_array), I am open to changing it.
4.
+create_gather_merge_path(..)
{
..
+	0 /* FIXME: pathnode->limit_tuples*/);

What exactly do you want to fix in the above code?
Fixed.
5.
+/* Tuple array size */
+#define MAX_TUPLE_STORE 10

Have you tried other values of MAX_TUPLE_STORE? If yes, what were the
results? I think it is better to add a comment explaining why this array
size is best for performance.
Actually I was thinking about that, but I didn't want to add it there
because it's just a performance number from my machine. Anyway, I added
the comment.
6.
+/* INTERFACE ROUTINES
+ *		ExecInitGatherMerge		- initialize the MergeAppend node
+ *		ExecGatherMerge			- retrieve the next tuple from the node
+ *		ExecEndGatherMerge		- shut down the MergeAppend node
+ *		ExecReScanGatherMerge	- rescan the MergeAppend node

typo. /MergeAppend/GatherMerge
Fixed.
7.
+static TupleTableSlot *gather_merge_getnext(GatherMergeState * gm_state);
+static HeapTuple gm_readnext_tuple(GatherMergeState * gm_state, int nreader,
+				   bool force, bool *done);

The formatting near GatherMergeState doesn't seem appropriate. I think you
need to add GatherMergeState to typedefs.list and then run pgindent again.
Fixed.
8.
+	/*
+	 * Initialize funnel slot to same tuple descriptor as outer plan.
+	 */
+	if (!ExecContextForcesOids(&gm_state->ps, &hasoid))

I think in the above comment you mean the GatherMerge slot.
No, it has to be the funnel slot; it's just a placeholder. For Gather
Merge we initialize one slot per worker, and that is done in
gather_merge_init(). I will look into this point to see if I can get rid
of the funnel slot completely.
9.
+ /* Does tuple array have any avaiable tuples? */
/avaiable/available
Fixed.
Open Issue:

- Commit af33039317ddc4a0e38a02e2255c2bf453115fd2 fixed the leak in
tqueue.c by calling gather_readnext() in a per-tuple context. For gather
merge that is not possible, as we store tuples into the tuple array and
want a tuple to be freed only once it has passed through the merge sort
algorithm. One idea is to also call gm_readnext_tuple() under the
per-tuple context (which will fix the leak in tqueue.c) and then store a
copy of the tuple into the tuple array.

Won't an extra copy per tuple impact performance? Was the fix in the
mentioned commit for record or composite types? If so, does GatherMerge
support such types, and if it does, does it provide any benefit over
Gather?
I don't think it was specifically for record or composite types, but I
might be wrong. As per my understanding, the commit fixed a leak in
tqueue.c. The fix was to establish the convention of calling
TupleQueueReaderNext() with a shorter-lived memory context, so that
tqueue.c doesn't leak memory.
My idea to fix this is to call TupleQueueReaderNext() in the per-tuple
context, then copy the tuple and store the copy into the tuple array;
later, the next run of ExecStoreTuple() will free the earlier tuple. I
will work on that.
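Roughly like this (an untested sketch; gm_read_and_copy_tuple is a
hypothetical wrapper around the patch's gm_readnext_tuple, and the
per-tuple context handling mirrors what gather_readnext() does after the
commit mentioned above):

static HeapTuple
gm_read_and_copy_tuple(GatherMergeState *gm_state, int reader,
					   bool force, bool *done)
{
	ExprContext *econtext = gm_state->ps.ps_ExprContext;
	MemoryContext oldContext;
	HeapTuple	tup;

	/* Read the tuple in the short-lived per-tuple context ... */
	oldContext = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);
	tup = gm_readnext_tuple(gm_state, reader, force, done);
	MemoryContextSwitchTo(oldContext);

	/*
	 * ... and copy it into the caller's longer-lived context, so whatever
	 * tqueue.c allocated while reading the tuple is freed at the next
	 * ResetExprContext(), while the copy survives in the tuple array until
	 * the merge consumes it.
	 */
	if (HeapTupleIsValid(tup))
		tup = heap_copytuple(tup);

	return tup;
}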
--
Rushabh Lathia
Attachments:
gather_merge_v2.patch (application/x-download)
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 1247433..cb0299a 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -881,6 +881,9 @@ ExplainNode(PlanState *planstate, List *ancestors,
case T_Gather:
pname = sname = "Gather";
break;
+ case T_GatherMerge:
+ pname = sname = "Gather Merge";
+ break;
case T_IndexScan:
pname = sname = "Index Scan";
break;
@@ -1370,6 +1373,26 @@ ExplainNode(PlanState *planstate, List *ancestors,
ExplainPropertyBool("Single Copy", gather->single_copy, es);
}
break;
+ case T_GatherMerge:
+ {
+ GatherMerge *gm = (GatherMerge *) plan;
+
+ show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ if (plan->qual)
+ show_instrumentation_count("Rows Removed by Filter", 1,
+ planstate, es);
+ ExplainPropertyInteger("Workers Planned",
+ gm->num_workers, es);
+ if (es->analyze)
+ {
+ int nworkers;
+
+ nworkers = ((GatherMergeState *) planstate)->nworkers_launched;
+ ExplainPropertyInteger("Workers Launched",
+ nworkers, es);
+ }
+ }
+ break;
case T_FunctionScan:
if (es->verbose)
{
diff --git a/src/backend/executor/Makefile b/src/backend/executor/Makefile
index 51edd4c..7e2f4e2 100644
--- a/src/backend/executor/Makefile
+++ b/src/backend/executor/Makefile
@@ -19,7 +19,7 @@ OBJS = execAmi.o execCurrent.o execGrouping.o execIndexing.o execJunk.o \
nodeBitmapAnd.o nodeBitmapOr.o \
nodeBitmapHeapscan.o nodeBitmapIndexscan.o nodeCustom.o nodeGather.o \
nodeHash.o nodeHashjoin.o nodeIndexscan.o nodeIndexonlyscan.o \
- nodeLimit.o nodeLockRows.o \
+ nodeLimit.o nodeLockRows.o nodeGatherMerge.o \
nodeMaterial.o nodeMergeAppend.o nodeMergejoin.o nodeModifyTable.o \
nodeNestloop.o nodeFunctionscan.o nodeRecursiveunion.o nodeResult.o \
nodeSamplescan.o nodeSeqscan.o nodeSetOp.o nodeSort.o nodeUnique.o \
diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index 554244f..45b36af 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -101,6 +101,7 @@
#include "executor/nodeModifyTable.h"
#include "executor/nodeNestloop.h"
#include "executor/nodeGather.h"
+#include "executor/nodeGatherMerge.h"
#include "executor/nodeRecursiveunion.h"
#include "executor/nodeResult.h"
#include "executor/nodeSamplescan.h"
@@ -314,6 +315,11 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
estate, eflags);
break;
+ case T_GatherMerge:
+ result = (PlanState *) ExecInitGatherMerge((GatherMerge *) node,
+ estate, eflags);
+ break;
+
case T_Hash:
result = (PlanState *) ExecInitHash((Hash *) node,
estate, eflags);
@@ -515,6 +521,10 @@ ExecProcNode(PlanState *node)
result = ExecGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ result = ExecGatherMerge((GatherMergeState *) node);
+ break;
+
case T_HashState:
result = ExecHash((HashState *) node);
break;
@@ -673,6 +683,10 @@ ExecEndNode(PlanState *node)
ExecEndGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ ExecEndGatherMerge((GatherMergeState *) node);
+ break;
+
case T_IndexScanState:
ExecEndIndexScan((IndexScanState *) node);
break;
@@ -806,6 +820,9 @@ ExecShutdownNode(PlanState *node)
case T_GatherState:
ExecShutdownGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ ExecShutdownGatherMerge((GatherMergeState *) node);
+ break;
default:
break;
}
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
new file mode 100644
index 0000000..cbd1dd2
--- /dev/null
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -0,0 +1,693 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.c
+ * routines to handle GatherMerge nodes.
+ *
+ * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/backend/executor/nodeGatherMerge.c
+ *
+ *-------------------------------------------------------------------------
+ */
+/* INTERFACE ROUTINES
+ * ExecInitGatherMerge - initialize the GatherMerge node
+ * ExecGatherMerge - retrieve the next tuple from the node
+ * ExecEndGatherMerge - shut down the GatherMerge node
+ * ExecReScanGatherMerge - rescan the GatherMerge node
+ */
+
+#include "postgres.h"
+
+#include "access/relscan.h"
+#include "access/xact.h"
+#include "executor/execdebug.h"
+#include "executor/execParallel.h"
+#include "executor/nodeGatherMerge.h"
+#include "executor/nodeSubplan.h"
+#include "executor/tqueue.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+#include "utils/rel.h"
+#include "lib/binaryheap.h"
+
+/*
+ * Tuple array for each worker
+ */
+typedef struct GMReaderTuple
+{
+ HeapTuple *tuple;
+ int readCounter;
+ int nTuples;
+ bool done;
+} GMReaderTuple;
+
+/*
+ * Tuple array size. Performance testing showed that the benefit of an
+ * array size greater than 10 is not worth the additional memory consumed
+ * by the tuple array.
+ */
+#define MAX_TUPLE_STORE 10
+
+static int32 heap_compare_slots(Datum a, Datum b, void *arg);
+static TupleTableSlot *gather_merge_getnext(GatherMergeState *gm_state);
+static HeapTuple gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool force, bool *done);
+static void gather_merge_init(GatherMergeState *gm_state);
+static void ExecShutdownGatherMergeWorkers(GatherMergeState *node);
+static bool gather_merge_readnext(GatherMergeState *gm_state, int reader, bool force);
+static void fill_tuple_array(GatherMergeState *gm_state, int reader);
+
+/* ----------------------------------------------------------------
+ *		ExecInitGatherMerge
+ * ----------------------------------------------------------------
+ */
+GatherMergeState *
+ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags)
+{
+ GatherMergeState *gm_state;
+ Plan *outerNode;
+ bool hasoid;
+ TupleDesc tupDesc;
+
+	/* A Gather Merge node doesn't have an innerPlan node. */
+ Assert(innerPlan(node) == NULL);
+
+ /*
+ * create state structure
+ */
+ gm_state = makeNode(GatherMergeState);
+ gm_state->ps.plan = (Plan *) node;
+ gm_state->ps.state = estate;
+
+ /*
+ * Miscellaneous initialization
+ *
+ * create expression context for node
+ */
+ ExecAssignExprContext(estate, &gm_state->ps);
+
+ /*
+ * initialize child expressions
+ */
+ gm_state->ps.targetlist = (List *)
+ ExecInitExpr((Expr *) node->plan.targetlist,
+ (PlanState *) gm_state);
+ gm_state->ps.qual = (List *)
+ ExecInitExpr((Expr *) node->plan.qual,
+ (PlanState *) gm_state);
+
+ /*
+ * tuple table initialization
+ */
+ gm_state->funnel_slot = ExecInitExtraTupleSlot(estate);
+ ExecInitResultTupleSlot(estate, &gm_state->ps);
+
+ /*
+ * now initialize outer plan
+ */
+ outerNode = outerPlan(node);
+ outerPlanState(gm_state) = ExecInitNode(outerNode, estate, eflags);
+
+ gm_state->ps.ps_TupFromTlist = false;
+
+ /*
+ * Initialize result tuple type and projection info.
+ */
+ ExecAssignResultTypeFromTL(&gm_state->ps);
+ ExecAssignProjectionInfo(&gm_state->ps, NULL);
+
+ gm_state->gm_initialized = false;
+
+ /*
+ * initialize sort-key information
+ */
+ if (node->numCols)
+ {
+ int i;
+
+ gm_state->gm_nkeys = node->numCols;
+ gm_state->gm_sortkeys = palloc0(sizeof(SortSupportData) * node->numCols);
+ for (i = 0; i < node->numCols; i++)
+ {
+ SortSupport sortKey = gm_state->gm_sortkeys + i;
+
+ sortKey->ssup_cxt = CurrentMemoryContext;
+ sortKey->ssup_collation = node->collations[i];
+ sortKey->ssup_nulls_first = node->nullsFirst[i];
+ sortKey->ssup_attno = node->sortColIdx[i];
+
+ /*
+ * We don't perform abbreviated key conversion here, for the same
+ * reasons that it isn't used in MergeAppend
+ */
+ sortKey->abbreviate = false;
+
+ PrepareSortSupportFromOrderingOp(node->sortOperators[i], sortKey);
+ }
+ }
+
+ /*
+ * Initialize funnel slot to same tuple descriptor as outer plan.
+ */
+ if (!ExecContextForcesOids(&gm_state->ps, &hasoid))
+ hasoid = false;
+ tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid);
+ ExecSetSlotDescriptor(gm_state->funnel_slot, tupDesc);
+
+ return gm_state;
+}
+
+/* ----------------------------------------------------------------
+ * ExecGatherMerge(node)
+ *
+ * Scans the relation via multiple workers and returns
+ * the next qualifying tuple.
+ * ----------------------------------------------------------------
+ */
+TupleTableSlot *
+ExecGatherMerge(GatherMergeState *node)
+{
+ TupleTableSlot *fslot = node->funnel_slot;
+ int i;
+ TupleTableSlot *slot;
+ TupleTableSlot *resultSlot;
+ ExprDoneCond isDone;
+ ExprContext *econtext;
+
+ /*
+ * Initialize the parallel context and workers on first execution. We do
+ * this on first execution rather than during node initialization, as it
+	 * needs to allocate a large dynamic segment, so it is better to do it
+	 * only if it is really needed.
+ */
+ if (!node->initialized)
+ {
+ EState *estate = node->ps.state;
+ GatherMerge *gm = (GatherMerge *) node->ps.plan;
+
+ /*
+ * Sometimes we might have to run without parallelism; but if parallel
+ * mode is active then we can try to fire up some workers.
+ */
+ if (gm->num_workers > 0 && IsInParallelMode())
+ {
+ ParallelContext *pcxt;
+ bool got_any_worker = false;
+
+ /* Initialize the workers required to execute Gather node. */
+ if (!node->pei)
+ node->pei = ExecInitParallelPlan(node->ps.lefttree,
+ estate,
+ gm->num_workers);
+
+ /*
+ * Register backend workers. We might not get as many as we
+ * requested, or indeed any at all.
+ */
+ pcxt = node->pei->pcxt;
+ LaunchParallelWorkers(pcxt);
+ node->nworkers_launched = pcxt->nworkers_launched;
+
+ /* Set up tuple queue readers to read the results. */
+ if (pcxt->nworkers_launched > 0)
+ {
+ node->nreaders = 0;
+ node->reader =
+ palloc(pcxt->nworkers_launched * sizeof(TupleQueueReader *));
+
+ Assert(gm->numCols);
+
+ for (i = 0; i < pcxt->nworkers_launched; ++i)
+ {
+ if (pcxt->worker[i].bgwhandle == NULL)
+ continue;
+
+ shm_mq_set_handle(node->pei->tqueue[i],
+ pcxt->worker[i].bgwhandle);
+ node->reader[node->nreaders] =
+ CreateTupleQueueReader(node->pei->tqueue[i],
+ fslot->tts_tupleDescriptor);
+ node->nreaders++;
+ got_any_worker = true;
+ }
+ }
+
+ /* No workers? Then never mind. */
+ if (!got_any_worker)
+ ExecShutdownGatherMergeWorkers(node);
+ }
+
+	/* Always allow the leader to participate in the gather merge. */
+ node->need_to_scan_locally = true;
+ node->initialized = true;
+ }
+
+ /*
+ * Check to see if we're still projecting out tuples from a previous scan
+ * tuple (because there is a function-returning-set in the projection
+ * expressions). If so, try to project another one.
+ */
+ if (node->ps.ps_TupFromTlist)
+ {
+ resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+ if (isDone == ExprMultipleResult)
+ return resultSlot;
+ /* Done with that source tuple... */
+ node->ps.ps_TupFromTlist = false;
+ }
+
+ /*
+ * Reset per-tuple memory context to free any expression evaluation
+ * storage allocated in the previous tuple cycle. Note we can't do this
+ * until we're done projecting.
+ */
+ econtext = node->ps.ps_ExprContext;
+ ResetExprContext(econtext);
+
+ /* Get and return the next tuple, projecting if necessary. */
+ for (;;)
+ {
+ /*
+ * Get next tuple, either from one of our workers, or by running the
+ * plan ourselves.
+ */
+ slot = gather_merge_getnext(node);
+ if (TupIsNull(slot))
+ return NULL;
+
+ /*
+ * form the result tuple using ExecProject(), and return it --- unless
+ * the projection produces an empty set, in which case we must loop
+ * back around for another tuple
+ */
+ econtext->ecxt_outertuple = slot;
+ resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+
+ if (isDone != ExprEndResult)
+ {
+ node->ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
+ return resultSlot;
+ }
+ }
+
+ return slot;
+}
+
+/* ----------------------------------------------------------------
+ * ExecEndGatherMerge
+ *
+ * frees any storage allocated through C routines.
+ * ----------------------------------------------------------------
+ */
+void
+ExecEndGatherMerge(GatherMergeState *node)
+{
+ ExecShutdownGatherMerge(node);
+ ExecFreeExprContext(&node->ps);
+ ExecClearTuple(node->ps.ps_ResultTupleSlot);
+ ExecEndNode(outerPlanState(node));
+}
+
+/* ----------------------------------------------------------------
+ * ExecShutdownGatherMerge
+ *
+ * Destroy the setup for parallel workers including parallel context.
+ * Collect all the stats after workers are stopped, else some work
+ * done by workers won't be accounted.
+ * ----------------------------------------------------------------
+ */
+void
+ExecShutdownGatherMerge(GatherMergeState *node)
+{
+ ExecShutdownGatherMergeWorkers(node);
+
+ /* Now destroy the parallel context. */
+ if (node->pei != NULL)
+ {
+ ExecParallelCleanup(node->pei);
+ node->pei = NULL;
+ }
+}
+
+/* ----------------------------------------------------------------
+ * ExecReScanGatherMerge
+ *
+ * Re-initialize the workers and rescans a relation via them.
+ * ----------------------------------------------------------------
+ */
+void
+ExecReScanGatherMerge(GatherMergeState *node)
+{
+ /*
+ * Re-initialize the parallel workers to perform rescan of relation. We
+ * want to gracefully shutdown all the workers so that they should be able
+ * to propagate any error or other information to master backend before
+ * dying. Parallel context will be reused for rescan.
+ */
+ ExecShutdownGatherMergeWorkers(node);
+
+ node->initialized = false;
+
+ if (node->pei)
+ ExecParallelReinitialize(node->pei);
+
+ ExecReScan(node->ps.lefttree);
+}
+
+/* ----------------------------------------------------------------
+ * ExecShutdownGatherMergeWorkers
+ *
+ * Destroy the parallel workers. Collect all the stats after
+ * workers are stopped, else some work done by workers won't be
+ * accounted.
+ * ----------------------------------------------------------------
+ */
+static void
+ExecShutdownGatherMergeWorkers(GatherMergeState *node)
+{
+ /* Shut down tuple queue readers before shutting down workers. */
+ if (node->reader != NULL)
+ {
+ int i;
+
+ for (i = 0; i < node->nreaders; ++i)
+ if (node->reader[i])
+ DestroyTupleQueueReader(node->reader[i]);
+
+ pfree(node->reader);
+ node->reader = NULL;
+ }
+
+ /* Now shut down the workers. */
+ if (node->pei != NULL)
+ ExecParallelFinish(node->pei);
+}
+
+/*
+ * Initialize the Gather Merge tuple reads.
+ *
+ * Pull at least a single tuple from each worker + leader and set up the heap.
+ */
+static void
+gather_merge_init(GatherMergeState *gm_state)
+{
+ TupleTableSlot *fslot = gm_state->funnel_slot;
+ int nreaders = gm_state->nreaders;
+ bool initialize = true;
+ int i;
+
+ /*
+	 * Allocate gm_slots: one slot per worker, plus one more for the leader.
+	 * The last slot is always for the leader. The leader always calls
+	 * ExecProcNode() to read a tuple, which returns a TupleTableSlot that
+	 * gets assigned directly to the corresponding gm_slot, so just
+	 * initialize the leader's gm_slot to NULL. For the other slots, the
+	 * code below calls ExecInitExtraTupleSlot() to initialize the worker
+	 * slots.
+ */
+ gm_state->gm_slots =
+ palloc((gm_state->nreaders + 1) * sizeof(TupleTableSlot *));
+ gm_state->gm_slots[gm_state->nreaders] = NULL;
+
+ /* Initialize the tuple slot and tuple array for each worker */
+ gm_state->gm_tuple = (GMReaderTuple *) palloc0(sizeof(GMReaderTuple) * (gm_state->nreaders));
+ for (i = 0; i < gm_state->nreaders; i++)
+ {
+ /* Allocate the tuple array with MAX_TUPLE_STORE size */
+ gm_state->gm_tuple[i].tuple = (HeapTuple *) palloc0(sizeof(HeapTuple) * MAX_TUPLE_STORE);
+
+ /* Initialize slot for worker */
+ gm_state->gm_slots[i] = ExecInitExtraTupleSlot(gm_state->ps.state);
+ ExecSetSlotDescriptor(gm_state->gm_slots[i],
+ fslot->tts_tupleDescriptor);
+ }
+
+ /* Allocate the resources for the sort */
+ gm_state->gm_heap = binaryheap_allocate(gm_state->nreaders + 1, heap_compare_slots, gm_state);
+
+ /*
+	 * First try to read a tuple from each worker (including the leader) in
+	 * no-wait mode, so that we initialize the read from each participant.
+	 * After this, if any worker was unable to produce a tuple, re-read, this
+	 * time in wait mode. For a worker that already produced a tuple in the
+	 * earlier loop, just fill its tuple array if more tuples are available.
+ */
+reread:
+ for (i = 0; i < nreaders + 1; i++)
+ {
+ if (TupIsNull(gm_state->gm_slots[i]) ||
+ gm_state->gm_slots[i]->tts_isempty)
+ {
+ if (gather_merge_readnext(gm_state, i, initialize ? false : true))
+ {
+ binaryheap_add_unordered(gm_state->gm_heap,
+ Int32GetDatum(i));
+ }
+ }
+ else
+ fill_tuple_array(gm_state, i);
+ }
+ initialize = false;
+
+ for (i = 0; i < nreaders; i++)
+ if (TupIsNull(gm_state->gm_slots[i]) || gm_state->gm_slots[i]->tts_isempty)
+ goto reread;
+
+ binaryheap_build(gm_state->gm_heap);
+ gm_state->gm_initialized = true;
+}
+
+/*
+ * Read the next tuple for gather merge.
+ *
+ * This fetches the next tuple in sort order from the heap.
+ */
+static TupleTableSlot *
+gather_merge_getnext(GatherMergeState *gm_state)
+{
+ TupleTableSlot *fslot = gm_state->funnel_slot;
+ int i;
+
+ /*
+	 * First time through: pull the first tuple from each participant, and set
+ * up the heap.
+ */
+ if (gm_state->gm_initialized == false)
+ gather_merge_init(gm_state);
+ else
+ {
+ /*
+		 * Otherwise, pull the next tuple from whichever participant we
+ * returned from last time, and reinsert the index into the heap,
+ * because it might now compare differently against the existing
+ * elements of the heap.
+ */
+ i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+
+ if (gather_merge_readnext(gm_state, i, true))
+ binaryheap_replace_first(gm_state->gm_heap, Int32GetDatum(i));
+ else
+ (void) binaryheap_remove_first(gm_state->gm_heap);
+ }
+
+ if (binaryheap_empty(gm_state->gm_heap))
+ {
+ /* All the queues are exhausted, and so is the heap */
+ return ExecClearTuple(fslot);
+ }
+ else
+ {
+ i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+ return gm_state->gm_slots[i];
+ }
+
+ return ExecClearTuple(fslot);
+}
+
+/*
+ * Read tuples for the given reader in no-wait mode, filling the tuple array.
+ */
+static void
+fill_tuple_array(GatherMergeState *gm_state, int reader)
+{
+ GMReaderTuple *gm_tuple = &gm_state->gm_tuple[reader];
+ int i;
+
+ /* Last slot is for leader and we don't build tuple array for leader */
+ if (reader == gm_state->nreaders)
+ return;
+
+ /*
+	 * We are here because all the tuples in the tuple array have been read,
+	 * so reset the counters to zero.
+ */
+ if (gm_tuple->nTuples == gm_tuple->readCounter)
+ gm_tuple->nTuples = gm_tuple->readCounter = 0;
+
+ /* Tuple array is already full? */
+ if (gm_tuple->nTuples == MAX_TUPLE_STORE)
+ return;
+
+ for (i = gm_tuple->nTuples; i < MAX_TUPLE_STORE; i++)
+ {
+ gm_tuple->tuple[i] = gm_readnext_tuple(gm_state,
+ reader,
+ false,
+ &gm_tuple->done);
+ if (!HeapTupleIsValid(gm_tuple->tuple[i]))
+ break;
+ gm_tuple->nTuples++;
+ }
+}
+
+/*
+ * Attempt to read a tuple for the given reader and store it in the reader's
+ * tuple slot.
+ *
+ * If the worker's tuple array contains any tuples, just take one from the
+ * array. Otherwise read a tuple from the queue and also attempt to fill the
+ * tuple array.
+ *
+ * When force is true, the tuple is read in wait mode. For gather merge we
+ * need to refill the slot from which we returned the previous tuple, which
+ * requires the tuple to be read in wait mode. During the initialization
+ * phase we first try to read tuples in no-wait mode, as we want to
+ * initialize all the readers. See gather_merge_init() for more details.
+ *
+ * Returns true if a tuple was found for the reader, otherwise false.
+ */
+static bool
+gather_merge_readnext(GatherMergeState *gm_state, int reader, bool force)
+{
+ HeapTuple tup = NULL;
+
+	/* Are we here for the leader? */
+ if (gm_state->nreaders == reader)
+ {
+ if (gm_state->need_to_scan_locally)
+ {
+ PlanState *outerPlan = outerPlanState(gm_state);
+ TupleTableSlot *outerTupleSlot;
+
+ outerTupleSlot = ExecProcNode(outerPlan);
+
+ if (!TupIsNull(outerTupleSlot))
+ {
+ gm_state->gm_slots[reader] = outerTupleSlot;
+ return true;
+ }
+ gm_state->need_to_scan_locally = false;
+ }
+ return false;
+ }
+ /* Does tuple array have any available tuples? */
+ else if (gm_state->gm_tuple[reader].nTuples >
+ gm_state->gm_tuple[reader].readCounter)
+ {
+ GMReaderTuple *gm_tuple = &gm_state->gm_tuple[reader];
+
+ tup = gm_tuple->tuple[gm_tuple->readCounter++];
+ }
+ /* reader exhausted? */
+ else if (gm_state->gm_tuple[reader].done)
+ {
+ DestroyTupleQueueReader(gm_state->reader[reader]);
+ gm_state->reader[reader] = NULL;
+ return false;
+ }
+ else
+ {
+ tup = gm_readnext_tuple(gm_state, reader, force, NULL);
+
+ /*
+ * try to read more tuple into nowait mode and store it into the tuple
+ * array.
+ */
+ if (HeapTupleIsValid(tup))
+ fill_tuple_array(gm_state, reader);
+ else
+ return false;
+ }
+
+ Assert(HeapTupleIsValid(tup));
+
+ /* Build the TupleTableSlot for the given tuple */
+ ExecStoreTuple(tup, /* tuple to store */
+ gm_state->gm_slots[reader], /* slot in which to store the
+ * tuple */
+ InvalidBuffer, /* buffer associated with this tuple */
+ true); /* pfree this pointer if not from heap */
+
+ return true;
+}
+
+/*
+ * Attempt to read a tuple from given reader.
+ */
+static HeapTuple
+gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool force, bool *done)
+{
+ TupleQueueReader *reader;
+ HeapTuple tup = NULL;
+
+ if (done != NULL)
+ *done = false;
+
+ /* Check for async events, particularly messages from workers. */
+ CHECK_FOR_INTERRUPTS();
+
+ /* Attempt to read a tuple. */
+ reader = gm_state->reader[nreader];
+ tup = TupleQueueReaderNext(reader, force ? false : true, done);
+
+ return tup;
+}
+
+/*
+ * We have one slot for each item in the heap array. We use SlotNumber
+ * to store slot indexes. This doesn't actually provide any formal
+ * type-safety, but it makes the code more self-documenting.
+ */
+typedef int32 SlotNumber;
+
+/*
+ * Compare the tuples in the two given slots.
+ */
+static int32
+heap_compare_slots(Datum a, Datum b, void *arg)
+{
+ GatherMergeState *node = (GatherMergeState *) arg;
+ SlotNumber slot1 = DatumGetInt32(a);
+ SlotNumber slot2 = DatumGetInt32(b);
+
+ TupleTableSlot *s1 = node->gm_slots[slot1];
+ TupleTableSlot *s2 = node->gm_slots[slot2];
+ int nkey;
+
+ Assert(!TupIsNull(s1));
+ Assert(!TupIsNull(s2));
+
+ for (nkey = 0; nkey < node->gm_nkeys; nkey++)
+ {
+ SortSupport sortKey = node->gm_sortkeys + nkey;
+ AttrNumber attno = sortKey->ssup_attno;
+ Datum datum1,
+ datum2;
+ bool isNull1,
+ isNull2;
+ int compare;
+
+ datum1 = slot_getattr(s1, attno, &isNull1);
+ datum2 = slot_getattr(s2, attno, &isNull2);
+
+ compare = ApplySortComparator(datum1, isNull1,
+ datum2, isNull2,
+ sortKey);
+ if (compare != 0)
+ return -compare;
+ }
+ return 0;
+}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 71714bc..8b92c1a 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -341,6 +341,31 @@ _copyGather(const Gather *from)
return newnode;
}
+/*
+ * _copyGatherMerge
+ */
+static GatherMerge *
+_copyGatherMerge(const GatherMerge *from)
+{
+ GatherMerge *newnode = makeNode(GatherMerge);
+
+ /*
+ * copy node superclass fields
+ */
+ CopyPlanFields((const Plan *) from, (Plan *) newnode);
+
+ /*
+ * copy remainder of node
+ */
+ COPY_SCALAR_FIELD(num_workers);
+ COPY_SCALAR_FIELD(numCols);
+ COPY_POINTER_FIELD(sortColIdx, from->numCols * sizeof(AttrNumber));
+ COPY_POINTER_FIELD(sortOperators, from->numCols * sizeof(Oid));
+ COPY_POINTER_FIELD(collations, from->numCols * sizeof(Oid));
+ COPY_POINTER_FIELD(nullsFirst, from->numCols * sizeof(bool));
+
+ return newnode;
+}
/*
* CopyScanFields
@@ -4343,6 +4368,9 @@ copyObject(const void *from)
case T_Gather:
retval = _copyGather(from);
break;
+ case T_GatherMerge:
+ retval = _copyGatherMerge(from);
+ break;
case T_SeqScan:
retval = _copySeqScan(from);
break;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index ae86954..5dea0f7 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -447,6 +447,35 @@ _outGather(StringInfo str, const Gather *node)
}
static void
+_outGatherMerge(StringInfo str, const GatherMerge *node)
+{
+ int i;
+
+ WRITE_NODE_TYPE("GATHERMERGE");
+
+ _outPlanInfo(str, (const Plan *) node);
+
+ WRITE_INT_FIELD(num_workers);
+ WRITE_INT_FIELD(numCols);
+
+ appendStringInfoString(str, " :sortColIdx");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %d", node->sortColIdx[i]);
+
+ appendStringInfoString(str, " :sortOperators");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %u", node->sortOperators[i]);
+
+ appendStringInfoString(str, " :collations");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %u", node->collations[i]);
+
+ appendStringInfoString(str, " :nullsFirst");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %s", booltostr(node->nullsFirst[i]));
+}
+
+static void
_outScan(StringInfo str, const Scan *node)
{
WRITE_NODE_TYPE("SCAN");
@@ -1964,6 +1993,18 @@ _outLimitPath(StringInfo str, const LimitPath *node)
}
static void
+_outGatherMergePath(StringInfo str, const GatherMergePath *node)
+{
+ WRITE_NODE_TYPE("GATHERMERGEPATH");
+
+ _outPathInfo(str, (const Path *) node);
+
+ WRITE_NODE_FIELD(subpath);
+ WRITE_INT_FIELD(num_workers);
+ WRITE_BOOL_FIELD(single_copy);
+}
+
+static void
_outNestPath(StringInfo str, const NestPath *node)
{
WRITE_NODE_TYPE("NESTPATH");
@@ -3322,6 +3363,9 @@ outNode(StringInfo str, const void *obj)
case T_Gather:
_outGather(str, obj);
break;
+ case T_GatherMerge:
+ _outGatherMerge(str, obj);
+ break;
case T_Scan:
_outScan(str, obj);
break;
@@ -3649,6 +3693,9 @@ outNode(StringInfo str, const void *obj)
case T_LimitPath:
_outLimitPath(str, obj);
break;
+ case T_GatherMergePath:
+ _outGatherMergePath(str, obj);
+ break;
case T_NestPath:
_outNestPath(str, obj);
break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 917e6c8..77a452e 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -2075,6 +2075,26 @@ _readGather(void)
}
/*
+ * _readGatherMerge
+ */
+static GatherMerge *
+_readGatherMerge(void)
+{
+ READ_LOCALS(GatherMerge);
+
+ ReadCommonPlan(&local_node->plan);
+
+ READ_INT_FIELD(num_workers);
+ READ_INT_FIELD(numCols);
+ READ_ATTRNUMBER_ARRAY(sortColIdx, local_node->numCols);
+ READ_OID_ARRAY(sortOperators, local_node->numCols);
+ READ_OID_ARRAY(collations, local_node->numCols);
+ READ_BOOL_ARRAY(nullsFirst, local_node->numCols);
+
+ READ_DONE();
+}
+
+/*
* _readHash
*/
static Hash *
@@ -2477,6 +2497,8 @@ parseNodeString(void)
return_value = _readUnique();
else if (MATCH("GATHER", 6))
return_value = _readGather();
+ else if (MATCH("GATHERMERGE", 11))
+ return_value = _readGatherMerge();
else if (MATCH("HASH", 4))
return_value = _readHash();
else if (MATCH("SETOP", 5))
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index 2a49639..5dbb83e 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -126,6 +126,7 @@ bool enable_nestloop = true;
bool enable_material = true;
bool enable_mergejoin = true;
bool enable_hashjoin = true;
+bool enable_gathermerge = true;
typedef struct
{
@@ -391,6 +392,70 @@ cost_gather(GatherPath *path, PlannerInfo *root,
}
/*
+ * cost_gather_merge
+ *	  Determines and returns the cost of a gather merge path.
+ *
+ * GatherMerge merges several pre-sorted input streams, using a heap that at
+ * any given instant holds the next tuple from each stream. If there are N
+ * streams, we need about N*log2(N) tuple comparisons to construct the heap at
+ * startup, and then for each output tuple, about log2(N) comparisons to delete
+ * the top heap entry and another log2(N) comparisons to insert its successor
+ * from the same stream.
+ *
+ * The heap is never spilled to disk, since we assume N is not very large. So
+ * this is much simpler than cost_sort.
+ */
+void
+cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+ RelOptInfo *rel, ParamPathInfo *param_info,
+ Cost input_startup_cost, Cost input_total_cost)
+{
+ Cost startup_cost = 0;
+ Cost run_cost = 0;
+ Cost comparison_cost;
+ double N;
+ double logN;
+
+ /* Mark the path with the correct row estimate */
+ if (param_info)
+ path->path.rows = param_info->ppi_rows;
+ else
+ path->path.rows = path->subpath->rows;
+
+ if (!enable_gathermerge)
+ startup_cost += disable_cost;
+
+ /*
+ * Avoid log(0)...
+ */
+ N = (path->num_workers < 2) ? 2.0 : (double) path->num_workers;
+ logN = LOG2(N);
+
+ /* Assumed cost per tuple comparison */
+ comparison_cost = 2.0 * cpu_operator_cost;
+
+ /* Heap creation cost */
+ startup_cost += comparison_cost * N * logN;
+
+ /* Per-tuple heap maintenance cost */
+ run_cost += path->path.rows * comparison_cost * 2.0 * logN;
+
+ /* small cost for heap management, like cost_merge_append */
+ run_cost += cpu_operator_cost * path->path.rows;
+
+ /*
+	 * Parallel setup and communication cost. Gather Merge requires tuples
+	 * to be read in wait mode from each worker, so we account for some
+	 * extra cost here.
+ */
+ startup_cost += parallel_setup_cost;
+ run_cost += parallel_tuple_cost * path->path.rows;
+
+ path->path.startup_cost = startup_cost + input_startup_cost;
+ path->path.total_cost = (startup_cost + run_cost + input_total_cost);
+}
+
+/*
* cost_index
* Determines and returns the cost of scanning a relation using an index.
*
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 47158f6..96bed2e 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -270,6 +270,11 @@ static ModifyTable *make_modifytable(PlannerInfo *root,
List *resultRelations, List *subplans,
List *withCheckOptionLists, List *returningLists,
List *rowMarks, OnConflictExpr *onconflict, int epqParam);
+static GatherMerge *create_gather_merge_plan(PlannerInfo *root,
+ GatherMergePath *best_path);
+static GatherMerge *make_gather_merge(List *qptlist, List *qpqual,
+ int nworkers, bool single_copy,
+ Plan *subplan);
/*
@@ -463,6 +468,10 @@ create_plan_recurse(PlannerInfo *root, Path *best_path, int flags)
(LimitPath *) best_path,
flags);
break;
+ case T_GatherMerge:
+ plan = (Plan *) create_gather_merge_plan(root,
+ (GatherMergePath *) best_path);
+ break;
default:
elog(ERROR, "unrecognized node type: %d",
(int) best_path->pathtype);
@@ -2246,6 +2255,90 @@ create_limit_plan(PlannerInfo *root, LimitPath *best_path, int flags)
return plan;
}
+/*
+ * create_gather_merge_plan
+ *
+ *	  Create a Gather Merge plan for 'best_path' and (recursively)
+ * plans for its subpaths.
+ */
+static GatherMerge *
+create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
+{
+ GatherMerge *gm_plan;
+ Plan *subplan;
+ List *pathkeys = best_path->path.pathkeys;
+ int numsortkeys;
+ AttrNumber *sortColIdx;
+ Oid *sortOperators;
+ Oid *collations;
+ bool *nullsFirst;
+
+ subplan = create_plan_recurse(root, best_path->subpath, CP_EXACT_TLIST);
+
+ gm_plan = make_gather_merge(subplan->targetlist,
+ NIL,
+ best_path->num_workers,
+ best_path->single_copy,
+ subplan);
+
+ copy_generic_path_info(&gm_plan->plan, &best_path->path);
+
+ if (pathkeys)
+ {
+ /* Compute sort column info, and adjust GatherMerge tlist as needed */
+ (void) prepare_sort_from_pathkeys(&gm_plan->plan, pathkeys,
+ best_path->path.parent->relids,
+ NULL,
+ true,
+ &gm_plan->numCols,
+ &gm_plan->sortColIdx,
+ &gm_plan->sortOperators,
+ &gm_plan->collations,
+ &gm_plan->nullsFirst);
+
+
+ /* Compute sort column info, and adjust subplan's tlist as needed */
+ subplan = prepare_sort_from_pathkeys(subplan, pathkeys,
+ best_path->subpath->parent->relids,
+ gm_plan->sortColIdx,
+ false,
+ &numsortkeys,
+ &sortColIdx,
+ &sortOperators,
+ &collations,
+ &nullsFirst);
+
+ /*
+ * Check that we got the same sort key information. We just Assert
+ * that the sortops match, since those depend only on the pathkeys;
+ * but it seems like a good idea to check the sort column numbers
+ * explicitly, to ensure the tlists really do match up.
+ */
+ Assert(numsortkeys == gm_plan->numCols);
+ if (memcmp(sortColIdx, gm_plan->sortColIdx,
+ numsortkeys * sizeof(AttrNumber)) != 0)
+ elog(ERROR, "GatherMerge child's targetlist doesn't match GatherMerge");
+ Assert(memcmp(sortOperators, gm_plan->sortOperators,
+ numsortkeys * sizeof(Oid)) == 0);
+ Assert(memcmp(collations, gm_plan->collations,
+ numsortkeys * sizeof(Oid)) == 0);
+ Assert(memcmp(nullsFirst, gm_plan->nullsFirst,
+ numsortkeys * sizeof(bool)) == 0);
+
+ /* Now, insert a Sort node if subplan isn't sufficiently ordered */
+ if (!pathkeys_contained_in(pathkeys, best_path->subpath->pathkeys))
+ subplan = (Plan *) make_sort(subplan, numsortkeys,
+ sortColIdx, sortOperators,
+ collations, nullsFirst);
+
+ gm_plan->plan.lefttree = subplan;
+ }
+
+ /* use parallel mode for parallel plans. */
+ root->glob->parallelModeNeeded = true;
+
+ return gm_plan;
+}
/*****************************************************************************
*
@@ -5902,6 +5995,26 @@ make_gather(List *qptlist,
return node;
}
+static GatherMerge *
+make_gather_merge(List *qptlist,
+ List *qpqual,
+ int nworkers,
+ bool single_copy,
+ Plan *subplan)
+{
+ GatherMerge *node = makeNode(GatherMerge);
+ Plan *plan = &node->plan;
+
+ /* cost should be inserted by caller */
+ plan->targetlist = qptlist;
+ plan->qual = qpqual;
+ plan->lefttree = subplan;
+ plan->righttree = NULL;
+ node->num_workers = nworkers;
+
+ return node;
+}
+
/*
* distinctList is a list of SortGroupClauses, identifying the targetlist
* items that should be considered by the SetOp filter. The input path must
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 644b8b6..0325c53 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -3725,14 +3725,59 @@ create_grouping_paths(PlannerInfo *root,
/*
* Now generate a complete GroupAgg Path atop of the cheapest partial
- * path. We need only bother with the cheapest path here, as the
- * output of Gather is never sorted.
+ * path. We generate a Gather path based on the cheapest partial path,
+ * and a GatherMerge path for each partial path that is properly sorted.
*/
if (grouped_rel->partial_pathlist)
{
Path *path = (Path *) linitial(grouped_rel->partial_pathlist);
double total_groups = path->rows * path->parallel_workers;
+ /*
+		 * GatherMerge output is always sorted, so if there is a GROUP BY
+		 * clause, try to generate a GatherMerge path for each suitably
+		 * sorted partial path.
+ */
+ if (parse->groupClause)
+ {
+ foreach(lc, grouped_rel->partial_pathlist)
+ {
+ Path *gmpath = (Path *) lfirst(lc);
+
+ if (!pathkeys_contained_in(root->group_pathkeys, gmpath->pathkeys))
+ continue;
+
+ /* create gather merge path */
+ gmpath = (Path *) create_gather_merge_path(root,
+ grouped_rel,
+ gmpath,
+ NULL,
+ root->group_pathkeys,
+ NULL);
+
+ if (parse->hasAggs)
+ add_path(grouped_rel, (Path *)
+ create_agg_path(root,
+ grouped_rel,
+ gmpath,
+ target,
+ parse->groupClause ? AGG_SORTED : AGG_PLAIN,
+ AGGSPLIT_FINAL_DESERIAL,
+ parse->groupClause,
+ (List *) parse->havingQual,
+ &agg_final_costs,
+ dNumGroups));
+ else
+ add_path(grouped_rel, (Path *)
+ create_group_path(root,
+ grouped_rel,
+ gmpath,
+ target,
+ parse->groupClause,
+ (List *) parse->havingQual,
+ dNumGroups));
+ }
+ }
+
path = (Path *) create_gather_path(root,
grouped_rel,
path,
@@ -3870,6 +3915,12 @@ create_grouping_paths(PlannerInfo *root,
/* Now choose the best path(s) */
set_cheapest(grouped_rel);
+ /*
+	 * The partial pathlist generated for the grouped relation is of no
+	 * further use, so just reset it to NIL.
+ */
+ grouped_rel->partial_pathlist = NIL;
+
return grouped_rel;
}
@@ -4166,6 +4217,36 @@ create_distinct_paths(PlannerInfo *root,
}
}
+ /*
+ * Generate GatherMerge path for each partial path.
+ */
+ foreach(lc, input_rel->partial_pathlist)
+ {
+ Path *path = (Path *) lfirst(lc);
+
+ if (!pathkeys_contained_in(needed_pathkeys, path->pathkeys))
+ {
+ path = (Path *) create_sort_path(root, distinct_rel,
+ path,
+ needed_pathkeys,
+ -1.0);
+ }
+
+ /* create gather merge path */
+ path = (Path *) create_gather_merge_path(root,
+ distinct_rel,
+ path,
+ NULL,
+ needed_pathkeys,
+ NULL);
+ add_path(distinct_rel, (Path *)
+ create_upper_unique_path(root,
+ distinct_rel,
+ path,
+ list_length(root->distinct_pathkeys),
+ numDistinctRows));
+ }
+
/* For explicit-sort case, always use the more rigorous clause */
if (list_length(root->distinct_pathkeys) <
list_length(root->sort_pathkeys))
@@ -4310,6 +4391,39 @@ create_ordered_paths(PlannerInfo *root,
ordered_rel->useridiscurrent = input_rel->useridiscurrent;
ordered_rel->fdwroutine = input_rel->fdwroutine;
+ foreach(lc, input_rel->partial_pathlist)
+ {
+ Path *path = (Path *) lfirst(lc);
+ bool is_sorted;
+
+ is_sorted = pathkeys_contained_in(root->sort_pathkeys,
+ path->pathkeys);
+ if (!is_sorted)
+ {
+ /* An explicit sort here can take advantage of LIMIT */
+ path = (Path *) create_sort_path(root,
+ ordered_rel,
+ path,
+ root->sort_pathkeys,
+ limit_tuples);
+ }
+
+ /* create gather merge path */
+ path = (Path *) create_gather_merge_path(root,
+ ordered_rel,
+ path,
+ target,
+ root->sort_pathkeys,
+ NULL);
+
+ /* Add projection step if needed */
+ if (path->pathtarget != target)
+ path = apply_projection_to_path(root, ordered_rel,
+ path, target);
+
+ add_path(ordered_rel, path);
+ }
+
foreach(lc, input_rel->pathlist)
{
Path *path = (Path *) lfirst(lc);
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index d10a983..d14db7d 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -605,6 +605,7 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
break;
case T_Gather:
+ case T_GatherMerge:
set_upper_references(root, plan, rtoffset);
break;
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 263ba45..760f519 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2682,6 +2682,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
case T_Sort:
case T_Unique:
case T_Gather:
+ case T_GatherMerge:
case T_SetOp:
case T_Group:
break;
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index abb7507..822fca2 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1630,6 +1630,66 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
}
/*
+ * create_gather_merge_path
+ *
+ * Creates a path corresponding to a gather merge scan, returning
+ * the pathnode.
+ */
+GatherMergePath *
+create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
+ PathTarget *target, List *pathkeys,
+ Relids required_outer)
+{
+ GatherMergePath *pathnode = makeNode(GatherMergePath);
+ Cost input_startup_cost = 0;
+ Cost input_total_cost = 0;
+
+ Assert(subpath->parallel_safe);
+ Assert(pathkeys);
+
+ pathnode->path.pathtype = T_GatherMerge;
+ pathnode->path.parent = rel;
+ pathnode->path.param_info = get_baserel_parampathinfo(root, rel,
+ required_outer);
+ pathnode->path.parallel_aware = false;
+
+ pathnode->subpath = subpath;
+ pathnode->num_workers = subpath->parallel_workers;
+ pathnode->path.pathkeys = pathkeys;
+ pathnode->path.pathtarget = target ? target : rel->reltarget;
+ pathnode->path.rows += subpath->rows;
+
+ if (pathkeys_contained_in(pathkeys, subpath->pathkeys))
+ {
+ /* Subpath is adequately ordered, we won't need to sort it */
+ input_startup_cost += subpath->startup_cost;
+ input_total_cost += subpath->total_cost;
+ }
+ else
+ {
+ /* We'll need to insert a Sort node, so include cost for that */
+ Path sort_path; /* dummy for result of cost_sort */
+
+ cost_sort(&sort_path,
+ root,
+ pathkeys,
+ subpath->total_cost,
+ subpath->rows,
+ subpath->pathtarget->width,
+ 0.0,
+ work_mem,
+ -1);
+ input_startup_cost += sort_path.startup_cost;
+ input_total_cost += sort_path.total_cost;
+ }
+
+ cost_gather_merge(pathnode, root, rel, pathnode->path.param_info,
+ input_startup_cost, input_total_cost);
+
+ return pathnode;
+}
+
+/*
* translate_sub_tlist - get subquery column numbers represented by tlist
*
* The given targetlist usually contains only Vars referencing the given relid.
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 65660c1..f605284 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -894,6 +894,15 @@ static struct config_bool ConfigureNamesBool[] =
true,
NULL, NULL, NULL
},
+ {
+ {"enable_gathermerge", PGC_USERSET, QUERY_TUNING_METHOD,
+ gettext_noop("Enables the planner's use of gather merge plans."),
+ NULL
+ },
+ &enable_gathermerge,
+ true,
+ NULL, NULL, NULL
+ },
{
{"geqo", PGC_USERSET, QUERY_TUNING_GEQO,
diff --git a/src/include/executor/nodeGatherMerge.h b/src/include/executor/nodeGatherMerge.h
new file mode 100644
index 0000000..58dcebf
--- /dev/null
+++ b/src/include/executor/nodeGatherMerge.h
@@ -0,0 +1,27 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.h
+ * prototypes for nodeGatherMerge.c
+ *
+ *
+ * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/executor/nodeGatherMerge.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef NODEGATHERMERGE_H
+#define NODEGATHERMERGE_H
+
+#include "nodes/execnodes.h"
+
+extern GatherMergeState *ExecInitGatherMerge(GatherMerge *node,
+					EState *estate,
+					int eflags);
+extern TupleTableSlot *ExecGatherMerge(GatherMergeState *node);
+extern void ExecEndGatherMerge(GatherMergeState *node);
+extern void ExecReScanGatherMerge(GatherMergeState *node);
+extern void ExecShutdownGatherMerge(GatherMergeState *node);
+
+#endif /* NODEGATHERMERGE_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index f6f73f3..3feb3f1 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -1969,6 +1969,33 @@ typedef struct GatherState
} GatherState;
/* ----------------
+ * GatherMergeState information
+ *
+ * Gather merge nodes launch 1 or more parallel workers, run a
+ * subplan in those workers, collect the results and perform sort.
+ * ----------------
+ */
+struct GMReaderTuple;
+
+typedef struct GatherMergeState
+{
+ PlanState ps; /* its first field is NodeTag */
+ bool initialized;
+ struct ParallelExecutorInfo *pei;
+ int nreaders;
+ int nworkers_launched;
+ struct TupleQueueReader **reader;
+ TupleTableSlot *funnel_slot;
+ TupleTableSlot **gm_slots;
+ struct binaryheap *gm_heap; /* binary heap of slot indices */
+	bool		gm_initialized; /* gather merge initialized? */
+ bool need_to_scan_locally;
+ int gm_nkeys;
+	SortSupport gm_sortkeys;	/* array of length gm_nkeys */
+	struct GMReaderTuple *gm_tuple; /* array of length nreaders */
+} GatherMergeState;
+
+/* ----------------
* HashState information
* ----------------
*/
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 88297bb..edfb917 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -75,6 +75,7 @@ typedef enum NodeTag
T_WindowAgg,
T_Unique,
T_Gather,
+ T_GatherMerge,
T_Hash,
T_SetOp,
T_LockRows,
@@ -123,6 +124,7 @@ typedef enum NodeTag
T_WindowAggState,
T_UniqueState,
T_GatherState,
+ T_GatherMergeState,
T_HashState,
T_SetOpState,
T_LockRowsState,
@@ -244,6 +246,7 @@ typedef enum NodeTag
T_MaterialPath,
T_UniquePath,
T_GatherPath,
+ T_GatherMergePath,
T_ProjectionPath,
T_SortPath,
T_GroupPath,
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index e2fbc7d..ec319bf 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -765,6 +765,22 @@ typedef struct Gather
bool invisible; /* suppress EXPLAIN display (for testing)? */
} Gather;
+/* ------------
+ * gather merge node
+ * ------------
+ */
+typedef struct GatherMerge
+{
+ Plan plan;
+ int num_workers;
+ /* remaining fields are just like the sort-key info in struct Sort */
+ int numCols; /* number of sort-key columns */
+ AttrNumber *sortColIdx; /* their indexes in the target list */
+ Oid *sortOperators; /* OIDs of operators to sort them by */
+ Oid *collations; /* OIDs of collations */
+ bool *nullsFirst; /* NULLS FIRST/LAST directions */
+} GatherMerge;
+
/* ----------------
* hash build node
*
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index 3a1255a..dfaca79 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -1194,6 +1194,19 @@ typedef struct GatherPath
} GatherPath;
/*
+ * GatherMergePath runs several copies of a plan in parallel and merges
+ * their sorted results. FIXME: comments
+ */
+typedef struct GatherMergePath
+{
+ Path path;
+ Path *subpath; /* path for each worker */
+ int num_workers; /* number of workers sought to help */
+ bool single_copy; /* path must not be executed >1x */
+} GatherMergePath;
+
+
+/*
* All join-type paths share these fields.
*/
diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h
index 2a4df2f..cd48cc4 100644
--- a/src/include/optimizer/cost.h
+++ b/src/include/optimizer/cost.h
@@ -66,6 +66,7 @@ extern bool enable_nestloop;
extern bool enable_material;
extern bool enable_mergejoin;
extern bool enable_hashjoin;
+extern bool enable_gathermerge;
extern int constraint_exclusion;
extern double clamp_row_est(double nrows);
@@ -198,5 +199,8 @@ extern Selectivity clause_selectivity(PlannerInfo *root,
int varRelid,
JoinType jointype,
SpecialJoinInfo *sjinfo);
+extern void cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+ RelOptInfo *rel, ParamPathInfo *param_info,
+ Cost input_startup_cost, Cost input_total_cost);
#endif /* COST_H */
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index 71d9154..3dbe9fc 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -267,5 +267,10 @@ extern ParamPathInfo *get_joinrel_parampathinfo(PlannerInfo *root,
List **restrict_clauses);
extern ParamPathInfo *get_appendrel_parampathinfo(RelOptInfo *appendrel,
Relids required_outer);
+extern GatherMergePath *create_gather_merge_path(PlannerInfo *root,
+ RelOptInfo *rel, Path *subpath,
+ PathTarget *target,
+ List *pathkeys,
+ Relids required_outer);
#endif /* PATHNODE_H */
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
index f06cfa4..5c547e2 100644
--- a/src/test/regress/expected/rangefuncs.out
+++ b/src/test/regress/expected/rangefuncs.out
@@ -2,6 +2,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
name | setting
----------------------+---------
enable_bitmapscan | on
+ enable_gathermerge | on
enable_hashagg | on
enable_hashjoin | on
enable_indexonlyscan | on
@@ -12,7 +13,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
enable_seqscan | on
enable_sort | on
enable_tidscan | on
-(11 rows)
+(12 rows)
CREATE TABLE foo2(fooid int, f2 int);
INSERT INTO foo2 VALUES(1, 11);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 6c6d519..a6c4a5f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -770,6 +770,8 @@ GV
Gather
GatherPath
GatherState
+GatherMerge
+GatherMergeState
Gene
GenericCosts
GenericExprState
On Tue, Oct 4, 2016 at 11:05 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:
Query 4: With GM 7901.480 -> Without GM 9064.776
Query 5: With GM 53452.126 -> Without GM 55059.511
Query 9: With GM 52613.132 -> Without GM 98206.793
Query 15: With GM 68051.058 -> Without GM 68918.378
Query 17: With GM 129236.075 -> Without GM 160451.094
Query 20: With GM 259144.232 -> Without GM 306256.322
Query 21: With GM 153483.497 -> Without GM 168169.916

Here from the results we can see that queries 9, 17 and 20 are the ones
which show a good performance benefit with Gather Merge.
Were all other TPC-H queries unaffected? IOW, did they have the same
plan as before with your patch applied? Did you see any regressions?
I assume that this patch has each worker use work_mem for its own
sort, as with hash joins today. One concern with that model when
testing is that you could end up with a bunch of internal sorts for
cases with a GM node, where you get one big external sort for cases
without one. Did you take that into consideration?
--
Peter Geoghegan
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On Tue, Oct 18, 2016 at 5:29 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:
On Mon, Oct 17, 2016 at 2:26 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

There is a lot of common code between ExecGatherMerge and ExecGather.
Do you think it makes sense to have a common function to avoid the
duplication? I see there are small discrepancies in the two, e.g. I don't
see the use of the single_copy flag, as it is present in the Gather node.

Yes, even I thought to centralize some things of ExecGather and
ExecGatherMerge, but it's really not something that is fixed yet, and I
thought it might change, particularly for Gather Merge. And as explained
by Robert, single_copy doesn't make sense for Gather Merge. I will still
look into this to see if something can be centralized.

Okay, I haven't thought about it, but do let me know if you couldn't
find any way to merge the code.
3.
+gather_merge_readnext(GatherMergeState *gm_state, int reader, bool force)
{
..
+ tup = gm_readnext_tuple(gm_state, reader, force, NULL);
+
+ /*
+ * try to read more tuple into nowait mode and store it into the tuple
+ * array.
+ */
+ if (HeapTupleIsValid(tup))
+ fill_tuple_array(gm_state, reader);

How is the above read tuple stored in the array? In any case, the above
interface seems slightly awkward to me. Basically, I think what you
are trying to do here is: after reading the first tuple in wait mode, you
are trying to fill the array by reading more tuples. So, can't we
push reading of this first tuple into that function and name it
form_tuple_array()?

Yes, you are right.
You have not answered my first question. I will try to ask again: how
is the tuple read by the code below stored in the array?

+ tup = gm_readnext_tuple(gm_state, reader, force, NULL);

First it tries to read a tuple in wait mode, and once it finds a tuple it
tries to fill the tuple array (which basically tries to read tuples in
nowait mode). The reason I keep these separate is that when initializing
the gather merge, if we are unable to read a tuple from every worker,
then while re-reading we again try to fill the tuple array for any worker
that has already produced at least one tuple (see gather_merge_init()
for more details).

Whenever any worker has produced one tuple, you already try to fill the
array in gather_merge_readnext(), so does the above code help much?

Also, I thought filling the tuple array (which basically reads tuples in
nowait mode) and reading a tuple (in wait mode) are two separate tasks,
and keeping them in separate functions makes the code clearer.

To me that looks slightly confusing.

If you have any suggestion for the function name (fill_tuple_array),
then I am open to changing it.
form_tuple_array (form_tuple is used in many places in the code, so it
should look okay). I think you might want to consider forming the array
even for the leader; although it might not be as beneficial as for the
workers, OTOH, the code will get simpler if we do it that way.
Today, I observed another issue in code:
+gather_merge_init(GatherMergeState *gm_state)
{
..
+reread:
+ for (i = 0; i < nreaders + 1; i++)
+ {
+ if (TupIsNull(gm_state->gm_slots[i]) ||
+ gm_state->gm_slots[i]->tts_isempty)
+ {
+ if (gather_merge_readnext(gm_state, i, initialize ? false : true))
+ {
+ binaryheap_add_unordered(gm_state->gm_heap,
+ Int32GetDatum(i));
+ }
+ }
+ else
+ fill_tuple_array(gm_state, i);
+ }
+ initialize = false;
+
+ for (i = 0; i < nreaders; i++)
+ if (TupIsNull(gm_state->gm_slots[i]) || gm_state->gm_slots[i]->tts_isempty)
+ goto reread;
..
}
This code can cause an infinite loop. Consider a case where one of the
workers doesn't get any tuple because, by the time it starts, all the
tuples have been consumed by the other workers. The above code will keep
looping to fetch a tuple from that worker, whereas that worker can't
return any tuple. I think you can fix it by checking whether the
particular queue has been exhausted.
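Something like this untested sketch is what I have in mind: track a
per-reader "done" flag, set once the worker's queue reports that it is
detached, and skip exhausted readers in the re-read loop (the field
names here are only for illustration):

/* untested sketch: skip readers whose queues are already exhausted */
for (i = 0; i < nreaders; i++)
    if (!gm_state->gm_tuple[i].done &&
        (TupIsNull(gm_state->gm_slots[i]) ||
         gm_state->gm_slots[i]->tts_isempty))
        goto reread;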
Open Issue:

- Commit af33039317ddc4a0e38a02e2255c2bf453115fd2 fixed the leak in
tqueue.c by calling gather_readnext() in a per-tuple context. For gather
merge that is not possible, as we store the tuple into the tuple array
and we want the tuple to be freed only once it has passed through the
merge sort algorithm. One idea is that we could also call
gm_readnext_tuple() under the per-tuple context (which will fix the leak
in tqueue.c) and then store a copy of the tuple into the tuple array.

Won't an extra copy per tuple impact performance? Was the fix in the
mentioned commit for record or composite types? If so, does GatherMerge
support such types, and if it supports them, does it provide any benefit
over Gather?

I don't think it was specifically for record or composite types - but I
might be wrong. As per my understanding, the commit fixed a leak in
tqueue.c.
Hmm, in tqueue.c, I think the memory leak was in the remapping logic;
refer to mail [1] of Tom ("Just to add insult to injury, the backend's
memory consumption bloats to something over 5.5G during that last
query"). The fix was to establish the convention of calling
TupleQueueReaderNext() with a shorter memory context, so that tqueue.c
doesn't leak the memory.

I have an idea to fix this by calling TupleQueueReaderNext() with the
per-tuple context, then copying the tuple and storing it into the tuple
array; later, the next run of ExecStoreTuple() will free the earlier
tuple. I will work on that.

Okay, if you think that is viable, then you can do it, but do check the
performance impact of the same, because an extra copy per fetched tuple
can hurt performance.
[1]: /messages/by-id/32763.1469821037@sss.pgh.pa.us
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On Thu, Oct 20, 2016 at 12:22 AM, Peter Geoghegan <pg@heroku.com> wrote:
On Tue, Oct 4, 2016 at 11:05 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

Query 4: With GM 7901.480 -> Without GM 9064.776
Query 5: With GM 53452.126 -> Without GM 55059.511
Query 9: With GM 52613.132 -> Without GM 98206.793
Query 15: With GM 68051.058 -> Without GM 68918.378
Query 17: With GM 129236.075 -> Without GM 160451.094
Query 20: With GM 259144.232 -> Without GM 306256.322
Query 21: With GM 153483.497 -> Without GM 168169.916

Here from the results we can see that queries 9, 17 and 20 are the ones
which show a good performance benefit with Gather Merge.
Were all other TPC-H queries unaffected? IOW, did they have the same
plan as before with your patch applied? Did you see any regressions?
Yes, all other TPC-H queries were unaffected by the patch. At the
initial stage of patch development I noticed some regressions, but then
realized that it was because I was not allowing the leader to
participate in the GM. Later on I fixed that, and after that I didn't
notice any regressions.
I assume that this patch has each worker use work_mem for its own
sort, as with hash joins today. One concern with that model when
testing is that you could end up with a bunch of internal sorts for
cases with a GM node, where you get one big external sort for cases
without one. Did you take that into consideration?
Yes, but isn't that good? Please correct me if I am missing anything.
--
Peter Geoghegan
--
Rushabh Lathia
www.EnterpriseDB.com
On Thu, Oct 20, 2016 at 1:12 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:
On Tue, Oct 18, 2016 at 5:29 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

On Mon, Oct 17, 2016 at 2:26 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

There is a lot of common code between ExecGatherMerge and ExecGather.
Do you think it makes sense to have a common function to avoid the
duplication? I see there are small discrepancies in the two, e.g. I don't
see the use of the single_copy flag, as it is present in the Gather node.

Yes, even I thought to centralize some things of ExecGather and
ExecGatherMerge, but it's really not something that is fixed yet, and I
thought it might change, particularly for Gather Merge. And as explained
by Robert, single_copy doesn't make sense for Gather Merge. I will still
look into this to see if something can be centralized.

Okay, I haven't thought about it, but do let me know if you couldn't
find any way to merge the code.

Sure, I will look into this.
3.
+gather_merge_readnext(GatherMergeState *gm_state, int reader, bool force)
{
..
+ tup = gm_readnext_tuple(gm_state, reader, force, NULL);
+
+ /*
+ * try to read more tuple into nowait mode and store it into the tuple
+ * array.
+ */
+ if (HeapTupleIsValid(tup))
+ fill_tuple_array(gm_state, reader);

How is the above read tuple stored in the array? In any case, the above
interface seems slightly awkward to me. Basically, I think what you
are trying to do here is: after reading the first tuple in wait mode, you
are trying to fill the array by reading more tuples. So, can't we
push reading of this first tuple into that function and name it
form_tuple_array()?

Yes, you are right.

You have not answered my first question. I will try to ask again: how
is the tuple read by the code below stored in the array?
The tuple gets stored directly into the related TupleTableSlot: at the
end of gather_merge_readnext() it builds the TupleTableSlot for the
given tuple, so the tuple read by the above code is stored directly into
the TupleTableSlot.
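For reference, the relevant code at the end of gather_merge_readnext()
is:

+ /* Build the TupleTableSlot for the given tuple */
+ ExecStoreTuple(tup, /* tuple to store */
+ gm_state->gm_slots[reader], /* slot in which to store the tuple */
+ InvalidBuffer, /* buffer associated with this tuple */
+ true); /* pfree this pointer if not from heap */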
+ tup = gm_readnext_tuple(gm_state, reader, force, NULL);

First it tries to read a tuple in wait mode, and once it finds a tuple it
tries to fill the tuple array (which basically tries to read tuples in
nowait mode). The reason I keep these separate is that when initializing
the gather merge, if we are unable to read a tuple from every worker,
then while re-reading we again try to fill the tuple array for any worker
that has already produced at least one tuple (see gather_merge_init()
for more details).

Whenever any worker has produced one tuple, you already try to fill the
array in gather_merge_readnext(), so does the above code help much?

Also, I thought filling the tuple array (which basically reads tuples in
nowait mode) and reading a tuple (in wait mode) are two separate tasks,
and keeping them in separate functions makes the code clearer.

To me that looks slightly confusing.

If you have any suggestion for the function name (fill_tuple_array),
then I am open to changing it.

form_tuple_array (form_tuple is used in many places in the code, so it
should look okay).

Ok, I will rename it in the next patch.

I think you might want to consider forming the array even for the
leader; although it might not be as beneficial as for the workers, OTOH,
the code will get simpler if we do it that way.

Yes, I did that earlier, and as you guessed it's not any more
beneficial; so, to avoid the extra memory allocation for the tuple
array, I am not forming the array for the leader.
Today, I observed another issue in the code:

+gather_merge_init(GatherMergeState *gm_state)
{
..
+reread:
+ for (i = 0; i < nreaders + 1; i++)
+ {
+ if (TupIsNull(gm_state->gm_slots[i]) ||
+ gm_state->gm_slots[i]->tts_isempty)
+ {
+ if (gather_merge_readnext(gm_state, i, initialize ? false : true))
+ {
+ binaryheap_add_unordered(gm_state->gm_heap,
+ Int32GetDatum(i));
+ }
+ }
+ else
+ fill_tuple_array(gm_state, i);
+ }
+ initialize = false;
+
+ for (i = 0; i < nreaders; i++)
+ if (TupIsNull(gm_state->gm_slots[i]) || gm_state->gm_slots[i]->tts_isempty)
+ goto reread;
..
}

This code can cause an infinite loop. Consider a case where one of the
workers doesn't get any tuple because, by the time it starts, all the
tuples have been consumed by the other workers. The above code will keep
looping to fetch a tuple from that worker, whereas that worker can't
return any tuple. I think you can fix it by checking whether the
particular queue has been exhausted.

Oh yes. I will work on the fix and soon submit the next set of patches.
Open Issue:

- Commit af33039317ddc4a0e38a02e2255c2bf453115fd2 fixed the leak in
tqueue.c by calling gather_readnext() in a per-tuple context. For gather
merge that is not possible, as we store the tuple into the tuple array
and we want the tuple to be freed only once it has passed through the
merge sort algorithm. One idea is that we could also call
gm_readnext_tuple() under the per-tuple context (which will fix the leak
in tqueue.c) and then store a copy of the tuple into the tuple array.

Won't an extra copy per tuple impact performance? Was the fix in the
mentioned commit for record or composite types? If so, does GatherMerge
support such types, and if it supports them, does it provide any benefit
over Gather?

I don't think it was specifically for record or composite types - but I
might be wrong. As per my understanding, the commit fixed a leak in
tqueue.c.

Hmm, in tqueue.c, I think the memory leak was in the remapping logic;
refer to mail [1] of Tom ("Just to add insult to injury, the backend's
memory consumption bloats to something over 5.5G during that last
query"). The fix was to establish the convention of calling
TupleQueueReaderNext() with a shorter memory context, so that tqueue.c
doesn't leak the memory.

I have an idea to fix this by calling TupleQueueReaderNext() with the
per-tuple context, then copying the tuple and storing it into the tuple
array; later, the next run of ExecStoreTuple() will free the earlier
tuple. I will work on that.

Okay, if you think that is viable, then you can do it, but do check the
performance impact of the same, because an extra copy per fetched tuple
can hurt performance.

Sure, I will check the performance impact for the same.
[1] - /messages/by-id/32763.1469821037%25
40sss.pgh.pa.us--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
Thanks,
Rushabh Lathia
www.EnterpriseDB.com
Please find attached the latest patch, which fixes the review points as
well as some additional clean-up:

- Got rid of funnel_slot, as it's not needed for Gather Merge.
- Renamed fill_tuple_array to form_tuple_array.
- Fixed a possible infinite loop in gather_merge_init (reported by Amit
Kapila).
- Fixed the tqueue.c memory leak by calling TupleQueueReaderNext() with
a per-tuple context.
- Code cleanup in ExecGatherMerge.
- Added a new function, gather_merge_clear_slots(), which clears out all
gather merge slots and also frees the tuple array at the end of
execution.

I did the performance testing again with the latest patch and I haven't
observed any regression. Some of the TPC-H queries show additional
benefit with the latest patch, but it's just under 5%.

Do let me know if I missed anything.
--
Rushabh Lathia
Attachments:
gather_merge_v3.patchapplication/x-download; name=gather_merge_v3.patchDownload
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 0a669d9..73cfe28 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -881,6 +881,9 @@ ExplainNode(PlanState *planstate, List *ancestors,
case T_Gather:
pname = sname = "Gather";
break;
+ case T_GatherMerge:
+ pname = sname = "Gather Merge";
+ break;
case T_IndexScan:
pname = sname = "Index Scan";
break;
@@ -1370,6 +1373,26 @@ ExplainNode(PlanState *planstate, List *ancestors,
ExplainPropertyBool("Single Copy", gather->single_copy, es);
}
break;
+ case T_GatherMerge:
+ {
+ GatherMerge *gm = (GatherMerge *) plan;
+
+ show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ if (plan->qual)
+ show_instrumentation_count("Rows Removed by Filter", 1,
+ planstate, es);
+ ExplainPropertyInteger("Workers Planned",
+ gm->num_workers, es);
+ if (es->analyze)
+ {
+ int nworkers;
+
+ nworkers = ((GatherMergeState *) planstate)->nworkers_launched;
+ ExplainPropertyInteger("Workers Launched",
+ nworkers, es);
+ }
+ }
+ break;
case T_FunctionScan:
if (es->verbose)
{
diff --git a/src/backend/executor/Makefile b/src/backend/executor/Makefile
index 51edd4c..7e2f4e2 100644
--- a/src/backend/executor/Makefile
+++ b/src/backend/executor/Makefile
@@ -19,7 +19,7 @@ OBJS = execAmi.o execCurrent.o execGrouping.o execIndexing.o execJunk.o \
nodeBitmapAnd.o nodeBitmapOr.o \
nodeBitmapHeapscan.o nodeBitmapIndexscan.o nodeCustom.o nodeGather.o \
nodeHash.o nodeHashjoin.o nodeIndexscan.o nodeIndexonlyscan.o \
- nodeLimit.o nodeLockRows.o \
+ nodeLimit.o nodeLockRows.o nodeGatherMerge.o \
nodeMaterial.o nodeMergeAppend.o nodeMergejoin.o nodeModifyTable.o \
nodeNestloop.o nodeFunctionscan.o nodeRecursiveunion.o nodeResult.o \
nodeSamplescan.o nodeSeqscan.o nodeSetOp.o nodeSort.o nodeUnique.o \
diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index 554244f..45b36af 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -101,6 +101,7 @@
#include "executor/nodeModifyTable.h"
#include "executor/nodeNestloop.h"
#include "executor/nodeGather.h"
+#include "executor/nodeGatherMerge.h"
#include "executor/nodeRecursiveunion.h"
#include "executor/nodeResult.h"
#include "executor/nodeSamplescan.h"
@@ -314,6 +315,11 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
estate, eflags);
break;
+ case T_GatherMerge:
+ result = (PlanState *) ExecInitGatherMerge((GatherMerge *) node,
+ estate, eflags);
+ break;
+
case T_Hash:
result = (PlanState *) ExecInitHash((Hash *) node,
estate, eflags);
@@ -515,6 +521,10 @@ ExecProcNode(PlanState *node)
result = ExecGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ result = ExecGatherMerge((GatherMergeState *) node);
+ break;
+
case T_HashState:
result = ExecHash((HashState *) node);
break;
@@ -673,6 +683,10 @@ ExecEndNode(PlanState *node)
ExecEndGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ ExecEndGatherMerge((GatherMergeState *) node);
+ break;
+
case T_IndexScanState:
ExecEndIndexScan((IndexScanState *) node);
break;
@@ -806,6 +820,9 @@ ExecShutdownNode(PlanState *node)
case T_GatherState:
ExecShutdownGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ ExecShutdownGatherMerge((GatherMergeState *) node);
+ break;
default:
break;
}
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
new file mode 100644
index 0000000..0f08649
--- /dev/null
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -0,0 +1,721 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.c
+ * routines to handle GatherMerge nodes.
+ *
+ * Portions Copyright (c) 1996-2016, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/backend/executor/nodeGatherMerge.c
+ *
+ *-------------------------------------------------------------------------
+ */
+/* INTERFACE ROUTINES
+ * ExecInitGatherMerge - initialize the GatherMerge node
+ * ExecGatherMerge - retrieve the next tuple from the node
+ * ExecEndGatherMerge - shut down the GatherMerge node
+ * ExecReScanGatherMerge - rescan the GatherMerge node
+ */
+
+#include "postgres.h"
+
+#include "access/relscan.h"
+#include "access/xact.h"
+#include "executor/execdebug.h"
+#include "executor/execParallel.h"
+#include "executor/nodeGatherMerge.h"
+#include "executor/nodeSubplan.h"
+#include "executor/tqueue.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+#include "utils/rel.h"
+#include "lib/binaryheap.h"
+
+/*
+ * Tuple array for each worker
+ */
+typedef struct GMReaderTuple
+{
+ HeapTuple *tuple;
+ int readCounter;
+ int nTuples;
+ bool done;
+} GMReaderTuple;
+
+/*
+ * Tuple array size. Performance testing showed that the benefit of an
+ * array size greater than 10 is not worth the extra memory consumed by
+ * the tuple array.
+ */
+#define MAX_TUPLE_STORE 10
+
+static int32 heap_compare_slots(Datum a, Datum b, void *arg);
+static TupleTableSlot *gather_merge_getnext(GatherMergeState *gm_state);
+static HeapTuple gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool force, bool *done);
+static void gather_merge_init(GatherMergeState *gm_state);
+static void ExecShutdownGatherMergeWorkers(GatherMergeState *node);
+static bool gather_merge_readnext(GatherMergeState *gm_state, int reader, bool force);
+static void form_tuple_array(GatherMergeState *gm_state, int reader);
+
+/* ----------------------------------------------------------------
+ * ExecInitGatherMerge
+ * ----------------------------------------------------------------
+ */
+GatherMergeState *
+ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags)
+{
+ GatherMergeState *gm_state;
+ Plan *outerNode;
+ bool hasoid;
+ TupleDesc tupDesc;
+
+ /* Gather merge node doesn't have innerPlan node. */
+ Assert(innerPlan(node) == NULL);
+
+ /*
+ * create state structure
+ */
+ gm_state = makeNode(GatherMergeState);
+ gm_state->ps.plan = (Plan *) node;
+ gm_state->ps.state = estate;
+
+ /*
+ * Miscellaneous initialization
+ *
+ * create expression context for node
+ */
+ ExecAssignExprContext(estate, &gm_state->ps);
+
+ /*
+ * initialize child expressions
+ */
+ gm_state->ps.targetlist = (List *)
+ ExecInitExpr((Expr *) node->plan.targetlist,
+ (PlanState *) gm_state);
+ gm_state->ps.qual = (List *)
+ ExecInitExpr((Expr *) node->plan.qual,
+ (PlanState *) gm_state);
+
+ /*
+ * tuple table initialization
+ */
+ ExecInitResultTupleSlot(estate, &gm_state->ps);
+
+ /*
+ * now initialize outer plan
+ */
+ outerNode = outerPlan(node);
+ outerPlanState(gm_state) = ExecInitNode(outerNode, estate, eflags);
+
+ gm_state->ps.ps_TupFromTlist = false;
+
+ /*
+ * Initialize result tuple type and projection info.
+ */
+ ExecAssignResultTypeFromTL(&gm_state->ps);
+ ExecAssignProjectionInfo(&gm_state->ps, NULL);
+
+ gm_state->gm_initialized = false;
+
+ /*
+ * initialize sort-key information
+ */
+ if (node->numCols)
+ {
+ int i;
+
+ gm_state->gm_nkeys = node->numCols;
+ gm_state->gm_sortkeys = palloc0(sizeof(SortSupportData) * node->numCols);
+ for (i = 0; i < node->numCols; i++)
+ {
+ SortSupport sortKey = gm_state->gm_sortkeys + i;
+
+ sortKey->ssup_cxt = CurrentMemoryContext;
+ sortKey->ssup_collation = node->collations[i];
+ sortKey->ssup_nulls_first = node->nullsFirst[i];
+ sortKey->ssup_attno = node->sortColIdx[i];
+
+ /*
+ * We don't perform abbreviated key conversion here, for the same
+ * reasons that it isn't used in MergeAppend
+ */
+ sortKey->abbreviate = false;
+
+ PrepareSortSupportFromOrderingOp(node->sortOperators[i], sortKey);
+ }
+ }
+
+ /*
+ * Store the tuple descriptor into gather merge state, so we can use it
+ * later while initializing the gather merge slots.
+ */
+ if (!ExecContextForcesOids(&gm_state->ps, &hasoid))
+ hasoid = false;
+ tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid);
+ gm_state->tupDesc = tupDesc;
+
+ return gm_state;
+}
+
+/* ----------------------------------------------------------------
+ * ExecGatherMerge(node)
+ *
+ * Scans the relation via multiple workers and returns
+ * the next qualifying tuple.
+ * ----------------------------------------------------------------
+ */
+TupleTableSlot *
+ExecGatherMerge(GatherMergeState *node)
+{
+ int i;
+ TupleTableSlot *slot;
+ TupleTableSlot *resultSlot;
+ ExprDoneCond isDone;
+ ExprContext *econtext;
+
+ /*
+ * Initialize the parallel context and workers on first execution. We do
+ * this on first execution rather than during node initialization, as it
+ * needs to allocate a large dynamic shared memory segment, so it is
+ * better to do so only if it is really needed.
+ */
+ if (!node->initialized)
+ {
+ EState *estate = node->ps.state;
+ GatherMerge *gm = (GatherMerge *) node->ps.plan;
+
+ /*
+ * Sometimes we might have to run without parallelism; but if parallel
+ * mode is active then we can try to fire up some workers.
+ */
+ if (gm->num_workers > 0 && IsInParallelMode())
+ {
+ ParallelContext *pcxt;
+
+ /* Initialize the workers required to execute the Gather Merge node. */
+ if (!node->pei)
+ node->pei = ExecInitParallelPlan(node->ps.lefttree,
+ estate,
+ gm->num_workers);
+
+ /*
+ * Register backend workers. We might not get as many as we
+ * requested, or indeed any at all.
+ */
+ pcxt = node->pei->pcxt;
+ LaunchParallelWorkers(pcxt);
+ node->nworkers_launched = pcxt->nworkers_launched;
+
+ /* Set up tuple queue readers to read the results. */
+ if (pcxt->nworkers_launched > 0)
+ {
+ node->nreaders = 0;
+ node->reader =
+ palloc(pcxt->nworkers_launched * sizeof(TupleQueueReader *));
+
+ Assert(gm->numCols);
+
+ for (i = 0; i < pcxt->nworkers_launched; ++i)
+ {
+ shm_mq_set_handle(node->pei->tqueue[i],
+ pcxt->worker[i].bgwhandle);
+ node->reader[node->nreaders++] =
+ CreateTupleQueueReader(node->pei->tqueue[i],
+ node->tupDesc);
+ }
+ }
+ else
+ {
+ /* No workers? Then never mind. */
+ ExecShutdownGatherMergeWorkers(node);
+ }
+ }
+
+ /* Always allow the leader to participate in gather merge. */
+ node->need_to_scan_locally = true;
+ node->initialized = true;
+ }
+
+ /*
+ * Check to see if we're still projecting out tuples from a previous scan
+ * tuple (because there is a function-returning-set in the projection
+ * expressions). If so, try to project another one.
+ */
+ if (node->ps.ps_TupFromTlist)
+ {
+ resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+ if (isDone == ExprMultipleResult)
+ return resultSlot;
+ /* Done with that source tuple... */
+ node->ps.ps_TupFromTlist = false;
+ }
+
+ /*
+ * Reset per-tuple memory context to free any expression evaluation
+ * storage allocated in the previous tuple cycle. Note we can't do this
+ * until we're done projecting.
+ */
+ econtext = node->ps.ps_ExprContext;
+ ResetExprContext(econtext);
+
+ /* Get and return the next tuple, projecting if necessary. */
+ for (;;)
+ {
+ /*
+ * Get next tuple, either from one of our workers, or by running the
+ * plan ourselves.
+ */
+ slot = gather_merge_getnext(node);
+ if (TupIsNull(slot))
+ return NULL;
+
+ /*
+ * form the result tuple using ExecProject(), and return it --- unless
+ * the projection produces an empty set, in which case we must loop
+ * back around for another tuple
+ */
+ econtext->ecxt_outertuple = slot;
+ resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+
+ if (isDone != ExprEndResult)
+ {
+ node->ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
+ return resultSlot;
+ }
+ }
+
+ return slot;
+}
+
+/* ----------------------------------------------------------------
+ * ExecEndGatherMerge
+ *
+ * frees any storage allocated through C routines.
+ * ----------------------------------------------------------------
+ */
+void
+ExecEndGatherMerge(GatherMergeState *node)
+{
+ ExecShutdownGatherMerge(node);
+ ExecFreeExprContext(&node->ps);
+ ExecClearTuple(node->ps.ps_ResultTupleSlot);
+ ExecEndNode(outerPlanState(node));
+}
+
+/* ----------------------------------------------------------------
+ * ExecShutdownGatherMerge
+ *
+ * Destroy the setup for parallel workers including parallel context.
+ * Collect all the stats after workers are stopped, else some work
+ * done by workers won't be accounted.
+ * ----------------------------------------------------------------
+ */
+void
+ExecShutdownGatherMerge(GatherMergeState *node)
+{
+ ExecShutdownGatherMergeWorkers(node);
+
+ /* Now destroy the parallel context. */
+ if (node->pei != NULL)
+ {
+ ExecParallelCleanup(node->pei);
+ node->pei = NULL;
+ }
+}
+
+/* ----------------------------------------------------------------
+ * ExecReScanGatherMerge
+ *
+ * Re-initialize the workers and rescans a relation via them.
+ * ----------------------------------------------------------------
+ */
+void
+ExecReScanGatherMerge(GatherMergeState *node)
+{
+ /*
+ * Re-initialize the parallel workers to perform rescan of relation. We
+ * want to gracefully shutdown all the workers so that they should be able
+ * to propagate any error or other information to master backend before
+ * dying. Parallel context will be reused for rescan.
+ */
+ ExecShutdownGatherMergeWorkers(node);
+
+ node->initialized = false;
+
+ if (node->pei)
+ ExecParallelReinitialize(node->pei);
+
+ ExecReScan(node->ps.lefttree);
+}
+
+/* ----------------------------------------------------------------
+ * ExecShutdownGatherMergeWorkers
+ *
+ * Destroy the parallel workers. Collect all the stats after
+ * workers are stopped, else some work done by workers won't be
+ * accounted.
+ * ----------------------------------------------------------------
+ */
+static void
+ExecShutdownGatherMergeWorkers(GatherMergeState *node)
+{
+ /* Shut down tuple queue readers before shutting down workers. */
+ if (node->reader != NULL)
+ {
+ int i;
+
+ for (i = 0; i < node->nreaders; ++i)
+ if (node->reader[i])
+ DestroyTupleQueueReader(node->reader[i]);
+
+ pfree(node->reader);
+ node->reader = NULL;
+ }
+
+ /* Now shut down the workers. */
+ if (node->pei != NULL)
+ ExecParallelFinish(node->pei);
+}
+
+/*
+ * Initialize the Gather merge tuple read.
+ *
+ * Pull at least a single tuple from each worker plus the leader, and
+ * set up the heap.
+ */
+static void
+gather_merge_init(GatherMergeState *gm_state)
+{
+ int nreaders = gm_state->nreaders;
+ bool initialize = true;
+ int i;
+
+ /*
+ * Allocate gm_slots for the number of workers + one more slot for the
+ * leader. The last slot is always for the leader: the leader calls
+ * ExecProcNode() to read a tuple, which returns a TupleTableSlot that is
+ * assigned directly to the corresponding gm_slot, so just initialize the
+ * leader's gm_slot to NULL. For the other slots, the code below calls
+ * ExecInitExtraTupleSlot(), which initializes the worker slots.
+ */
+ gm_state->gm_slots =
+ palloc((gm_state->nreaders + 1) * sizeof(TupleTableSlot *));
+ gm_state->gm_slots[gm_state->nreaders] = NULL;
+
+ /* Initialize the tuple slot and tuple array for each worker */
+ gm_state->gm_tuple = (GMReaderTuple *) palloc0(sizeof(GMReaderTuple) * (gm_state->nreaders + 1));
+ for (i = 0; i < gm_state->nreaders; i++)
+ {
+ /* Allocate the tuple array with MAX_TUPLE_STORE size */
+ gm_state->gm_tuple[i].tuple = (HeapTuple *) palloc0(sizeof(HeapTuple) * MAX_TUPLE_STORE);
+
+ /* Initialize slot for worker */
+ gm_state->gm_slots[i] = ExecInitExtraTupleSlot(gm_state->ps.state);
+ ExecSetSlotDescriptor(gm_state->gm_slots[i],
+ gm_state->tupDesc);
+ }
+
+ /* Allocate the resources for the sort */
+ gm_state->gm_heap = binaryheap_allocate(gm_state->nreaders + 1, heap_compare_slots, gm_state);
+
+ /*
+ * First try to read a tuple from each worker (including the leader) in
+ * nowait mode, so that we initialize reads from every participant. If
+ * any active worker was unable to produce a tuple, re-read, this time
+ * reading in wait mode. For a worker that already produced a tuple in
+ * the earlier loop and is still active, just try to fill its tuple
+ * array if more tuples are available.
+ */
+reread:
+ for (i = 0; i < nreaders + 1; i++)
+ {
+ if (!gm_state->gm_tuple[i].done &&
+ (TupIsNull(gm_state->gm_slots[i]) ||
+ gm_state->gm_slots[i]->tts_isempty))
+ {
+ if (gather_merge_readnext(gm_state, i, initialize ? false : true))
+ {
+ binaryheap_add_unordered(gm_state->gm_heap,
+ Int32GetDatum(i));
+ }
+ }
+ else
+ form_tuple_array(gm_state, i);
+ }
+ initialize = false;
+
+ for (i = 0; i < nreaders; i++)
+ if (!gm_state->gm_tuple[i].done &&
+ (TupIsNull(gm_state->gm_slots[i]) ||
+ gm_state->gm_slots[i]->tts_isempty))
+ goto reread;
+
+ binaryheap_build(gm_state->gm_heap);
+ gm_state->gm_initialized = true;
+}
+
+/*
+ * Clear out the tuple table slot for each gather merge slot, and return
+ * one cleared slot.
+ */
+static TupleTableSlot *
+gather_merge_clear_slots(GatherMergeState *gm_state)
+{
+ int i;
+
+ for (i = 0; i < gm_state->nreaders; i++)
+ {
+ pfree(gm_state->gm_tuple[i].tuple);
+ gm_state->gm_slots[i] = ExecClearTuple(gm_state->gm_slots[i]);
+ }
+
+ /* Free the tuple array, as we no longer need it */
+ pfree(gm_state->gm_tuple);
+ /* Free the binaryheap, which was created for sort */
+ binaryheap_free(gm_state->gm_heap);
+
+ /* return any clear slot */
+ return gm_state->gm_slots[0];
+}
+
+/*
+ * Read the next tuple for gather merge.
+ *
+ * Fetch the next tuple in sort order from the heap.
+ */
+static TupleTableSlot *
+gather_merge_getnext(GatherMergeState *gm_state)
+{
+ int i;
+
+ /*
+ * First time through: pull the first tuple from each participant, and set
+ * up the heap.
+ */
+ if (gm_state->gm_initialized == false)
+ gather_merge_init(gm_state);
+ else
+ {
+ /*
+ * Otherwise, pull the next tuple from whichever participant we
+ * returned from last time, and reinsert the index into the heap,
+ * because it might now compare differently against the existing
+ * elements of the heap.
+ */
+ i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+
+ if (gather_merge_readnext(gm_state, i, true))
+ binaryheap_replace_first(gm_state->gm_heap, Int32GetDatum(i));
+ else
+ (void) binaryheap_remove_first(gm_state->gm_heap);
+ }
+
+ if (binaryheap_empty(gm_state->gm_heap))
+ {
+ /* All the queues are exhausted, and so is the heap */
+ return gather_merge_clear_slots(gm_state);
+ }
+ else
+ {
+ i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+ return gm_state->gm_slots[i];
+ }
+
+ return gather_merge_clear_slots(gm_state);
+}
+
+/*
+ * Read tuples for the given reader in nowait mode, and form the tuple array.
+ */
+static void
+form_tuple_array(GatherMergeState *gm_state, int reader)
+{
+ GMReaderTuple *gm_tuple = &gm_state->gm_tuple[reader];
+ int i;
+
+ /* Last slot is for leader and we don't build tuple array for leader */
+ if (reader == gm_state->nreaders)
+ return;
+
+ /*
+ * If we have already read all the tuples from the tuple array, reset
+ * the counters to zero.
+ */
+ if (gm_tuple->nTuples == gm_tuple->readCounter)
+ gm_tuple->nTuples = gm_tuple->readCounter = 0;
+
+ /* Tuple array is already full? */
+ if (gm_tuple->nTuples == MAX_TUPLE_STORE)
+ return;
+
+ for (i = gm_tuple->nTuples; i < MAX_TUPLE_STORE; i++)
+ {
+ gm_tuple->tuple[i] = heap_copytuple(gm_readnext_tuple(gm_state,
+ reader,
+ false,
+ &gm_tuple->done));
+ if (!HeapTupleIsValid(gm_tuple->tuple[i]))
+ break;
+ gm_tuple->nTuples++;
+ }
+}
+
+/*
+ * Attempt to read a tuple for the given reader and store it into the
+ * reader's tuple slot.
+ *
+ * If the worker's tuple array contains any tuple, just read a tuple from
+ * the tuple array. Otherwise read a tuple from the queue and also
+ * attempt to form the tuple array.
+ *
+ * When force is true, the tuple is read in wait mode. For gather merge
+ * we need to refill the slot from which we returned the previous tuple,
+ * so that tuple must be read in wait mode. During the initialization
+ * phase, we first try to read tuples in nowait mode, since we want to
+ * initialize all the readers. See gather_merge_init() for more details.
+ *
+ * Returns true if a tuple was found for the reader, otherwise false.
+ */
+static bool
+gather_merge_readnext(GatherMergeState *gm_state, int reader, bool force)
+{
+ HeapTuple tup = NULL;
+
+ /* Are we here for the leader? */
+ if (gm_state->nreaders == reader)
+ {
+ if (gm_state->need_to_scan_locally)
+ {
+ PlanState *outerPlan = outerPlanState(gm_state);
+ TupleTableSlot *outerTupleSlot;
+
+ outerTupleSlot = ExecProcNode(outerPlan);
+
+ if (!TupIsNull(outerTupleSlot))
+ {
+ gm_state->gm_slots[reader] = outerTupleSlot;
+ return true;
+ }
+ gm_state->gm_tuple[reader].done = true;
+ gm_state->need_to_scan_locally = false;
+ }
+ return false;
+ }
+ /* Does tuple array have any available tuples? */
+ else if (gm_state->gm_tuple[reader].nTuples >
+ gm_state->gm_tuple[reader].readCounter)
+ {
+ GMReaderTuple *gm_tuple = &gm_state->gm_tuple[reader];
+
+ tup = gm_tuple->tuple[gm_tuple->readCounter++];
+ }
+ /* reader exhausted? */
+ else if (gm_state->gm_tuple[reader].done)
+ {
+ DestroyTupleQueueReader(gm_state->reader[reader]);
+ gm_state->reader[reader] = NULL;
+ return false;
+ }
+ else
+ {
+ tup = heap_copytuple(gm_readnext_tuple(gm_state,
+ reader,
+ force,
+ &gm_state->gm_tuple[reader].done));
+
+ /*
+ * Try to read more tuples in nowait mode and store them into the
+ * tuple array.
+ */
+ if (HeapTupleIsValid(tup))
+ form_tuple_array(gm_state, reader);
+ else
+ return false;
+ }
+
+ Assert(HeapTupleIsValid(tup));
+
+ /* Build the TupleTableSlot for the given tuple */
+ ExecStoreTuple(tup, /* tuple to store */
+ gm_state->gm_slots[reader], /* slot in which to store the
+ * tuple */
+ InvalidBuffer, /* buffer associated with this tuple */
+ true); /* pfree this pointer if not from heap */
+
+ return true;
+}
+
+/*
+ * Attempt to read a tuple from given reader.
+ */
+static HeapTuple
+gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool force, bool *done)
+{
+ TupleQueueReader *reader;
+ HeapTuple tup = NULL;
+ MemoryContext oldContext;
+ MemoryContext tupleContext = gm_state->ps.ps_ExprContext->ecxt_per_tuple_memory;
+
+ if (done != NULL)
+ *done = false;
+
+ /* Check for async events, particularly messages from workers. */
+ CHECK_FOR_INTERRUPTS();
+
+ /* Attempt to read a tuple. */
+ reader = gm_state->reader[nreader];
+ /* Run TupleQueueReaders in per-tuple context */
+ oldContext = MemoryContextSwitchTo(tupleContext);
+ tup = TupleQueueReaderNext(reader, force ? false : true, done);
+ MemoryContextSwitchTo(oldContext);
+
+ return tup;
+}
+
+/*
+ * We have one slot for each item in the heap array. We use SlotNumber
+ * to store slot indexes. This doesn't actually provide any formal
+ * type-safety, but it makes the code more self-documenting.
+ */
+typedef int32 SlotNumber;
+
+/*
+ * Compare the tuples in the two given slots.
+ */
+static int32
+heap_compare_slots(Datum a, Datum b, void *arg)
+{
+ GatherMergeState *node = (GatherMergeState *) arg;
+ SlotNumber slot1 = DatumGetInt32(a);
+ SlotNumber slot2 = DatumGetInt32(b);
+
+ TupleTableSlot *s1 = node->gm_slots[slot1];
+ TupleTableSlot *s2 = node->gm_slots[slot2];
+ int nkey;
+
+ Assert(!TupIsNull(s1));
+ Assert(!TupIsNull(s2));
+
+ for (nkey = 0; nkey < node->gm_nkeys; nkey++)
+ {
+ SortSupport sortKey = node->gm_sortkeys + nkey;
+ AttrNumber attno = sortKey->ssup_attno;
+ Datum datum1,
+ datum2;
+ bool isNull1,
+ isNull2;
+ int compare;
+
+ datum1 = slot_getattr(s1, attno, &isNull1);
+ datum2 = slot_getattr(s2, attno, &isNull2);
+
+ compare = ApplySortComparator(datum1, isNull1,
+ datum2, isNull2,
+ sortKey);
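+
+ /* invert the result, since binaryheap.c implements a max-heap */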
+ if (compare != 0)
+ return -compare;
+ }
+ return 0;
+}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 71714bc..8b92c1a 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -341,6 +341,31 @@ _copyGather(const Gather *from)
return newnode;
}
+/*
+ * _copyGatherMerge
+ */
+static GatherMerge *
+_copyGatherMerge(const GatherMerge *from)
+{
+ GatherMerge *newnode = makeNode(GatherMerge);
+
+ /*
+ * copy node superclass fields
+ */
+ CopyPlanFields((const Plan *) from, (Plan *) newnode);
+
+ /*
+ * copy remainder of node
+ */
+ COPY_SCALAR_FIELD(num_workers);
+ COPY_SCALAR_FIELD(numCols);
+ COPY_POINTER_FIELD(sortColIdx, from->numCols * sizeof(AttrNumber));
+ COPY_POINTER_FIELD(sortOperators, from->numCols * sizeof(Oid));
+ COPY_POINTER_FIELD(collations, from->numCols * sizeof(Oid));
+ COPY_POINTER_FIELD(nullsFirst, from->numCols * sizeof(bool));
+
+ return newnode;
+}
/*
* CopyScanFields
@@ -4343,6 +4368,9 @@ copyObject(const void *from)
case T_Gather:
retval = _copyGather(from);
break;
+ case T_GatherMerge:
+ retval = _copyGatherMerge(from);
+ break;
case T_SeqScan:
retval = _copySeqScan(from);
break;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index ae86954..5dea0f7 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -447,6 +447,35 @@ _outGather(StringInfo str, const Gather *node)
}
static void
+_outGatherMerge(StringInfo str, const GatherMerge *node)
+{
+ int i;
+
+ WRITE_NODE_TYPE("GATHERMERGE");
+
+ _outPlanInfo(str, (const Plan *) node);
+
+ WRITE_INT_FIELD(num_workers);
+ WRITE_INT_FIELD(numCols);
+
+ appendStringInfoString(str, " :sortColIdx");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %d", node->sortColIdx[i]);
+
+ appendStringInfoString(str, " :sortOperators");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %u", node->sortOperators[i]);
+
+ appendStringInfoString(str, " :collations");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %u", node->collations[i]);
+
+ appendStringInfoString(str, " :nullsFirst");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %s", booltostr(node->nullsFirst[i]));
+}
+
+static void
_outScan(StringInfo str, const Scan *node)
{
WRITE_NODE_TYPE("SCAN");
@@ -1964,6 +1993,18 @@ _outLimitPath(StringInfo str, const LimitPath *node)
}
static void
+_outGatherMergePath(StringInfo str, const GatherMergePath *node)
+{
+ WRITE_NODE_TYPE("GATHERMERGEPATH");
+
+ _outPathInfo(str, (const Path *) node);
+
+ WRITE_NODE_FIELD(subpath);
+ WRITE_INT_FIELD(num_workers);
+ WRITE_BOOL_FIELD(single_copy);
+}
+
+static void
_outNestPath(StringInfo str, const NestPath *node)
{
WRITE_NODE_TYPE("NESTPATH");
@@ -3322,6 +3363,9 @@ outNode(StringInfo str, const void *obj)
case T_Gather:
_outGather(str, obj);
break;
+ case T_GatherMerge:
+ _outGatherMerge(str, obj);
+ break;
case T_Scan:
_outScan(str, obj);
break;
@@ -3649,6 +3693,9 @@ outNode(StringInfo str, const void *obj)
case T_LimitPath:
_outLimitPath(str, obj);
break;
+ case T_GatherMergePath:
+ _outGatherMergePath(str, obj);
+ break;
case T_NestPath:
_outNestPath(str, obj);
break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 917e6c8..77a452e 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -2075,6 +2075,26 @@ _readGather(void)
}
/*
+ * _readGatherMerge
+ */
+static GatherMerge *
+_readGatherMerge(void)
+{
+ READ_LOCALS(GatherMerge);
+
+ ReadCommonPlan(&local_node->plan);
+
+ READ_INT_FIELD(num_workers);
+ READ_INT_FIELD(numCols);
+ READ_ATTRNUMBER_ARRAY(sortColIdx, local_node->numCols);
+ READ_OID_ARRAY(sortOperators, local_node->numCols);
+ READ_OID_ARRAY(collations, local_node->numCols);
+ READ_BOOL_ARRAY(nullsFirst, local_node->numCols);
+
+ READ_DONE();
+}
+
+/*
* _readHash
*/
static Hash *
@@ -2477,6 +2497,8 @@ parseNodeString(void)
return_value = _readUnique();
else if (MATCH("GATHER", 6))
return_value = _readGather();
+ else if (MATCH("GATHERMERGE", 11))
+ return_value = _readGatherMerge();
else if (MATCH("HASH", 4))
return_value = _readHash();
else if (MATCH("SETOP", 5))
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index 2a49639..5dbb83e 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -126,6 +126,7 @@ bool enable_nestloop = true;
bool enable_material = true;
bool enable_mergejoin = true;
bool enable_hashjoin = true;
+bool enable_gathermerge = true;
typedef struct
{
@@ -391,6 +392,70 @@ cost_gather(GatherPath *path, PlannerInfo *root,
}
/*
+ * cost_gather_merge
+ * Determines and returns the cost of gather merge path.
+ *
+ * GatherMerge merges several pre-sorted input streams, using a heap that at
+ * any given instant holds the next tuple from each stream. If there are N
+ * streams, we need about N*log2(N) tuple comparisons to construct the heap at
+ * startup, and then for each output tuple, about log2(N) comparisons to delete
+ * the top heap entry and another log2(N) comparisons to insert its successor
+ * from the same stream.
+ *
+ * The heap is never spilled to disk, since we assume N is not very large,
+ * so this is much simpler than cost_sort.
+ */
+void
+cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+ RelOptInfo *rel, ParamPathInfo *param_info,
+ Cost input_startup_cost, Cost input_total_cost)
+{
+ Cost startup_cost = 0;
+ Cost run_cost = 0;
+ Cost comparison_cost;
+ double N;
+ double logN;
+
+ /* Mark the path with the correct row estimate */
+ if (param_info)
+ path->path.rows = param_info->ppi_rows;
+ else
+ path->path.rows = path->subpath->rows;
+
+ if (!enable_gathermerge)
+ startup_cost += disable_cost;
+
+ /*
+ * Avoid log(0)...
+ */
+ N = (path->num_workers < 2) ? 2.0 : (double) path->num_workers;
+ logN = LOG2(N);
+
+ /* Assumed cost per tuple comparison */
+ comparison_cost = 2.0 * cpu_operator_cost;
+
+ /* Heap creation cost */
+ startup_cost += comparison_cost * N * logN;
+
+ /* Per-tuple heap maintenance cost */
+ run_cost += path->path.rows * comparison_cost * 2.0 * logN;
+
+ /* small cost for heap management, like cost_merge_append */
+ run_cost += cpu_operator_cost * path->path.rows;
+
+ /*
+ * Parallel setup and communication cost. Gather Merge requires a tuple
+ * to be read in wait mode from each worker, so we charge some extra
+ * cost for that.
+ */
+ startup_cost += parallel_setup_cost;
+ run_cost += parallel_tuple_cost * path->path.rows;
+
+ path->path.startup_cost = startup_cost + input_startup_cost;
+ path->path.total_cost = (startup_cost + run_cost + input_total_cost);
+}
+
+/*
* cost_index
* Determines and returns the cost of scanning a relation using an index.
*
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index ad49674..d4fea89 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -270,6 +270,11 @@ static ModifyTable *make_modifytable(PlannerInfo *root,
List *resultRelations, List *subplans,
List *withCheckOptionLists, List *returningLists,
List *rowMarks, OnConflictExpr *onconflict, int epqParam);
+static GatherMerge *create_gather_merge_plan(PlannerInfo *root,
+ GatherMergePath *best_path);
+static GatherMerge *make_gather_merge(List *qptlist, List *qpqual,
+ int nworkers, bool single_copy,
+ Plan *subplan);
/*
@@ -463,6 +468,10 @@ create_plan_recurse(PlannerInfo *root, Path *best_path, int flags)
(LimitPath *) best_path,
flags);
break;
+ case T_GatherMerge:
+ plan = (Plan *) create_gather_merge_plan(root,
+ (GatherMergePath *) best_path);
+ break;
default:
elog(ERROR, "unrecognized node type: %d",
(int) best_path->pathtype);
@@ -2246,6 +2255,90 @@ create_limit_plan(PlannerInfo *root, LimitPath *best_path, int flags)
return plan;
}
+/*
+ * create_gather_merge_plan
+ *
+ * Create a Gather merge plan for 'best_path' and (recursively)
+ * plans for its subpaths.
+ */
+static GatherMerge *
+create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
+{
+ GatherMerge *gm_plan;
+ Plan *subplan;
+ List *pathkeys = best_path->path.pathkeys;
+ int numsortkeys;
+ AttrNumber *sortColIdx;
+ Oid *sortOperators;
+ Oid *collations;
+ bool *nullsFirst;
+
+ subplan = create_plan_recurse(root, best_path->subpath, CP_EXACT_TLIST);
+
+ gm_plan = make_gather_merge(subplan->targetlist,
+ NIL,
+ best_path->num_workers,
+ best_path->single_copy,
+ subplan);
+
+ copy_generic_path_info(&gm_plan->plan, &best_path->path);
+
+ if (pathkeys)
+ {
+ /* Compute sort column info, and adjust GatherMerge tlist as needed */
+ (void) prepare_sort_from_pathkeys(&gm_plan->plan, pathkeys,
+ best_path->path.parent->relids,
+ NULL,
+ true,
+ &gm_plan->numCols,
+ &gm_plan->sortColIdx,
+ &gm_plan->sortOperators,
+ &gm_plan->collations,
+ &gm_plan->nullsFirst);
+
+
+ /* Compute sort column info, and adjust subplan's tlist as needed */
+ subplan = prepare_sort_from_pathkeys(subplan, pathkeys,
+ best_path->subpath->parent->relids,
+ gm_plan->sortColIdx,
+ false,
+ &numsortkeys,
+ &sortColIdx,
+ &sortOperators,
+ &collations,
+ &nullsFirst);
+
+ /*
+ * Check that we got the same sort key information. We just Assert
+ * that the sortops match, since those depend only on the pathkeys;
+ * but it seems like a good idea to check the sort column numbers
+ * explicitly, to ensure the tlists really do match up.
+ */
+ Assert(numsortkeys == gm_plan->numCols);
+ if (memcmp(sortColIdx, gm_plan->sortColIdx,
+ numsortkeys * sizeof(AttrNumber)) != 0)
+ elog(ERROR, "GatherMerge child's targetlist doesn't match GatherMerge");
+ Assert(memcmp(sortOperators, gm_plan->sortOperators,
+ numsortkeys * sizeof(Oid)) == 0);
+ Assert(memcmp(collations, gm_plan->collations,
+ numsortkeys * sizeof(Oid)) == 0);
+ Assert(memcmp(nullsFirst, gm_plan->nullsFirst,
+ numsortkeys * sizeof(bool)) == 0);
+
+ /* Now, insert a Sort node if subplan isn't sufficiently ordered */
+ if (!pathkeys_contained_in(pathkeys, best_path->subpath->pathkeys))
+ subplan = (Plan *) make_sort(subplan, numsortkeys,
+ sortColIdx, sortOperators,
+ collations, nullsFirst);
+
+ gm_plan->plan.lefttree = subplan;
+ }
+
+ /* use parallel mode for parallel plans. */
+ root->glob->parallelModeNeeded = true;
+
+ return gm_plan;
+}
/*****************************************************************************
*
@@ -5909,6 +6002,26 @@ make_gather(List *qptlist,
return node;
}
+static GatherMerge *
+make_gather_merge(List *qptlist,
+ List *qpqual,
+ int nworkers,
+ bool single_copy,
+ Plan *subplan)
+{
+ GatherMerge *node = makeNode(GatherMerge);
+ Plan *plan = &node->plan;
+
+ /* cost should be inserted by caller */
+ plan->targetlist = qptlist;
+ plan->qual = qpqual;
+ plan->lefttree = subplan;
+ plan->righttree = NULL;
+ node->num_workers = nworkers;
+
+ return node;
+}
+
/*
* distinctList is a list of SortGroupClauses, identifying the targetlist
* items that should be considered by the SetOp filter. The input path must
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 644b8b6..0325c53 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -3725,14 +3725,59 @@ create_grouping_paths(PlannerInfo *root,
/*
* Now generate a complete GroupAgg Path atop of the cheapest partial
- * path. We need only bother with the cheapest path here, as the
- * output of Gather is never sorted.
+ * path. We generate a Gather path based on the cheapest partial path,
+ * and a GatherMerge path for each partial path that is properly sorted.
*/
if (grouped_rel->partial_pathlist)
{
Path *path = (Path *) linitial(grouped_rel->partial_pathlist);
double total_groups = path->rows * path->parallel_workers;
+ /*
+ * GatherMerge output is always sorted, so if there is a GROUP BY
+ * clause, try to generate a GatherMerge path for each partial path.
+ */
+ if (parse->groupClause)
+ {
+ foreach(lc, grouped_rel->partial_pathlist)
+ {
+ Path *gmpath = (Path *) lfirst(lc);
+
+ if (!pathkeys_contained_in(root->group_pathkeys, gmpath->pathkeys))
+ continue;
+
+ /* create gather merge path */
+ gmpath = (Path *) create_gather_merge_path(root,
+ grouped_rel,
+ gmpath,
+ NULL,
+ root->group_pathkeys,
+ NULL);
+
+ if (parse->hasAggs)
+ add_path(grouped_rel, (Path *)
+ create_agg_path(root,
+ grouped_rel,
+ gmpath,
+ target,
+ parse->groupClause ? AGG_SORTED : AGG_PLAIN,
+ AGGSPLIT_FINAL_DESERIAL,
+ parse->groupClause,
+ (List *) parse->havingQual,
+ &agg_final_costs,
+ dNumGroups));
+ else
+ add_path(grouped_rel, (Path *)
+ create_group_path(root,
+ grouped_rel,
+ gmpath,
+ target,
+ parse->groupClause,
+ (List *) parse->havingQual,
+ dNumGroups));
+ }
+ }
+
path = (Path *) create_gather_path(root,
grouped_rel,
path,
@@ -3870,6 +3915,12 @@ create_grouping_paths(PlannerInfo *root,
/* Now choose the best path(s) */
set_cheapest(grouped_rel);
+ /*
+ * The partial pathlist generated for the grouped relation is of no
+ * further use, so just reset it to NIL.
+ */
+ grouped_rel->partial_pathlist = NIL;
+
return grouped_rel;
}
@@ -4166,6 +4217,36 @@ create_distinct_paths(PlannerInfo *root,
}
}
+ /*
+ * Generate GatherMerge path for each partial path.
+ */
+ foreach(lc, input_rel->partial_pathlist)
+ {
+ Path *path = (Path *) lfirst(lc);
+
+ if (!pathkeys_contained_in(needed_pathkeys, path->pathkeys))
+ {
+ path = (Path *) create_sort_path(root, distinct_rel,
+ path,
+ needed_pathkeys,
+ -1.0);
+ }
+
+ /* create gather merge path */
+ path = (Path *) create_gather_merge_path(root,
+ distinct_rel,
+ path,
+ NULL,
+ needed_pathkeys,
+ NULL);
+ add_path(distinct_rel, (Path *)
+ create_upper_unique_path(root,
+ distinct_rel,
+ path,
+ list_length(root->distinct_pathkeys),
+ numDistinctRows));
+ }
+
/* For explicit-sort case, always use the more rigorous clause */
if (list_length(root->distinct_pathkeys) <
list_length(root->sort_pathkeys))
@@ -4310,6 +4391,39 @@ create_ordered_paths(PlannerInfo *root,
ordered_rel->useridiscurrent = input_rel->useridiscurrent;
ordered_rel->fdwroutine = input_rel->fdwroutine;
+ foreach(lc, input_rel->partial_pathlist)
+ {
+ Path *path = (Path *) lfirst(lc);
+ bool is_sorted;
+
+ is_sorted = pathkeys_contained_in(root->sort_pathkeys,
+ path->pathkeys);
+ if (!is_sorted)
+ {
+ /* An explicit sort here can take advantage of LIMIT */
+ path = (Path *) create_sort_path(root,
+ ordered_rel,
+ path,
+ root->sort_pathkeys,
+ limit_tuples);
+ }
+
+ /* create gather merge path */
+ path = (Path *) create_gather_merge_path(root,
+ ordered_rel,
+ path,
+ target,
+ root->sort_pathkeys,
+ NULL);
+
+ /* Add projection step if needed */
+ if (path->pathtarget != target)
+ path = apply_projection_to_path(root, ordered_rel,
+ path, target);
+
+ add_path(ordered_rel, path);
+ }
+
foreach(lc, input_rel->pathlist)
{
Path *path = (Path *) lfirst(lc);
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index d10a983..d14db7d 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -605,6 +605,7 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
break;
case T_Gather:
+ case T_GatherMerge:
set_upper_references(root, plan, rtoffset);
break;
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 263ba45..760f519 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2682,6 +2682,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
case T_Sort:
case T_Unique:
case T_Gather:
+ case T_GatherMerge:
case T_SetOp:
case T_Group:
break;
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index abb7507..822fca2 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1630,6 +1630,66 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
}
/*
+ * create_gather_merge_path
+ *
+ * Creates a path corresponding to a gather merge scan, returning
+ * the pathnode.
+ */
+GatherMergePath *
+create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
+ PathTarget *target, List *pathkeys,
+ Relids required_outer)
+{
+ GatherMergePath *pathnode = makeNode(GatherMergePath);
+ Cost input_startup_cost = 0;
+ Cost input_total_cost = 0;
+
+ Assert(subpath->parallel_safe);
+ Assert(pathkeys);
+
+ pathnode->path.pathtype = T_GatherMerge;
+ pathnode->path.parent = rel;
+ pathnode->path.param_info = get_baserel_parampathinfo(root, rel,
+ required_outer);
+ pathnode->path.parallel_aware = false;
+
+ pathnode->subpath = subpath;
+ pathnode->num_workers = subpath->parallel_workers;
+ pathnode->path.pathkeys = pathkeys;
+ pathnode->path.pathtarget = target ? target : rel->reltarget;
+ pathnode->path.rows += subpath->rows;
+
+ if (pathkeys_contained_in(pathkeys, subpath->pathkeys))
+ {
+ /* Subpath is adequately ordered, we won't need to sort it */
+ input_startup_cost += subpath->startup_cost;
+ input_total_cost += subpath->total_cost;
+ }
+ else
+ {
+ /* We'll need to insert a Sort node, so include cost for that */
+ Path sort_path; /* dummy for result of cost_sort */
+
+ cost_sort(&sort_path,
+ root,
+ pathkeys,
+ subpath->total_cost,
+ subpath->rows,
+ subpath->pathtarget->width,
+ 0.0,
+ work_mem,
+ -1);
+ input_startup_cost += sort_path.startup_cost;
+ input_total_cost += sort_path.total_cost;
+ }
+
+ cost_gather_merge(pathnode, root, rel, pathnode->path.param_info,
+ input_startup_cost, input_total_cost);
+
+ return pathnode;
+}
+
+/*
* translate_sub_tlist - get subquery column numbers represented by tlist
*
* The given targetlist usually contains only Vars referencing the given relid.
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 65660c1..f605284 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -894,6 +894,15 @@ static struct config_bool ConfigureNamesBool[] =
true,
NULL, NULL, NULL
},
+ {
+ {"enable_gathermerge", PGC_USERSET, QUERY_TUNING_METHOD,
+ gettext_noop("Enables the planner's use of gather merge plans."),
+ NULL
+ },
+ &enable_gathermerge,
+ true,
+ NULL, NULL, NULL
+ },
{
{"geqo", PGC_USERSET, QUERY_TUNING_GEQO,
diff --git a/src/include/executor/nodeGatherMerge.h b/src/include/executor/nodeGatherMerge.h
new file mode 100644
index 0000000..58dcebf
--- /dev/null
+++ b/src/include/executor/nodeGatherMerge.h
@@ -0,0 +1,27 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.h
+ * prototypes for nodeGatherMerge.c
+ *
+ *
+ * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/executor/nodeGatherMerge.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef NODEGATHERMERGE_H
+#define NODEGATHERMERGE_H
+
+#include "nodes/execnodes.h"
+
+extern GatherMergeState *ExecInitGatherMerge(GatherMerge * node,
+ EState *estate,
+ int eflags);
+extern TupleTableSlot *ExecGatherMerge(GatherMergeState * node);
+extern void ExecEndGatherMerge(GatherMergeState * node);
+extern void ExecReScanGatherMerge(GatherMergeState * node);
+extern void ExecShutdownGatherMerge(GatherMergeState * node);
+
+#endif /* NODEGATHERMERGE_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index f6f73f3..279f468 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -1969,6 +1969,33 @@ typedef struct GatherState
} GatherState;
/* ----------------
+ * GatherMergeState information
+ *
+ * Gather Merge nodes launch one or more parallel workers, run a
+ * subplan in those workers, and merge the sorted results.
+ * ----------------
+ */
+struct GMReaderTuple;
+
+typedef struct GatherMergeState
+{
+ PlanState ps; /* its first field is NodeTag */
+ bool initialized;
+ struct ParallelExecutorInfo *pei;
+ int nreaders;
+ int nworkers_launched;
+ struct TupleQueueReader **reader;
+ TupleDesc tupDesc;
+ TupleTableSlot **gm_slots;
+ struct binaryheap *gm_heap; /* binary heap of slot indices */
+ bool gm_initialized; /* gather merge initialized ? */
+ bool need_to_scan_locally;
+ int gm_nkeys;
+ SortSupport gm_sortkeys; /* array of length gm_nkeys */
+ struct GMReaderTuple *gm_tuple; /* array of length nreaders + leader */
+} GatherMergeState;
+
+/* ----------------
* HashState information
* ----------------
*/
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 88297bb..edfb917 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -75,6 +75,7 @@ typedef enum NodeTag
T_WindowAgg,
T_Unique,
T_Gather,
+ T_GatherMerge,
T_Hash,
T_SetOp,
T_LockRows,
@@ -123,6 +124,7 @@ typedef enum NodeTag
T_WindowAggState,
T_UniqueState,
T_GatherState,
+ T_GatherMergeState,
T_HashState,
T_SetOpState,
T_LockRowsState,
@@ -244,6 +246,7 @@ typedef enum NodeTag
T_MaterialPath,
T_UniquePath,
T_GatherPath,
+ T_GatherMergePath,
T_ProjectionPath,
T_SortPath,
T_GroupPath,
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index e2fbc7d..ec319bf 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -765,6 +765,22 @@ typedef struct Gather
bool invisible; /* suppress EXPLAIN display (for testing)? */
} Gather;
+/* ------------
+ * gather merge node
+ * ------------
+ */
+typedef struct GatherMerge
+{
+ Plan plan;
+ int num_workers;
+ /* remaining fields are just like the sort-key info in struct Sort */
+ int numCols; /* number of sort-key columns */
+ AttrNumber *sortColIdx; /* their indexes in the target list */
+ Oid *sortOperators; /* OIDs of operators to sort them by */
+ Oid *collations; /* OIDs of collations */
+ bool *nullsFirst; /* NULLS FIRST/LAST directions */
+} GatherMerge;
+
/* ----------------
* hash build node
*
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index 3a1255a..dfaca79 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -1194,6 +1194,19 @@ typedef struct GatherPath
} GatherPath;
/*
+ * GatherMergePath runs several copies of a plan in parallel and
+ * collects the results, merging them to preserve their common sort order.
+ */
+typedef struct GatherMergePath
+{
+ Path path;
+ Path *subpath; /* path for each worker */
+ int num_workers; /* number of workers sought to help */
+ bool single_copy; /* path must not be executed >1x */
+} GatherMergePath;
+
+
+/*
* All join-type paths share these fields.
*/
diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h
index 2a4df2f..cd48cc4 100644
--- a/src/include/optimizer/cost.h
+++ b/src/include/optimizer/cost.h
@@ -66,6 +66,7 @@ extern bool enable_nestloop;
extern bool enable_material;
extern bool enable_mergejoin;
extern bool enable_hashjoin;
+extern bool enable_gathermerge;
extern int constraint_exclusion;
extern double clamp_row_est(double nrows);
@@ -198,5 +199,8 @@ extern Selectivity clause_selectivity(PlannerInfo *root,
int varRelid,
JoinType jointype,
SpecialJoinInfo *sjinfo);
+extern void cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+ RelOptInfo *rel, ParamPathInfo *param_info,
+ Cost input_startup_cost, Cost input_total_cost);
#endif /* COST_H */
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index 71d9154..3dbe9fc 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -267,5 +267,10 @@ extern ParamPathInfo *get_joinrel_parampathinfo(PlannerInfo *root,
List **restrict_clauses);
extern ParamPathInfo *get_appendrel_parampathinfo(RelOptInfo *appendrel,
Relids required_outer);
+extern GatherMergePath *create_gather_merge_path(PlannerInfo *root,
+ RelOptInfo *rel, Path *subpath,
+ PathTarget *target,
+ List *pathkeys,
+ Relids required_outer);
#endif /* PATHNODE_H */
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
index f06cfa4..5c547e2 100644
--- a/src/test/regress/expected/rangefuncs.out
+++ b/src/test/regress/expected/rangefuncs.out
@@ -2,6 +2,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
name | setting
----------------------+---------
enable_bitmapscan | on
+ enable_gathermerge | on
enable_hashagg | on
enable_hashjoin | on
enable_indexonlyscan | on
@@ -12,7 +13,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
enable_seqscan | on
enable_sort | on
enable_tidscan | on
-(11 rows)
+(12 rows)
CREATE TABLE foo2(fooid int, f2 int);
INSERT INTO foo2 VALUES(1, 11);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 6c6d519..a6c4a5f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -770,6 +770,8 @@ GV
Gather
GatherPath
GatherState
+GatherMerge
+GatherMergeState
Gene
GenericCosts
GenericExprState
On Thu, Oct 27, 2016 at 10:50 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:
Please find attached the latest patch which fixes the review points as
well as additional clean-up.
I've signed up to review this patch and I'm planning to do some
testing. Here's some initial feedback after a quick read-through:
+ if (gather_merge_readnext(gm_state, i, initialize ? false : true))
Clunky ternary operator... how about "!initialize".
+/*
+ * Function clear out a slot in the tuple table for each gather merge
+ * slots and returns the clear clear slot.
+ */
Maybe better like this? "_Clear_ out a slot in the tuple table for
each gather merge _slot_ and _return_ the _cleared_ slot."
+ /* Free tuple array as we no more need it */
"... as we don't need it any more"
+/*
+ * Read the next tuple for gather merge.
+ *
+ * Function fetch the sorted tuple out of the heap.
+ */
"_Fetch_ the sorted tuple out of the heap."
+ * Otherwise, pull the next tuple from whichever participate we
+ * returned from last time, and reinsert the index into the heap,
+ * because it might now compare differently against the existing
s/participate/participant/
+ * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
Shouldn't this say just "(c) 2016, PostgreSQL Global Development
Group"? Are we supposed to be blaming the University of California
for new files?
+#include "executor/tqueue.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+#include "utils/rel.h"
+#include "lib/binaryheap.h"
Not correctly sorted.
+ /*
+ * store the tuple descriptor into gather merge state, so we can use it
+ * later while initilizing the gather merge slots.
+ */
s/initilizing/initializing/
+/* ----------------------------------------------------------------
+ * ExecEndGatherMerge
+ *
+ * frees any storage allocated through C routines.
+ * ----------------------------------------------------------------
The convention in Postgres code seems to be to use a form like "Free
any storage ..." in function documentation. Not sure if that's an
imperative, an infinitive, or if the word "we" is omitted since
English is so fuzzy like that, but it's inconsistent with other
documentation to use "frees" here. Oh, I see that exact wording is in
several other files. I guess I'll just leave this as a complaint
about all those files then :-)
+ * Pull atleast single tuple from each worker + leader and set up the heap.
s/atleast single/at least a single/
+ * Read the tuple for given reader into nowait mode, and form the tuple array.
s/ into / in /
+ * Function attempt to read tuple for the given reader and store it into reader
s/Function attempt /Attempt /
+ * Function returns true if found tuple for the reader, otherwise returns
s/Function returns /Return /
+ * First try to read tuple for each worker (including leader) into nowait
+ * mode, so that we initialize read from each worker as well as leader.
I wonder if it would be good to standardise on the terminology we use
when we mean workers AND the leader. In my Parallel Shared Hash work,
I've been saying 'participants' if I mean = workers + leader. What do
you think?
+ * After this if all active worker unable to produce the tuple, then
+ * re-read and this time read the tuple into wait mode. For the worker,
+ * which was able to produced single tuple in the earlier loop and still
+ * active, just try fill the tuple array if more tuples available.
+ */
How about this? "After this, if all active workers are unable to
produce a tuple, then re-read and this time use wait mode. For workers
that were able to produce a tuple in the earlier loop and are still
active, just try to fill the tuple array if more tuples are
available."
+ * The heap is never spilled to disk, since we assume N is not very large. So
+ * this is much simple then cost_sort.
s/much simple then/much simpler than/
+ /*
+ * Avoid log(0)...
+ */
+ N = (path->num_workers < 2) ? 2.0 : (double) path->num_workers;
+ logN = LOG2(N);
...
+ /* Per-tuple heap maintenance cost */
+ run_cost += path->path.rows * comparison_cost * 2.0 * logN;
Why multiply by two? The comment above this code says "about log2(N)
comparisons to delete the top heap entry and another log2(N)
comparisons to insert its successor". In fact gather_merge_getnext
calls binaryheap_replace_first, which replaces the top element without
any comparisons at all and then performs a sift-down in log2(N)
comparisons to find its new position. There is no per-tuple "delete"
involved. We "replace" the top element with the value it already had,
just to trigger the sift-down, because we know that our comparator
function might have a new opinion of the sort order of this element.
Very clever! The comment and the 2.0 factor in cost_gather_merge seem
to be wrong though -- or am I misreading the code?
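(For reference, a minimal sketch of the pattern being described; the
binaryheap call is the real API from lib/binaryheap.h, but the
surrounding names are illustrative assumptions, not the patch's code:)

    /*
     * Reader i's queue has yielded a new current tuple.  Its heap entry
     * (the reader index) is unchanged, but the comparator may now rank
     * it differently, so re-seed the top and let a single sift-down --
     * about log2(N) comparisons -- restore the heap property.
     */
    binaryheap_replace_first(gm_state->gm_heap, Int32GetDatum(i));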
Also, shouldn't we add 1 to N to account for the leader? Suppose
there are 2 workers. There are 3 elements in the binary heap. The
element to be sifted down must be compared against either 1 or 2
others to reorganise the heap. Surely in that case we should estimate
log2(3) = ~1.58 comparisons, not log2(2) = 1 comparison.
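(To make that concrete, a hedged sketch of what the adjusted costing
might look like, combining both points above; this is an assumption
about the fix, not code from the patch:)

    /* The leader also participates, so the heap holds N + 1 entries */
    N = (double) path->num_workers + 1;
    logN = LOG2(N);
    /* One sift-down per returned tuple: ~log2(N) comparisons, not 2x */
    run_cost += path->path.rows * comparison_cost * logN;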
I suspect that the leader's contribution will be equivalent to a whole
worker if the plan involves a sort: as soon as the leader pulls a
tuple in gather_merge_init, the sort node will pull all the tuples it
can in a tight loop. It's unfortunate that cost_seqscan has to
estimate what the leader's contribution will be without knowing
whether it has a "greedy" high-startup-cost consumer like a sort or
hash node where the leader will contribute a whole backend's full
attention as soon as it executes the plan, or a lazy consumer where
the leader will probably not contribute much if there are enough
workers to keep it distracted. In the case of a Gather Merge -> Sort
-> Parallel Seq Scan plan, I think we will overestimate the number of
rows (per participant), because cost_seqscan will guess that the
leader is spending 30% of its time per worker servicing the workers,
when in fact it will be sucking tuples into a sort node just as fast
as anyone else. But I don't see what this patch can do about that...
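(For context, the 30% figure comes from the leader-contribution
estimate in cost_seqscan, which looks roughly like this -- quoted from
memory, so treat the details as approximate:)

    double parallel_divisor = path->parallel_workers;
    double leader_contribution;

    /* Assume the leader spends ~30% of its time per worker on them */
    leader_contribution = 1.0 - (0.3 * path->parallel_workers);
    if (leader_contribution > 0)
        parallel_divisor += leader_contribution;
    path->rows = clamp_row_est(path->rows / parallel_divisor);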
+ * When force is true, function reads the tuple into wait mode. For gather
+ * merge we need to fill the slot from which we returned the earlier tuple, so
+ * this require tuple to be read into wait mode. During initialization phase,
+ * once we try to read the tuple into no-wait mode as we want to initialize all
+ * the readers. Refer gather_merge_init() for more details.
+ *
+ * Function returns true if found tuple for the reader, otherwise returns
+ * false.
+ */
+static bool
+gather_merge_readnext(GatherMergeState *gm_state, int reader, bool force)
s/into wait mode/in wait mode/
This appears throughout the comments; not sure if I can explain this
well but "in wait mode" describes a state of being which is wanted
here, "into wait mode" describes some kind of change or movement or
insertion.
Perhaps it would be better to say "reads the tuple _queue_ in wait
mode", just to make clearer that this is talking about the wait/nowait
feature of tuple queues, and perhaps also note that the leader always
waits since it executes the plan.
Maybe we should use "bool nowait" here anway, mirroring the TupleQueue
interface? Why introduce another terminology for the same thing with
inverted sense?
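(Concretely, something like the following, mirroring the existing
tuple queue interface -- a suggested shape, not code from the patch:)

    static bool
    gather_merge_readnext(GatherMergeState *gm_state, int reader,
                          bool nowait);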
+/*
+ * Read the tuple for given reader into nowait mode, and form the tuple array.
+ */
+static void
+form_tuple_array(GatherMergeState *gm_state, int reader)
This function is strangely named. How about try_to_fill_tuple_buffer
or something?
+ GMReaderTuple *gm_tuple = &gm_state->gm_tuple[reader];
I wonder if the purpose of gm_tuple would be clearer if it were
called gm_tuple_buffers. Plural because it holds one buffer per
reader. Then the variable on the left hand side could be
called tuple_buffer (singular), because it's the buffer of tuples for
one single reader.
--
Thomas Munro
http://www.enterprisedb.com
On Fri, Nov 4, 2016 at 12:00 PM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:
+ * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
Shouldn't this say just "(c) 2016, PostgreSQL Global Development
Group"? Are we supposed to be blaming the University of California
for new files?
If the new file contains a portion of code from this age, yes. If
that's something completely new, using only PGDG is fine. At least
that's what I can conclude by looking at git log -p and searching for
"new file mode".
--
Michael
On Thu, Nov 3, 2016 at 11:00 PM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:
+ /*
+ * Avoid log(0)...
+ */
+ N = (path->num_workers < 2) ? 2.0 : (double) path->num_workers;
+ logN = LOG2(N);
...
+ /* Per-tuple heap maintenance cost */
+ run_cost += path->path.rows * comparison_cost * 2.0 * logN;
Why multiply by two? The comment above this code says "about log2(N)
comparisons to delete the top heap entry and another log2(N)
comparisons to insert its successor". In fact gather_merge_getnext
calls binaryheap_replace_first, which replaces the top element without
any comparisons at all and then performs a sift-down in log2(N)
comparisons to find its new position. There is no per-tuple "delete"
involved. We "replace" the top element with the value it already had,
just to trigger the sift-down, because we know that our comparator
function might have a new opinion of the sort order of this element.
Very clever! The comment and the 2.0 factor in cost_gather_merge seem
to be wrong though -- or am I misreading the code?
See cost_merge_append, and the header comments thereto.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Michael Paquier <michael.paquier@gmail.com> writes:
On Fri, Nov 4, 2016 at 12:00 PM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:Shouldn't this say just "(c) 2016, PostgreSQL Global Development
Group"? Are we supposed to be blaming the University of California
for new files?
If the new file contains a portion of code from this age, yes.
My habit has been to include the whole old copyright if there's anything
at all in the new file that could be considered to be copy-and-paste from
an existing file. Frequently it's a gray area.
Legally, I doubt anyone cares much. Morally, I see it as paying due
respect to those who came before us in this project.
regards, tom lane
On Sat, Nov 5, 2016 at 1:55 AM, Robert Haas <robertmhaas@gmail.com> wrote:
On Thu, Nov 3, 2016 at 11:00 PM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:+ /* + * Avoid log(0)... + */ + N = (path->num_workers < 2) ? 2.0 : (double) path->num_workers; + logN = LOG2(N); ... + /* Per-tuple heap maintenance cost */ + run_cost += path->path.rows * comparison_cost * 2.0 * logN;Why multiply by two? The comment above this code says "about log2(N)
comparisons to delete the top heap entry and another log2(N)
comparisons to insert its successor". In fact gather_merge_getnext
calls binaryheap_replace_first, which replaces the top element without
any comparisons at all and then performs a sift-down in log2(N)
comparisons to find its new position. There is no per-tuple "delete"
involved. We "replace" the top element with the value it already had,
just to trigger the sift-down, because we know that our comparator
function might have a new opinion of the sort order of this element.
Very clever! The comment and the 2.0 factor in cost_gather_merge seem
to be wrong though -- or am I misreading the code?
See cost_merge_append, and the header comments thereto.
I see. So commit 7a2fe9bd got rid of the delete/insert code
(heap_siftup_slot and heap_insert_slot) and introduced
binaryheap_replace_first which does it in one step, but the costing
wasn't adjusted and still thinks we pay comparison_cost * logN twice.
--
Thomas Munro
http://www.enterprisedb.com
On Sat, Nov 5, 2016 at 2:42 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Michael Paquier <michael.paquier@gmail.com> writes:
On Fri, Nov 4, 2016 at 12:00 PM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:Shouldn't this say just "(c) 2016, PostgreSQL Global Development
Group"? Are we supposed to be blaming the University of California
for new files?
If the new file contains a portion of code from this age, yes.
My habit has been to include the whole old copyright if there's anything
at all in the new file that could be considered to be copy-and-paste from
an existing file. Frequently it's a gray area.
Thanks. I see that it's warranted in this case, as code is recycled
from MergeAppend.
Legally, I doubt anyone cares much. Morally, I see it as paying due
respect to those who came before us in this project.
+1
--
Thomas Munro
http://www.enterprisedb.com
On Fri, Nov 4, 2016 at 8:30 AM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:
On Thu, Oct 27, 2016 at 10:50 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:Please find attached latest patch which fix the review point as well as
additional clean-up.+/* + * Read the tuple for given reader into nowait mode, and form the tuple array. + */ +static void +form_tuple_array(GatherMergeState *gm_state, int reader)This function is stangely named. How about try_to_fill_tuple_buffer
or something?
Hmm. We discussed upthread naming it form_tuple_array. Now
you feel that is also not good; I think it is basically a matter of
perspective, so why not leave it as it is for now and come back
to the naming towards the end of the patch review, or maybe leave
it for the committer to decide.
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
On Fri, Nov 4, 2016 at 8:30 AM, Thomas Munro <thomas.munro@enterprisedb.com>
wrote:
On Thu, Oct 27, 2016 at 10:50 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:Please find attached latest patch which fix the review point as well as
additional clean-up.I've signed up to review this patch and I'm planning to do some
testing. Here's some initial feedback after a quick read-through:
Thanks Thomas.
+ if (gather_merge_readnext(gm_state, i, initialize ? false : true))
Clunky ternary operator... how about "!initialize".
Fixed.
+/*
+ * Function clear out a slot in the tuple table for each gather merge
+ * slots and returns the clear clear slot.
+ */
Maybe better like this? "_Clear_ out a slot in the tuple table for
each gather merge _slot_ and _return_ the _cleared_ slot."
Fixed.
+ /* Free tuple array as we no more need it */
"... as we don't need it any more"
Fixed
+/*
+ * Read the next tuple for gather merge.
+ *
+ * Function fetch the sorted tuple out of the heap.
+ */
"_Fetch_ the sorted tuple out of the heap."
Fixed
+ * Otherwise, pull the next tuple from whichever participate we
+ * returned from last time, and reinsert the index into the heap,
+ * because it might now compare differently against the existing
s/participate/participant/
Fixed.
+ * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
Shouldn't this say just "(c) 2016, PostgreSQL Global Development
Group"?
Fixed.
Are we supposed to be blaming the University of California
for new files?
Not quite sure about this, so keeping this as it is.
+#include "executor/tqueue.h" +#include "miscadmin.h" +#include "utils/memutils.h" +#include "utils/rel.h" +#include "lib/binaryheap.h"Not correctly sorted.
Copied from nodeGather.c, but fixed here.
+ /*
+ * store the tuple descriptor into gather merge state, so we can use it
+ * later while initilizing the gather merge slots.
+ */
s/initilizing/initializing/
Fixed.
+/* ----------------------------------------------------------------
+ * ExecEndGatherMerge
+ *
+ * frees any storage allocated through C routines.
+ * ----------------------------------------------------------------
The convention in Postgres code seems to be to use a form like "Free
any storage ..." in function documentation. Not sure if that's an
imperative, an infinitive, or if the word "we" is omitted since
English is so fuzzy like that, but it's inconsistent with other
documentation to use "frees" here. Oh, I see that exact wording is in
several other files. I guess I'll just leave this as a complaint
about all those files then :-)
Sure.
+ * Pull atleast single tuple from each worker + leader and set up the
heap.
s/atleast single/at least a single/
Fixed.
+ * Read the tuple for given reader into nowait mode, and form the tuple
array.
s/ into / in /
Fixed.
+ * Function attempt to read tuple for the given reader and store it into
reader
s/Function attempt /Attempt /
Fixed.
+ * Function returns true if found tuple for the reader, otherwise returns
s/Function returns /Return /
Fixed.
+ * First try to read tuple for each worker (including leader) into nowait
+ * mode, so that we initialize read from each worker as well as leader.
I wonder if it would be good to standardise on the terminology we use
when we mean workers AND the leader. In my Parallel Shared Hash work,
I've been saying 'participants' if I mean = workers + leader. What do
you think?
I am not quite sure about participants. In my opinion, when we
explicitly say workers + leader it's clearer. I am open to changing it
if the committer thinks otherwise.
+ * After this if all active worker unable to produce the tuple, then
+ * re-read and this time read the tuple into wait mode. For the worker,
+ * which was able to produced single tuple in the earlier loop and still
+ * active, just try fill the tuple array if more tuples available.
+ */
How about this? "After this, if all active workers are unable to
produce a tuple, then re-read and this time use wait mode. For workers
that were able to produce a tuple in the earlier loop and are still
active, just try to fill the tuple array if more tuples are
available."
Fixed.
+ * The heap is never spilled to disk, since we assume N is not very large. So
+ * this is much simple then cost_sort.
s/much simple then/much simpler than/
Fixed.
+ /*
+ * Avoid log(0)...
+ */
+ N = (path->num_workers < 2) ? 2.0 : (double) path->num_workers;
+ logN = LOG2(N);
...
+ /* Per-tuple heap maintenance cost */
+ run_cost += path->path.rows * comparison_cost * 2.0 * logN;
Why multiply by two? The comment above this code says "about log2(N)
comparisons to delete the top heap entry and another log2(N)
comparisons to insert its successor". In fact gather_merge_getnext
calls binaryheap_replace_first, which replaces the top element without
any comparisons at all and then performs a sift-down in log2(N)
comparisons to find its new position. There is no per-tuple "delete"
involved. We "replace" the top element with the value it already had,
just to trigger the sift-down, because we know that our comparator
function might have a new opinion of the sort order of this element.
Very clever! The comment and the 2.0 factor in cost_gather_merge seem
to be wrong though -- or am I misreading the code?
See cost_merge_append.
Also, shouldn't we add 1 to N to account for the leader? Suppose
there are 2 workers. There are 3 elements in the binary heap. The
element to be sifted down must be compared against either 1 or 2
others to reorganise the heap. Surely in that case we should estimate
log2(3) = ~1.58 comparisons, not log2(2) = 1 comparison.
Yes, good catch. For Gather Merge the leader always participates, so
we should use num_workers + 1.
I suspect that the leader's contribution will be equivalent to a whole
worker if the plan involves a sort: as soon as the leader pulls a
tuple in gather_merge_init, the sort node will pull all the tuples it
can in a tight loop. It's unfortunate that cost_seqscan has to
estimate what the leader's contribution will be without knowing
whether it has a "greedy" high-startup-cost consumer like a sort or
hash node where the leader will contribute a whole backend's full
attention as soon as it executes the plan, or a lazy consumer where
the leader will probably not contribute much if there are enough
workers to keep it distracted. In the case of a Gather Merge -> Sort
-> Parallel Seq Scan plan, I think we will overestimate the number of
rows (per participant), because cost_seqscan will guess that the
leader is spending 30% of its time per worker servicing the workers,
when in fact it will be sucking tuples into a sort node just as fast
as anyone else. But I don't see what this patch can do about that...
Exactly. There is a very thin line when it comes to calculating the
cost. In general, while calculating the cost for GM, I just tried to
stay similar to Gather + MergeAppend.
+ * When force is true, function reads the tuple into wait mode. For gather
+ * merge we need to fill the slot from which we returned the earlier tuple, so
+ * this require tuple to be read into wait mode. During initialization phase,
+ * once we try to read the tuple into no-wait mode as we want to initialize all
+ * the readers. Refer gather_merge_init() for more details.
+ *
+ * Function returns true if found tuple for the reader, otherwise returns
+ * false.
+ */
+static bool
+gather_merge_readnext(GatherMergeState *gm_state, int reader, bool force)
s/into wait mode/in wait mode/
This appears throughout the comments; not sure if I can explain this
well but "in wait mode" describes a state of being which is wanted
here, "into wait mode" describes some kind of change or movement or
insertion.
Perhaps it would be better to say "reads the tuple _queue_ in wait
mode", just to make clearer that this is talking about the wait/nowait
feature of tuple queues, and perhaps also note that the leader always
waits since it executes the plan.
Fixed. I just chose to do s/into wait mode/in wait mode/.
Maybe we should use "bool nowait" here anway, mirroring the TupleQueue
interface? Why introduce another terminology for the same thing with
inverted sense?
Agree with you. Changed the gm_readnext_tuple() and
gather_merge_readnext() APIs accordingly.
+/*
+ * Read the tuple for given reader into nowait mode, and form the tuple array.
+ */
+static void
+form_tuple_array(GatherMergeState *gm_state, int reader)
This function is strangely named. How about try_to_fill_tuple_buffer
or something?
+ GMReaderTuple *gm_tuple = &gm_state->gm_tuple[reader];
I wonder if the purpose of gm_tuple, would be clearer if it were
called gm_tuple_buffers. Plural because it holds one buffer per
reader. Then in that variable on the left hand side there could be
called tuple_buffer (singular), because it's the buffer of tuples for
one single reader.
Yes, you are right. I renamed the variable as well as the structure.
PFA the latest patch which addresses the review comments as
well as a few other clean-ups.
Apart from this, my colleague Rafia Sabih reported one regression with
GM: if we set work_mem high enough to accommodate the sort
operation, the GM path gets selected even though Sort performs much
better.
Example:
create table t (i int);
insert into t values(generate_series(1,10000000));
set work_mem =1024000;
explain analyze select * from t order by i;
set enable_gathermerge =off;
explain analyze select * from t order by i;
postgres=# explain analyze select * from t order by i;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
Gather Merge (cost=335916.26..648415.76 rows=2499996 width=4)
(actual time=2234.145..7628.555 rows=10000000 loops=1)
Workers Planned: 4
Workers Launched: 4
-> Sort (cost=334916.22..341166.21 rows=2499996 width=4) (actual
time=2226.609..2611.041 rows=2000000 loops=5)
Sort Key: i
Sort Method: quicksort Memory: 147669kB
-> Parallel Seq Scan on t (cost=0.00..69247.96 rows=2499996
width=4) (actual time=0.034..323.129 rows=2000000 loops=5)
Planning time: 0.061 ms
Execution time: 8143.809 ms
(9 rows)
postgres=# set enable_gathermerge = off;
SET
postgres=# explain analyze select * from t order by i;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
Sort (cost=1306920.83..1331920.79 rows=9999985 width=4) (actual
time=3521.143..4854.148 rows=10000000 loops=1)
Sort Key: i
Sort Method: quicksort Memory: 854075kB
-> Seq Scan on t (cost=0.00..144247.85 rows=9999985 width=4)
(actual time=0.113..1340.758 rows=10000000 loops=1)
Planning time: 0.100 ms
Execution time: 5535.560 ms
(6 rows)
Looking at the plan I realized that this is happening because of wrong
costing for Gather Merge. In the plan we can see that the row count
estimated by Gather Merge is wrong. This is because the earlier patch
had GM considering rows = subpath->rows, which is not correct as the
subpath is a partial path; we need to multiply it by the number of
workers. The attached patch also fixes this issue. I also ran the
TPC-H benchmark with the patch and the results are the same as earlier.
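(A sketch of the corrected estimate in create_gather_merge_path; the
exact treatment of the leader's share is an assumption here:)

    /* subpath->rows is a per-worker estimate; scale it back up */
    pathnode->path.rows = subpath->rows * subpath->parallel_workers;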
Thanks,
Rushabh Lathia
www.EnterpriseDB.com
Oops, forgot to attach the latest patch in the earlier mail.
--
Rushabh Lathia
Attachments:
gather_merge_v4.patch (application/x-download)
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 0a669d9..73cfe28 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -881,6 +881,9 @@ ExplainNode(PlanState *planstate, List *ancestors,
case T_Gather:
pname = sname = "Gather";
break;
+ case T_GatherMerge:
+ pname = sname = "Gather Merge";
+ break;
case T_IndexScan:
pname = sname = "Index Scan";
break;
@@ -1370,6 +1373,26 @@ ExplainNode(PlanState *planstate, List *ancestors,
ExplainPropertyBool("Single Copy", gather->single_copy, es);
}
break;
+ case T_GatherMerge:
+ {
+ GatherMerge *gm = (GatherMerge *) plan;
+
+ show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ if (plan->qual)
+ show_instrumentation_count("Rows Removed by Filter", 1,
+ planstate, es);
+ ExplainPropertyInteger("Workers Planned",
+ gm->num_workers, es);
+ if (es->analyze)
+ {
+ int nworkers;
+
+ nworkers = ((GatherMergeState *) planstate)->nworkers_launched;
+ ExplainPropertyInteger("Workers Launched",
+ nworkers, es);
+ }
+ }
+ break;
case T_FunctionScan:
if (es->verbose)
{
diff --git a/src/backend/executor/Makefile b/src/backend/executor/Makefile
index 51edd4c..7e2f4e2 100644
--- a/src/backend/executor/Makefile
+++ b/src/backend/executor/Makefile
@@ -19,7 +19,7 @@ OBJS = execAmi.o execCurrent.o execGrouping.o execIndexing.o execJunk.o \
nodeBitmapAnd.o nodeBitmapOr.o \
nodeBitmapHeapscan.o nodeBitmapIndexscan.o nodeCustom.o nodeGather.o \
nodeHash.o nodeHashjoin.o nodeIndexscan.o nodeIndexonlyscan.o \
- nodeLimit.o nodeLockRows.o \
+ nodeLimit.o nodeLockRows.o nodeGatherMerge.o \
nodeMaterial.o nodeMergeAppend.o nodeMergejoin.o nodeModifyTable.o \
nodeNestloop.o nodeFunctionscan.o nodeRecursiveunion.o nodeResult.o \
nodeSamplescan.o nodeSeqscan.o nodeSetOp.o nodeSort.o nodeUnique.o \
diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index 554244f..45b36af 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -101,6 +101,7 @@
#include "executor/nodeModifyTable.h"
#include "executor/nodeNestloop.h"
#include "executor/nodeGather.h"
+#include "executor/nodeGatherMerge.h"
#include "executor/nodeRecursiveunion.h"
#include "executor/nodeResult.h"
#include "executor/nodeSamplescan.h"
@@ -314,6 +315,11 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
estate, eflags);
break;
+ case T_GatherMerge:
+ result = (PlanState *) ExecInitGatherMerge((GatherMerge *) node,
+ estate, eflags);
+ break;
+
case T_Hash:
result = (PlanState *) ExecInitHash((Hash *) node,
estate, eflags);
@@ -515,6 +521,10 @@ ExecProcNode(PlanState *node)
result = ExecGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ result = ExecGatherMerge((GatherMergeState *) node);
+ break;
+
case T_HashState:
result = ExecHash((HashState *) node);
break;
@@ -673,6 +683,10 @@ ExecEndNode(PlanState *node)
ExecEndGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ ExecEndGatherMerge((GatherMergeState *) node);
+ break;
+
case T_IndexScanState:
ExecEndIndexScan((IndexScanState *) node);
break;
@@ -806,6 +820,9 @@ ExecShutdownNode(PlanState *node)
case T_GatherState:
ExecShutdownGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ ExecShutdownGatherMerge((GatherMergeState *) node);
+ break;
default:
break;
}
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
new file mode 100644
index 0000000..4b6410b
--- /dev/null
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -0,0 +1,723 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.c
+ * routines to handle GatherMerge nodes.
+ *
+ * Portions Copyright (c) 1996-2016, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/backend/executor/nodeGatherMerge.c
+ *
+ *-------------------------------------------------------------------------
+ */
+/* INTERFACE ROUTINES
+ * ExecInitGatherMerge - initialize the GatherMerge node
+ * ExecGatherMerge - retrieve the next tuple from the node
+ * ExecEndGatherMerge - shut down the GatherMerge node
+ * ExecReScanGatherMerge - rescan the GatherMerge node
+ */
+
+#include "postgres.h"
+
+#include "access/relscan.h"
+#include "access/xact.h"
+#include "executor/execdebug.h"
+#include "executor/execParallel.h"
+#include "executor/nodeGatherMerge.h"
+#include "executor/nodeSubplan.h"
+#include "executor/tqueue.h"
+#include "lib/binaryheap.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+#include "utils/rel.h"
+
+/*
+ * Tuple array for each worker
+ */
+typedef struct GMReaderTupleBuffer
+{
+ HeapTuple *tuple;
+ int readCounter;
+ int nTuples;
+ bool done;
+} GMReaderTupleBuffer;
+
+/*
+ * Tuple array size. Various performance tests showed that the benefit
+ * of an array size greater than 10 is not worth the extra memory
+ * consumed by the tuple array.
+ */
+#define MAX_TUPLE_STORE 10
+
+static int32 heap_compare_slots(Datum a, Datum b, void *arg);
+static TupleTableSlot *gather_merge_getnext(GatherMergeState *gm_state);
+static HeapTuple gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait, bool *done);
+static void gather_merge_init(GatherMergeState *gm_state);
+static void ExecShutdownGatherMergeWorkers(GatherMergeState *node);
+static bool gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait);
+static void form_tuple_array(GatherMergeState *gm_state, int reader);
+
+/* ----------------------------------------------------------------
+ * ExecInitGatherMerge
+ * ----------------------------------------------------------------
+ */
+GatherMergeState *
+ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags)
+{
+ GatherMergeState *gm_state;
+ Plan *outerNode;
+ bool hasoid;
+ TupleDesc tupDesc;
+
+ /* Gather Merge node doesn't have an innerPlan. */
+ Assert(innerPlan(node) == NULL);
+
+ /*
+ * create state structure
+ */
+ gm_state = makeNode(GatherMergeState);
+ gm_state->ps.plan = (Plan *) node;
+ gm_state->ps.state = estate;
+
+ /*
+ * Miscellaneous initialization
+ *
+ * create expression context for node
+ */
+ ExecAssignExprContext(estate, &gm_state->ps);
+
+ /*
+ * initialize child expressions
+ */
+ gm_state->ps.targetlist = (List *)
+ ExecInitExpr((Expr *) node->plan.targetlist,
+ (PlanState *) gm_state);
+ gm_state->ps.qual = (List *)
+ ExecInitExpr((Expr *) node->plan.qual,
+ (PlanState *) gm_state);
+
+ /*
+ * tuple table initialization
+ */
+ ExecInitResultTupleSlot(estate, &gm_state->ps);
+
+ /*
+ * now initialize outer plan
+ */
+ outerNode = outerPlan(node);
+ outerPlanState(gm_state) = ExecInitNode(outerNode, estate, eflags);
+
+ gm_state->ps.ps_TupFromTlist = false;
+
+ /*
+ * Initialize result tuple type and projection info.
+ */
+ ExecAssignResultTypeFromTL(&gm_state->ps);
+ ExecAssignProjectionInfo(&gm_state->ps, NULL);
+
+ gm_state->gm_initialized = false;
+
+ /*
+ * initialize sort-key information
+ */
+ if (node->numCols)
+ {
+ int i;
+
+ gm_state->gm_nkeys = node->numCols;
+ gm_state->gm_sortkeys = palloc0(sizeof(SortSupportData) * node->numCols);
+ for (i = 0; i < node->numCols; i++)
+ {
+ SortSupport sortKey = gm_state->gm_sortkeys + i;
+
+ sortKey->ssup_cxt = CurrentMemoryContext;
+ sortKey->ssup_collation = node->collations[i];
+ sortKey->ssup_nulls_first = node->nullsFirst[i];
+ sortKey->ssup_attno = node->sortColIdx[i];
+
+ /*
+ * We don't perform abbreviated key conversion here, for the same
+ * reasons that it isn't used in MergeAppend
+ */
+ sortKey->abbreviate = false;
+
+ PrepareSortSupportFromOrderingOp(node->sortOperators[i], sortKey);
+ }
+ }
+
+ /*
+ * store the tuple descriptor into gather merge state, so we can use it
+ * later while initializing the gather merge slots.
+ */
+ if (!ExecContextForcesOids(&gm_state->ps, &hasoid))
+ hasoid = false;
+ tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid);
+ gm_state->tupDesc = tupDesc;
+
+ return gm_state;
+}
+
+/* ----------------------------------------------------------------
+ * ExecGatherMerge(node)
+ *
+ * Scans the relation via multiple workers and returns
+ * the next qualifying tuple.
+ * ----------------------------------------------------------------
+ */
+TupleTableSlot *
+ExecGatherMerge(GatherMergeState *node)
+{
+ int i;
+ TupleTableSlot *slot;
+ TupleTableSlot *resultSlot;
+ ExprDoneCond isDone;
+ ExprContext *econtext;
+
+ /*
+ * Initialize the parallel context and workers on first execution. We do
+ * this on first execution rather than during node initialization, as it
+ * needs to allocate large dynamic segment, so it is better to do if it is
+ * really needed.
+ */
+ if (!node->initialized)
+ {
+ EState *estate = node->ps.state;
+ GatherMerge *gm = (GatherMerge *) node->ps.plan;
+
+ /*
+ * Sometimes we might have to run without parallelism; but if parallel
+ * mode is active then we can try to fire up some workers.
+ */
+ if (gm->num_workers > 0 && IsInParallelMode())
+ {
+ ParallelContext *pcxt;
+
+ /* Initialize the workers required to execute the Gather Merge node. */
+ if (!node->pei)
+ node->pei = ExecInitParallelPlan(node->ps.lefttree,
+ estate,
+ gm->num_workers);
+
+ /*
+ * Register backend workers. We might not get as many as we
+ * requested, or indeed any at all.
+ */
+ pcxt = node->pei->pcxt;
+ LaunchParallelWorkers(pcxt);
+ node->nworkers_launched = pcxt->nworkers_launched;
+
+ /* Set up tuple queue readers to read the results. */
+ if (pcxt->nworkers_launched > 0)
+ {
+ node->nreaders = 0;
+ node->reader =
+ palloc(pcxt->nworkers_launched * sizeof(TupleQueueReader *));
+
+ Assert(gm->numCols);
+
+ for (i = 0; i < pcxt->nworkers_launched; ++i)
+ {
+ shm_mq_set_handle(node->pei->tqueue[i],
+ pcxt->worker[i].bgwhandle);
+ node->reader[node->nreaders++] =
+ CreateTupleQueueReader(node->pei->tqueue[i],
+ node->tupDesc);
+ }
+ }
+ else
+ {
+ /* No workers? Then never mind. */
+ ExecShutdownGatherMergeWorkers(node);
+ }
+ }
+
+ /* always allow the leader to participate in the gather merge */
+ node->need_to_scan_locally = true;
+ node->initialized = true;
+ }
+
+ /*
+ * Check to see if we're still projecting out tuples from a previous scan
+ * tuple (because there is a function-returning-set in the projection
+ * expressions). If so, try to project another one.
+ */
+ if (node->ps.ps_TupFromTlist)
+ {
+ resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+ if (isDone == ExprMultipleResult)
+ return resultSlot;
+ /* Done with that source tuple... */
+ node->ps.ps_TupFromTlist = false;
+ }
+
+ /*
+ * Reset per-tuple memory context to free any expression evaluation
+ * storage allocated in the previous tuple cycle. Note we can't do this
+ * until we're done projecting.
+ */
+ econtext = node->ps.ps_ExprContext;
+ ResetExprContext(econtext);
+
+ /* Get and return the next tuple, projecting if necessary. */
+ for (;;)
+ {
+ /*
+ * Get next tuple, either from one of our workers, or by running the
+ * plan ourselves.
+ */
+ slot = gather_merge_getnext(node);
+ if (TupIsNull(slot))
+ return NULL;
+
+ /*
+ * form the result tuple using ExecProject(), and return it --- unless
+ * the projection produces an empty set, in which case we must loop
+ * back around for another tuple
+ */
+ econtext->ecxt_outertuple = slot;
+ resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+
+ if (isDone != ExprEndResult)
+ {
+ node->ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
+ return resultSlot;
+ }
+ }
+
+ return slot;
+}
+
+/* ----------------------------------------------------------------
+ * ExecEndGatherMerge
+ *
+ * frees any storage allocated through C routines.
+ * ----------------------------------------------------------------
+ */
+void
+ExecEndGatherMerge(GatherMergeState *node)
+{
+ ExecShutdownGatherMerge(node);
+ ExecFreeExprContext(&node->ps);
+ ExecClearTuple(node->ps.ps_ResultTupleSlot);
+ ExecEndNode(outerPlanState(node));
+}
+
+/* ----------------------------------------------------------------
+ * ExecShutdownGatherMerge
+ *
+ * Destroy the setup for parallel workers including parallel context.
+ * Collect all the stats after workers are stopped, else some work
+ * done by workers won't be accounted.
+ * ----------------------------------------------------------------
+ */
+void
+ExecShutdownGatherMerge(GatherMergeState *node)
+{
+ ExecShutdownGatherMergeWorkers(node);
+
+ /* Now destroy the parallel context. */
+ if (node->pei != NULL)
+ {
+ ExecParallelCleanup(node->pei);
+ node->pei = NULL;
+ }
+}
+
+/* ----------------------------------------------------------------
+ * ExecReScanGatherMerge
+ *
+ * Re-initialize the workers and rescans a relation via them.
+ * ----------------------------------------------------------------
+ */
+void
+ExecReScanGatherMerge(GatherMergeState *node)
+{
+ /*
+ * Re-initialize the parallel workers to perform rescan of relation. We
+ * want to gracefully shutdown all the workers so that they should be able
+ * to propagate any error or other information to master backend before
+ * dying. Parallel context will be reused for rescan.
+ */
+ ExecShutdownGatherMergeWorkers(node);
+
+ node->initialized = false;
+
+ if (node->pei)
+ ExecParallelReinitialize(node->pei);
+
+ ExecReScan(node->ps.lefttree);
+}
+
+/* ----------------------------------------------------------------
+ * ExecShutdownGatherMergeWorkers
+ *
+ * Destroy the parallel workers. Collect all the stats after
+ * workers are stopped, else some work done by workers won't be
+ * accounted.
+ * ----------------------------------------------------------------
+ */
+static void
+ExecShutdownGatherMergeWorkers(GatherMergeState *node)
+{
+ /* Shut down tuple queue readers before shutting down workers. */
+ if (node->reader != NULL)
+ {
+ int i;
+
+ for (i = 0; i < node->nreaders; ++i)
+ if (node->reader[i])
+ DestroyTupleQueueReader(node->reader[i]);
+
+ pfree(node->reader);
+ node->reader = NULL;
+ }
+
+ /* Now shut down the workers. */
+ if (node->pei != NULL)
+ ExecParallelFinish(node->pei);
+}
+
+/*
+ * Initialize the Gather Merge tuple read.
+ *
+ * Pull at least a single tuple from each worker plus the leader and set up
+ * the heap.
+ */
+static void
+gather_merge_init(GatherMergeState *gm_state)
+{
+ int nreaders = gm_state->nreaders;
+ bool initialize = true;
+ int i;
+
+ /*
+ * Allocate gm_slots: one slot per worker, plus one more for the leader.
+ * The last slot is always the leader's. The leader reads tuples by
+ * calling ExecProcNode(), which returns a TupleTableSlot that gets
+ * assigned directly to its gm_slot, so the leader's slot is simply
+ * initialized to NULL. For the worker slots, the code below calls
+ * ExecInitExtraTupleSlot() to initialize them.
+ */
+ gm_state->gm_slots =
+ palloc((gm_state->nreaders + 1) * sizeof(TupleTableSlot *));
+ gm_state->gm_slots[gm_state->nreaders] = NULL;
+
+ /* Initialize the tuple slot and tuple array for each worker */
+ gm_state->gm_tuple_buffers =
+ (GMReaderTupleBuffer *) palloc0(sizeof(GMReaderTupleBuffer) * (gm_state->nreaders + 1));
+ for (i = 0; i < gm_state->nreaders; i++)
+ {
+ /* Allocate the tuple array with MAX_TUPLE_STORE size */
+ gm_state->gm_tuple_buffers[i].tuple =
+ (HeapTuple *) palloc0(sizeof(HeapTuple) * MAX_TUPLE_STORE);
+
+ /* Initialize slot for worker */
+ gm_state->gm_slots[i] = ExecInitExtraTupleSlot(gm_state->ps.state);
+ ExecSetSlotDescriptor(gm_state->gm_slots[i],
+ gm_state->tupDesc);
+ }
+
+ /* Allocate the resources for the sort */
+ gm_state->gm_heap = binaryheap_allocate(gm_state->nreaders + 1, heap_compare_slots, gm_state);
+
+ /*
+ * First, try to read a tuple from each worker (including the leader) in
+ * nowait mode, so that reading is initialized for every participant.
+ * After this, if any active worker was unable to produce a tuple, re-read
+ * from it, this time in wait mode. For workers that produced a tuple in
+ * the earlier loop and are still active, just try to fill the tuple array
+ * if more tuples are available.
+ */
+reread:
+ for (i = 0; i < nreaders + 1; i++)
+ {
+ if (!gm_state->gm_tuple_buffers[i].done &&
+ (TupIsNull(gm_state->gm_slots[i]) ||
+ gm_state->gm_slots[i]->tts_isempty))
+ {
+ if (gather_merge_readnext(gm_state, i, initialize))
+ {
+ binaryheap_add_unordered(gm_state->gm_heap,
+ Int32GetDatum(i));
+ }
+ }
+ else
+ form_tuple_array(gm_state, i);
+ }
+ initialize = false;
+
+ for (i = 0; i < nreaders; i++)
+ if (!gm_state->gm_tuple_buffers[i].done &&
+ (TupIsNull(gm_state->gm_slots[i]) ||
+ gm_state->gm_slots[i]->tts_isempty))
+ goto reread;
+
+ binaryheap_build(gm_state->gm_heap);
+ gm_state->gm_initialized = true;
+}
+
+/*
+ * Clear out the tuple table slot for each gather merge slot,
+ * and return one of the cleared slots.
+ */
+static TupleTableSlot *
+gather_merge_clear_slots(GatherMergeState *gm_state)
+{
+ int i;
+
+ for (i = 0; i < gm_state->nreaders; i++)
+ {
+ pfree(gm_state->gm_tuple_buffers[i].tuple);
+ gm_state->gm_slots[i] = ExecClearTuple(gm_state->gm_slots[i]);
+ }
+
+ /* Free tuple array as we don't need it any more */
+ pfree(gm_state->gm_tuple_buffers);
+ /* Free the binaryheap, which was created for sort */
+ binaryheap_free(gm_state->gm_heap);
+
+ /* return any clear slot */
+ return gm_state->gm_slots[0];
+}
+
+/*
+ * Read the next tuple for gather merge.
+ *
+ * Fetch the sorted tuple out of the heap.
+ */
+static TupleTableSlot *
+gather_merge_getnext(GatherMergeState *gm_state)
+{
+ int i;
+
+ /*
+ * First time through: pull the first tuple from each participant, and set
+ * up the heap.
+ */
+ if (gm_state->gm_initialized == false)
+ gather_merge_init(gm_state);
+ else
+ {
+ /*
+ * Otherwise, pull the next tuple from whichever participant we
+ * returned from last time, and reinsert the index into the heap,
+ * because it might now compare differently against the existing
+ * elements of the heap.
+ */
+ i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+
+ if (gather_merge_readnext(gm_state, i, false))
+ binaryheap_replace_first(gm_state->gm_heap, Int32GetDatum(i));
+ else
+ (void) binaryheap_remove_first(gm_state->gm_heap);
+ }
+
+ if (binaryheap_empty(gm_state->gm_heap))
+ {
+ /* All the queues are exhausted, and so is the heap */
+ return gather_merge_clear_slots(gm_state);
+ }
+ else
+ {
+ i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+ return gm_state->gm_slots[i];
+ }
+
+ return gather_merge_clear_slots(gm_state);
+}
+
+/*
+ * Read tuples for the given reader in nowait mode, and form the tuple array.
+ */
+static void
+form_tuple_array(GatherMergeState *gm_state, int reader)
+{
+ GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+ int i;
+
+ /* Last slot is for leader and we don't build tuple array for leader */
+ if (reader == gm_state->nreaders)
+ return;
+
+ /*
+ * If we have already read all the tuples from the tuple array, reset the
+ * counters to zero.
+ */
+ if (tuple_buffer->nTuples == tuple_buffer->readCounter)
+ tuple_buffer->nTuples = tuple_buffer->readCounter = 0;
+
+ /* Tuple array is already full? */
+ if (tuple_buffer->nTuples == MAX_TUPLE_STORE)
+ return;
+
+ for (i = tuple_buffer->nTuples; i < MAX_TUPLE_STORE; i++)
+ {
+ tuple_buffer->tuple[i] = heap_copytuple(gm_readnext_tuple(gm_state,
+ reader,
+ false,
+ &tuple_buffer->done));
+ if (!HeapTupleIsValid(tuple_buffer->tuple[i]))
+ break;
+ tuple_buffer->nTuples++;
+ }
+}
+
+/*
+ * Attempt to read a tuple for the given reader and store it into the
+ * reader's tuple slot.
+ *
+ * If the worker's tuple array contains any tuples, just read from the
+ * tuple array. Otherwise read a tuple from the queue and also attempt to
+ * fill the tuple array.
+ *
+ * Gather merge needs to refill the slot from which it returned the
+ * previous tuple, so the tuple normally must be read in wait mode. Only
+ * during the initialization phase do we read tuples in nowait mode, as we
+ * want to initialize all the readers. See gather_merge_init() for more
+ * details.
+ *
+ * Returns true if a tuple was found for the reader, otherwise false.
+ */
+static bool
+gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait)
+{
+ HeapTuple tup = NULL;
+
+ /* Are we here for the leader? */
+ if (gm_state->nreaders == reader)
+ {
+ if (gm_state->need_to_scan_locally)
+ {
+ PlanState *outerPlan = outerPlanState(gm_state);
+ TupleTableSlot *outerTupleSlot;
+
+ outerTupleSlot = ExecProcNode(outerPlan);
+
+ if (!TupIsNull(outerTupleSlot))
+ {
+ gm_state->gm_slots[reader] = outerTupleSlot;
+ return true;
+ }
+ gm_state->gm_tuple_buffers[reader].done = true;
+ gm_state->need_to_scan_locally = false;
+ }
+ return false;
+ }
+ /* Does tuple array have any available tuples? */
+ else if (gm_state->gm_tuple_buffers[reader].nTuples >
+ gm_state->gm_tuple_buffers[reader].readCounter)
+ {
+ GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+
+ tup = tuple_buffer->tuple[tuple_buffer->readCounter++];
+ }
+ /* reader exhausted? */
+ else if (gm_state->gm_tuple_buffers[reader].done)
+ {
+ DestroyTupleQueueReader(gm_state->reader[reader]);
+ gm_state->reader[reader] = NULL;
+ return false;
+ }
+ else
+ {
+ GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+
+ tup = heap_copytuple(gm_readnext_tuple(gm_state,
+ reader,
+ nowait,
+ &tuple_buffer->done));
+
+ /*
+ * Try to read more tuples in nowait mode and store them into the tuple
+ * array.
+ */
+ if (HeapTupleIsValid(tup))
+ form_tuple_array(gm_state, reader);
+ else
+ return false;
+ }
+
+ Assert(HeapTupleIsValid(tup));
+
+ /* Build the TupleTableSlot for the given tuple */
+ ExecStoreTuple(tup, /* tuple to store */
+ gm_state->gm_slots[reader], /* slot in which to store the
+ * tuple */
+ InvalidBuffer, /* buffer associated with this tuple */
+ true); /* pfree this pointer if not from heap */
+
+ return true;
+}
+
+/*
+ * Attempt to read a tuple from given reader.
+ */
+static HeapTuple
+gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait, bool *done)
+{
+ TupleQueueReader *reader;
+ HeapTuple tup = NULL;
+ MemoryContext oldContext;
+ MemoryContext tupleContext = gm_state->ps.ps_ExprContext->ecxt_per_tuple_memory;
+
+ if (done != NULL)
+ *done = false;
+
+ /* Check for async events, particularly messages from workers. */
+ CHECK_FOR_INTERRUPTS();
+
+ /* Attempt to read a tuple. */
+ reader = gm_state->reader[nreader];
+ /* Run TupleQueueReaders in per-tuple context */
+ oldContext = MemoryContextSwitchTo(tupleContext);
+ tup = TupleQueueReaderNext(reader, nowait, done);
+ MemoryContextSwitchTo(oldContext);
+
+ return tup;
+}
+
+/*
+ * We have one slot for each item in the heap array. We use SlotNumber
+ * to store slot indexes. This doesn't actually provide any formal
+ * type-safety, but it makes the code more self-documenting.
+ */
+typedef int32 SlotNumber;
+
+/*
+ * Compare the tuples in the two given slots.
+ */
+static int32
+heap_compare_slots(Datum a, Datum b, void *arg)
+{
+ GatherMergeState *node = (GatherMergeState *) arg;
+ SlotNumber slot1 = DatumGetInt32(a);
+ SlotNumber slot2 = DatumGetInt32(b);
+
+ TupleTableSlot *s1 = node->gm_slots[slot1];
+ TupleTableSlot *s2 = node->gm_slots[slot2];
+ int nkey;
+
+ Assert(!TupIsNull(s1));
+ Assert(!TupIsNull(s2));
+
+ for (nkey = 0; nkey < node->gm_nkeys; nkey++)
+ {
+ SortSupport sortKey = node->gm_sortkeys + nkey;
+ AttrNumber attno = sortKey->ssup_attno;
+ Datum datum1,
+ datum2;
+ bool isNull1,
+ isNull2;
+ int compare;
+
+ datum1 = slot_getattr(s1, attno, &isNull1);
+ datum2 = slot_getattr(s2, attno, &isNull2);
+
+ compare = ApplySortComparator(datum1, isNull1,
+ datum2, isNull2,
+ sortKey);
+ if (compare != 0)
+ return -compare;
+ }
+ return 0;
+}
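
[Annotation -- not part of the patch. For reviewers less familiar with
lib/binaryheap, below is a minimal standalone sketch of the same K-way merge
idea; every name in it is invented for illustration. One subtlety worth
noting in the real code: heap_compare_slots() above returns -compare because
binaryheap is a max-heap, so inverting the comparator keeps the smallest
tuple on top (the same trick nodeMergeAppend.c uses). The sketch just
implements a min-heap directly.]

#include <stdio.h>

#define NSTREAMS 3
#define STREAMLEN 4

/* each "stream" stands in for one participant's sorted output */
static const int data[NSTREAMS][STREAMLEN] = {
    {1, 4, 7, 10},
    {2, 5, 8, 11},
    {3, 6, 9, 12}
};
static int pos[NSTREAMS];       /* next unread element of each stream */

/* compare two streams by their current head element */
static int
stream_cmp(int a, int b)
{
    return data[a][pos[a]] - data[b][pos[b]];
}

/* sift heap[i] down to its place in a min-heap of n stream indexes */
static void
sift_down(int *heap, int n, int i)
{
    for (;;)
    {
        int     l = 2 * i + 1,
                r = 2 * i + 2,
                s = i,
                tmp;

        if (l < n && stream_cmp(heap[l], heap[s]) < 0)
            s = l;
        if (r < n && stream_cmp(heap[r], heap[s]) < 0)
            s = r;
        if (s == i)
            break;
        tmp = heap[i];
        heap[i] = heap[s];
        heap[s] = tmp;
        i = s;
    }
}

int
main(void)
{
    int     heap[NSTREAMS];
    int     n = NSTREAMS;
    int     i;

    /* build phase, as in gather_merge_init() + binaryheap_build() */
    for (i = 0; i < NSTREAMS; i++)
        heap[i] = i;
    for (i = n / 2 - 1; i >= 0; i--)
        sift_down(heap, n, i);

    while (n > 0)
    {
        int     s = heap[0];    /* stream with the smallest head */

        printf("%d ", data[s][pos[s]]);
        pos[s]++;

        if (pos[s] < STREAMLEN)
            sift_down(heap, n, 0);      /* like binaryheap_replace_first */
        else
        {
            heap[0] = heap[--n];        /* like binaryheap_remove_first */
            sift_down(heap, n, 0);
        }
    }
    putchar('\n');              /* prints 1 2 3 ... 12 */
    return 0;
}

[It prints 1 through 12 in order, pulling one element per iteration from
whichever stream currently has the smallest head, just as
gather_merge_getnext() returns one slot per call.]
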
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 71714bc..8b92c1a 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -341,6 +341,31 @@ _copyGather(const Gather *from)
return newnode;
}
+/*
+ * _copyGatherMerge
+ */
+static GatherMerge *
+_copyGatherMerge(const GatherMerge *from)
+{
+ GatherMerge *newnode = makeNode(GatherMerge);
+
+ /*
+ * copy node superclass fields
+ */
+ CopyPlanFields((const Plan *) from, (Plan *) newnode);
+
+ /*
+ * copy remainder of node
+ */
+ COPY_SCALAR_FIELD(num_workers);
+ COPY_SCALAR_FIELD(numCols);
+ COPY_POINTER_FIELD(sortColIdx, from->numCols * sizeof(AttrNumber));
+ COPY_POINTER_FIELD(sortOperators, from->numCols * sizeof(Oid));
+ COPY_POINTER_FIELD(collations, from->numCols * sizeof(Oid));
+ COPY_POINTER_FIELD(nullsFirst, from->numCols * sizeof(bool));
+
+ return newnode;
+}
/*
* CopyScanFields
@@ -4343,6 +4368,9 @@ copyObject(const void *from)
case T_Gather:
retval = _copyGather(from);
break;
+ case T_GatherMerge:
+ retval = _copyGatherMerge(from);
+ break;
case T_SeqScan:
retval = _copySeqScan(from);
break;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index ae86954..8a49801 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -447,6 +447,35 @@ _outGather(StringInfo str, const Gather *node)
}
static void
+_outGatherMerge(StringInfo str, const GatherMerge *node)
+{
+ int i;
+
+ WRITE_NODE_TYPE("GATHERMERGE");
+
+ _outPlanInfo(str, (const Plan *) node);
+
+ WRITE_INT_FIELD(num_workers);
+ WRITE_INT_FIELD(numCols);
+
+ appendStringInfoString(str, " :sortColIdx");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %d", node->sortColIdx[i]);
+
+ appendStringInfoString(str, " :sortOperators");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %u", node->sortOperators[i]);
+
+ appendStringInfoString(str, " :collations");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %u", node->collations[i]);
+
+ appendStringInfoString(str, " :nullsFirst");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %s", booltostr(node->nullsFirst[i]));
+}
+
+static void
_outScan(StringInfo str, const Scan *node)
{
WRITE_NODE_TYPE("SCAN");
@@ -1964,6 +1993,17 @@ _outLimitPath(StringInfo str, const LimitPath *node)
}
static void
+_outGatherMergePath(StringInfo str, const GatherMergePath *node)
+{
+ WRITE_NODE_TYPE("GATHERMERGEPATH");
+
+ _outPathInfo(str, (const Path *) node);
+
+ WRITE_NODE_FIELD(subpath);
+ WRITE_INT_FIELD(num_workers);
+}
+
+static void
_outNestPath(StringInfo str, const NestPath *node)
{
WRITE_NODE_TYPE("NESTPATH");
@@ -3322,6 +3362,9 @@ outNode(StringInfo str, const void *obj)
case T_Gather:
_outGather(str, obj);
break;
+ case T_GatherMerge:
+ _outGatherMerge(str, obj);
+ break;
case T_Scan:
_outScan(str, obj);
break;
@@ -3649,6 +3692,9 @@ outNode(StringInfo str, const void *obj)
case T_LimitPath:
_outLimitPath(str, obj);
break;
+ case T_GatherMergePath:
+ _outGatherMergePath(str, obj);
+ break;
case T_NestPath:
_outNestPath(str, obj);
break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 917e6c8..77a452e 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -2075,6 +2075,26 @@ _readGather(void)
}
/*
+ * _readGatherMerge
+ */
+static GatherMerge *
+_readGatherMerge(void)
+{
+ READ_LOCALS(GatherMerge);
+
+ ReadCommonPlan(&local_node->plan);
+
+ READ_INT_FIELD(num_workers);
+ READ_INT_FIELD(numCols);
+ READ_ATTRNUMBER_ARRAY(sortColIdx, local_node->numCols);
+ READ_OID_ARRAY(sortOperators, local_node->numCols);
+ READ_OID_ARRAY(collations, local_node->numCols);
+ READ_BOOL_ARRAY(nullsFirst, local_node->numCols);
+
+ READ_DONE();
+}
+
+/*
* _readHash
*/
static Hash *
@@ -2477,6 +2497,8 @@ parseNodeString(void)
return_value = _readUnique();
else if (MATCH("GATHER", 6))
return_value = _readGather();
+ else if (MATCH("GATHERMERGE", 11))
+ return_value = _readGatherMerge();
else if (MATCH("HASH", 4))
return_value = _readHash();
else if (MATCH("SETOP", 5))
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index 2a49639..53ca09d 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -126,6 +126,7 @@ bool enable_nestloop = true;
bool enable_material = true;
bool enable_mergejoin = true;
bool enable_hashjoin = true;
+bool enable_gathermerge = true;
typedef struct
{
@@ -391,6 +392,75 @@ cost_gather(GatherPath *path, PlannerInfo *root,
}
/*
+ * cost_gather_merge
+ * Determines and returns the cost of gather merge path.
+ *
+ * GatherMerge merges several pre-sorted input streams, using a heap that at
+ * any given instant holds the next tuple from each stream. If there are N
+ * streams, we need about N*log2(N) tuple comparisons to construct the heap at
+ * startup, and then for each output tuple, about log2(N) comparisons to delete
+ * the top heap entry and another log2(N) comparisons to insert its successor
+ * from the same stream.
+ *
+ * The heap is never spilled to disk, since we assume N is not very large. So
+ * this is much simpler than cost_sort.
+ */
+void
+cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+ RelOptInfo *rel, ParamPathInfo *param_info,
+ Cost input_startup_cost, Cost input_total_cost,
+ double *rows)
+{
+ Cost startup_cost = 0;
+ Cost run_cost = 0;
+ Cost comparison_cost;
+ double N;
+ double logN;
+
+
+ /* Mark the path with the correct row estimate */
+ if (rows)
+ path->path.rows = *rows;
+ else if (param_info)
+ path->path.rows = param_info->ppi_rows;
+ else
+ path->path.rows = rel->rows;
+
+ if (!enable_gathermerge)
+ startup_cost += disable_cost;
+
+ /*
+ * Count the leader as well, since it always participates in the gather
+ * merge scan. Also avoid log(0)...
+ */
+ N = (path->num_workers < 2) ? 2.0 : (double) path->num_workers + 1;
+ logN = LOG2(N);
+
+ /* Assumed cost per tuple comparison */
+ comparison_cost = 2.0 * cpu_operator_cost;
+
+ /* Heap creation cost */
+ startup_cost += comparison_cost * N * logN;
+
+ /* Per-tuple heap maintenance cost */
+ run_cost += path->path.rows * comparison_cost * 2.0 * logN;
+
+ /* small cost for heap management, like cost_merge_append */
+ run_cost += cpu_operator_cost * path->path.rows;
+
+ /*
+ * Parallel setup and communication cost. Gather Merge requires tuples
+ * to be read from each worker in wait mode, so charge some extra cost
+ * for that.
+ */
+ startup_cost += parallel_setup_cost;
+ run_cost += parallel_tuple_cost * path->path.rows;
+
+ path->path.startup_cost = startup_cost + input_startup_cost;
+ path->path.total_cost = (startup_cost + run_cost + input_total_cost);
+}
+
+/*
* cost_index
* Determines and returns the cost of scanning a relation using an index.
*
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index ad49674..5fdc1bd 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -270,6 +270,10 @@ static ModifyTable *make_modifytable(PlannerInfo *root,
List *resultRelations, List *subplans,
List *withCheckOptionLists, List *returningLists,
List *rowMarks, OnConflictExpr *onconflict, int epqParam);
+static GatherMerge *create_gather_merge_plan(PlannerInfo *root,
+ GatherMergePath *best_path);
+static GatherMerge *make_gather_merge(List *qptlist, List *qpqual,
+ int nworkers, Plan *subplan);
/*
@@ -463,6 +467,10 @@ create_plan_recurse(PlannerInfo *root, Path *best_path, int flags)
(LimitPath *) best_path,
flags);
break;
+ case T_GatherMerge:
+ plan = (Plan *) create_gather_merge_plan(root,
+ (GatherMergePath *) best_path);
+ break;
default:
elog(ERROR, "unrecognized node type: %d",
(int) best_path->pathtype);
@@ -2246,6 +2254,89 @@ create_limit_plan(PlannerInfo *root, LimitPath *best_path, int flags)
return plan;
}
+/*
+ * create_gather_merge_plan
+ *
+ * Create a Gather Merge plan for 'best_path' and (recursively)
+ * plans for its subpaths.
+ */
+static GatherMerge *
+create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
+{
+ GatherMerge *gm_plan;
+ Plan *subplan;
+ List *pathkeys = best_path->path.pathkeys;
+ int numsortkeys;
+ AttrNumber *sortColIdx;
+ Oid *sortOperators;
+ Oid *collations;
+ bool *nullsFirst;
+
+ subplan = create_plan_recurse(root, best_path->subpath, CP_EXACT_TLIST);
+
+ gm_plan = make_gather_merge(subplan->targetlist,
+ NIL,
+ best_path->num_workers,
+ subplan);
+
+ copy_generic_path_info(&gm_plan->plan, &best_path->path);
+
+ if (pathkeys)
+ {
+ /* Compute sort column info, and adjust GatherMerge tlist as needed */
+ (void) prepare_sort_from_pathkeys(&gm_plan->plan, pathkeys,
+ best_path->path.parent->relids,
+ NULL,
+ true,
+ &gm_plan->numCols,
+ &gm_plan->sortColIdx,
+ &gm_plan->sortOperators,
+ &gm_plan->collations,
+ &gm_plan->nullsFirst);
+
+
+ /* Compute sort column info, and adjust subplan's tlist as needed */
+ subplan = prepare_sort_from_pathkeys(subplan, pathkeys,
+ best_path->subpath->parent->relids,
+ gm_plan->sortColIdx,
+ false,
+ &numsortkeys,
+ &sortColIdx,
+ &sortOperators,
+ &collations,
+ &nullsFirst);
+
+ /*
+ * Check that we got the same sort key information. We just Assert
+ * that the sortops match, since those depend only on the pathkeys;
+ * but it seems like a good idea to check the sort column numbers
+ * explicitly, to ensure the tlists really do match up.
+ */
+ Assert(numsortkeys == gm_plan->numCols);
+ if (memcmp(sortColIdx, gm_plan->sortColIdx,
+ numsortkeys * sizeof(AttrNumber)) != 0)
+ elog(ERROR, "GatherMerge child's targetlist doesn't match GatherMerge");
+ Assert(memcmp(sortOperators, gm_plan->sortOperators,
+ numsortkeys * sizeof(Oid)) == 0);
+ Assert(memcmp(collations, gm_plan->collations,
+ numsortkeys * sizeof(Oid)) == 0);
+ Assert(memcmp(nullsFirst, gm_plan->nullsFirst,
+ numsortkeys * sizeof(bool)) == 0);
+
+ /* Now, insert a Sort node if subplan isn't sufficiently ordered */
+ if (!pathkeys_contained_in(pathkeys, best_path->subpath->pathkeys))
+ subplan = (Plan *) make_sort(subplan, numsortkeys,
+ sortColIdx, sortOperators,
+ collations, nullsFirst);
+
+ gm_plan->plan.lefttree = subplan;
+ }
+
+ /* use parallel mode for parallel plans. */
+ root->glob->parallelModeNeeded = true;
+
+ return gm_plan;
+}
/*****************************************************************************
*
@@ -5909,6 +6000,25 @@ make_gather(List *qptlist,
return node;
}
+static GatherMerge *
+make_gather_merge(List *qptlist,
+ List *qpqual,
+ int nworkers,
+ Plan *subplan)
+{
+ GatherMerge *node = makeNode(GatherMerge);
+ Plan *plan = &node->plan;
+
+ /* cost should be inserted by caller */
+ plan->targetlist = qptlist;
+ plan->qual = qpqual;
+ plan->lefttree = subplan;
+ plan->righttree = NULL;
+ node->num_workers = nworkers;
+
+ return node;
+}
+
/*
* distinctList is a list of SortGroupClauses, identifying the targetlist
* items that should be considered by the SetOp filter. The input path must
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 644b8b6..ea86c09 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -3725,14 +3725,61 @@ create_grouping_paths(PlannerInfo *root,
/*
* Now generate a complete GroupAgg Path atop of the cheapest partial
- * path. We need only bother with the cheapest path here, as the
- * output of Gather is never sorted.
+ * path. We generate a Gather path based on the cheapest partial path,
+ * and a GatherMerge path for each partial path that is properly sorted.
*/
if (grouped_rel->partial_pathlist)
{
Path *path = (Path *) linitial(grouped_rel->partial_pathlist);
double total_groups = path->rows * path->parallel_workers;
+ /*
+ * GatherMerge output is always sorted, so if there is a GROUP BY clause,
+ * try to generate a GatherMerge path for each partial path.
+ */
+ if (parse->groupClause)
+ {
+ foreach(lc, grouped_rel->partial_pathlist)
+ {
+ Path *gmpath = (Path *) lfirst(lc);
+ double total_groups = gmpath->rows * gmpath->parallel_workers;
+
+ if (!pathkeys_contained_in(root->group_pathkeys, gmpath->pathkeys))
+ continue;
+
+ /* create gather merge path */
+ gmpath = (Path *) create_gather_merge_path(root,
+ grouped_rel,
+ gmpath,
+ NULL,
+ root->group_pathkeys,
+ NULL,
+ &total_groups);
+
+ if (parse->hasAggs)
+ add_path(grouped_rel, (Path *)
+ create_agg_path(root,
+ grouped_rel,
+ gmpath,
+ target,
+ parse->groupClause ? AGG_SORTED : AGG_PLAIN,
+ AGGSPLIT_FINAL_DESERIAL,
+ parse->groupClause,
+ (List *) parse->havingQual,
+ &agg_final_costs,
+ dNumGroups));
+ else
+ add_path(grouped_rel, (Path *)
+ create_group_path(root,
+ grouped_rel,
+ gmpath,
+ target,
+ parse->groupClause,
+ (List *) parse->havingQual,
+ dNumGroups));
+ }
+ }
+
path = (Path *) create_gather_path(root,
grouped_rel,
path,
@@ -3870,6 +3917,12 @@ create_grouping_paths(PlannerInfo *root,
/* Now choose the best path(s) */
set_cheapest(grouped_rel);
+ /*
+ * The partial paths generated for the grouped relation are of no further
+ * use, so just reset the partial pathlist to NIL.
+ */
+ grouped_rel->partial_pathlist = NIL;
+
return grouped_rel;
}
@@ -4166,6 +4219,38 @@ create_distinct_paths(PlannerInfo *root,
}
}
+ /*
+ * Generate GatherMerge path for each partial path.
+ */
+ foreach(lc, input_rel->partial_pathlist)
+ {
+ Path *path = (Path *) lfirst(lc);
+ double total_groups = path->rows * path->parallel_workers;
+
+ if (!pathkeys_contained_in(needed_pathkeys, path->pathkeys))
+ {
+ path = (Path *) create_sort_path(root, distinct_rel,
+ path,
+ needed_pathkeys,
+ -1.0);
+ }
+
+ /* create gather merge path */
+ path = (Path *) create_gather_merge_path(root,
+ distinct_rel,
+ path,
+ NULL,
+ needed_pathkeys,
+ NULL,
+ &total_groups);
+ add_path(distinct_rel, (Path *)
+ create_upper_unique_path(root,
+ distinct_rel,
+ path,
+ list_length(root->distinct_pathkeys),
+ numDistinctRows));
+ }
+
/* For explicit-sort case, always use the more rigorous clause */
if (list_length(root->distinct_pathkeys) <
list_length(root->sort_pathkeys))
@@ -4310,6 +4395,41 @@ create_ordered_paths(PlannerInfo *root,
ordered_rel->useridiscurrent = input_rel->useridiscurrent;
ordered_rel->fdwroutine = input_rel->fdwroutine;
+ foreach(lc, input_rel->partial_pathlist)
+ {
+ Path *path = (Path *) lfirst(lc);
+ bool is_sorted;
+ double total_groups = path->rows * path->parallel_workers;
+
+ is_sorted = pathkeys_contained_in(root->sort_pathkeys,
+ path->pathkeys);
+ if (!is_sorted)
+ {
+ /* An explicit sort here can take advantage of LIMIT */
+ path = (Path *) create_sort_path(root,
+ ordered_rel,
+ path,
+ root->sort_pathkeys,
+ limit_tuples);
+ }
+
+ /* create gather merge path */
+ path = (Path *) create_gather_merge_path(root,
+ ordered_rel,
+ path,
+ target,
+ root->sort_pathkeys,
+ NULL,
+ &total_groups);
+
+ /* Add projection step if needed */
+ if (path->pathtarget != target)
+ path = apply_projection_to_path(root, ordered_rel,
+ path, target);
+
+ add_path(ordered_rel, path);
+ }
+
foreach(lc, input_rel->pathlist)
{
Path *path = (Path *) lfirst(lc);
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index d10a983..d14db7d 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -605,6 +605,7 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
break;
case T_Gather:
+ case T_GatherMerge:
set_upper_references(root, plan, rtoffset);
break;
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 263ba45..760f519 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2682,6 +2682,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
case T_Sort:
case T_Unique:
case T_Gather:
+ case T_GatherMerge:
case T_SetOp:
case T_Group:
break;
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index abb7507..07e1532 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1630,6 +1630,66 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
}
/*
+ * create_gather_merge_path
+ *
+ * Creates a path corresponding to a gather merge scan, returning
+ * the pathnode.
+ */
+GatherMergePath *
+create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
+ PathTarget *target, List *pathkeys,
+ Relids required_outer, double *rows)
+{
+ GatherMergePath *pathnode = makeNode(GatherMergePath);
+ Cost input_startup_cost = 0;
+ Cost input_total_cost = 0;
+
+ Assert(subpath->parallel_safe);
+ Assert(pathkeys);
+
+ pathnode->path.pathtype = T_GatherMerge;
+ pathnode->path.parent = rel;
+ pathnode->path.param_info = get_baserel_parampathinfo(root, rel,
+ required_outer);
+ pathnode->path.parallel_aware = false;
+
+ pathnode->subpath = subpath;
+ pathnode->num_workers = subpath->parallel_workers;
+ pathnode->path.pathkeys = pathkeys;
+ pathnode->path.pathtarget = target ? target : rel->reltarget;
+ pathnode->path.rows += subpath->rows;
+
+ if (pathkeys_contained_in(pathkeys, subpath->pathkeys))
+ {
+ /* Subpath is adequately ordered, we won't need to sort it */
+ input_startup_cost += subpath->startup_cost;
+ input_total_cost += subpath->total_cost;
+ }
+ else
+ {
+ /* We'll need to insert a Sort node, so include cost for that */
+ Path sort_path; /* dummy for result of cost_sort */
+
+ cost_sort(&sort_path,
+ root,
+ pathkeys,
+ subpath->total_cost,
+ subpath->rows,
+ subpath->pathtarget->width,
+ 0.0,
+ work_mem,
+ -1);
+ input_startup_cost += sort_path.startup_cost;
+ input_total_cost += sort_path.total_cost;
+ }
+
+ cost_gather_merge(pathnode, root, rel, pathnode->path.param_info,
+ input_startup_cost, input_total_cost, rows);
+
+ return pathnode;
+}
+
+/*
* translate_sub_tlist - get subquery column numbers represented by tlist
*
* The given targetlist usually contains only Vars referencing the given relid.
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 65660c1..f605284 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -894,6 +894,15 @@ static struct config_bool ConfigureNamesBool[] =
true,
NULL, NULL, NULL
},
+ {
+ {"enable_gathermerge", PGC_USERSET, QUERY_TUNING_METHOD,
+ gettext_noop("Enables the planner's use of gather merge plans."),
+ NULL
+ },
+ &enable_gathermerge,
+ true,
+ NULL, NULL, NULL
+ },
{
{"geqo", PGC_USERSET, QUERY_TUNING_GEQO,
diff --git a/src/include/executor/nodeGatherMerge.h b/src/include/executor/nodeGatherMerge.h
new file mode 100644
index 0000000..58dcebf
--- /dev/null
+++ b/src/include/executor/nodeGatherMerge.h
@@ -0,0 +1,27 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.h
+ * prototypes for nodeGatherMerge.c
+ *
+ *
+ * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/executor/nodeGatherMerge.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef NODEGATHERMERGE_H
+#define NODEGATHERMERGE_H
+
+#include "nodes/execnodes.h"
+
+extern GatherMergeState *ExecInitGatherMerge(GatherMerge * node,
+ EState *estate,
+ int eflags);
+extern TupleTableSlot *ExecGatherMerge(GatherMergeState * node);
+extern void ExecEndGatherMerge(GatherMergeState * node);
+extern void ExecReScanGatherMerge(GatherMergeState * node);
+extern void ExecShutdownGatherMerge(GatherMergeState * node);
+
+#endif /* NODEGATHERMERGE_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index f6f73f3..0c12e27 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -1969,6 +1969,33 @@ typedef struct GatherState
} GatherState;
/* ----------------
+ * GatherMergeState information
+ *
+ * Gather Merge nodes launch one or more parallel workers, run a
+ * sorted subplan in those workers, and merge the results into a
+ * single sorted stream.
+ * ----------------
+ */
+struct GMReaderTuple;
+
+typedef struct GatherMergeState
+{
+ PlanState ps; /* its first field is NodeTag */
+ bool initialized;
+ struct ParallelExecutorInfo *pei;
+ int nreaders;
+ int nworkers_launched;
+ struct TupleQueueReader **reader;
+ TupleDesc tupDesc;
+ TupleTableSlot **gm_slots;
+ struct binaryheap *gm_heap; /* binary heap of slot indices */
+ bool gm_initialized; /* gather merge initialized? */
+ bool need_to_scan_locally;
+ int gm_nkeys;
+ SortSupport gm_sortkeys; /* array of length gm_nkeys */
+ struct GMReaderTupleBuffer *gm_tuple_buffers; /* tuple buffer per reader */
+} GatherMergeState;
+
+/* ----------------
* HashState information
* ----------------
*/
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 88297bb..edfb917 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -75,6 +75,7 @@ typedef enum NodeTag
T_WindowAgg,
T_Unique,
T_Gather,
+ T_GatherMerge,
T_Hash,
T_SetOp,
T_LockRows,
@@ -123,6 +124,7 @@ typedef enum NodeTag
T_WindowAggState,
T_UniqueState,
T_GatherState,
+ T_GatherMergeState,
T_HashState,
T_SetOpState,
T_LockRowsState,
@@ -244,6 +246,7 @@ typedef enum NodeTag
T_MaterialPath,
T_UniquePath,
T_GatherPath,
+ T_GatherMergePath,
T_ProjectionPath,
T_SortPath,
T_GroupPath,
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index e2fbc7d..ec319bf 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -765,6 +765,22 @@ typedef struct Gather
bool invisible; /* suppress EXPLAIN display (for testing)? */
} Gather;
+/* ------------
+ * gather merge node
+ * ------------
+ */
+typedef struct GatherMerge
+{
+ Plan plan;
+ int num_workers;
+ /* remaining fields are just like the sort-key info in struct Sort */
+ int numCols; /* number of sort-key columns */
+ AttrNumber *sortColIdx; /* their indexes in the target list */
+ Oid *sortOperators; /* OIDs of operators to sort them by */
+ Oid *collations; /* OIDs of collations */
+ bool *nullsFirst; /* NULLS FIRST/LAST directions */
+} GatherMerge;
+
/* ----------------
* hash build node
*
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index 3a1255a..e9795f9 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -1194,6 +1194,19 @@ typedef struct GatherPath
} GatherPath;
/*
+ * GatherMergePath runs several copies of a plan in parallel and merges
+ * their sorted results. For gather merge, the parallel leader always
+ * executes the plan as well.
+ */
+typedef struct GatherMergePath
+{
+ Path path;
+ Path *subpath; /* path for each worker */
+ int num_workers; /* number of workers sought to help */
+} GatherMergePath;
+
+
+/*
* All join-type paths share these fields.
*/
diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h
index 2a4df2f..e986896 100644
--- a/src/include/optimizer/cost.h
+++ b/src/include/optimizer/cost.h
@@ -66,6 +66,7 @@ extern bool enable_nestloop;
extern bool enable_material;
extern bool enable_mergejoin;
extern bool enable_hashjoin;
+extern bool enable_gathermerge;
extern int constraint_exclusion;
extern double clamp_row_est(double nrows);
@@ -198,5 +199,9 @@ extern Selectivity clause_selectivity(PlannerInfo *root,
int varRelid,
JoinType jointype,
SpecialJoinInfo *sjinfo);
+extern void cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+ RelOptInfo *rel, ParamPathInfo *param_info,
+ Cost input_startup_cost, Cost input_total_cost,
+ double *rows);
#endif /* COST_H */
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index 71d9154..1df5861 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -267,5 +267,11 @@ extern ParamPathInfo *get_joinrel_parampathinfo(PlannerInfo *root,
List **restrict_clauses);
extern ParamPathInfo *get_appendrel_parampathinfo(RelOptInfo *appendrel,
Relids required_outer);
+extern GatherMergePath *create_gather_merge_path(PlannerInfo *root,
+ RelOptInfo *rel, Path *subpath,
+ PathTarget *target,
+ List *pathkeys,
+ Relids required_outer,
+ double *rows);
#endif /* PATHNODE_H */
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
index f06cfa4..5c547e2 100644
--- a/src/test/regress/expected/rangefuncs.out
+++ b/src/test/regress/expected/rangefuncs.out
@@ -2,6 +2,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
name | setting
----------------------+---------
enable_bitmapscan | on
+ enable_gathermerge | on
enable_hashagg | on
enable_hashjoin | on
enable_indexonlyscan | on
@@ -12,7 +13,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
enable_seqscan | on
enable_sort | on
enable_tidscan | on
-(11 rows)
+(12 rows)
CREATE TABLE foo2(fooid int, f2 int);
INSERT INTO foo2 VALUES(1, 11);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 6c6d519..a6c4a5f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -770,6 +770,8 @@ GV
Gather
GatherPath
GatherState
+GatherMerge
+GatherMergeState
Gene
GenericCosts
GenericExprState
On Sat, Nov 12, 2016 at 1:56 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:
> On Fri, Nov 4, 2016 at 8:30 AM, Thomas Munro
> <thomas.munro@enterprisedb.com> wrote:
>> + * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
>> + * Portions Copyright (c) 1994, Regents of the University of California
>>
>> Shouldn't this say just "(c) 2016, PostgreSQL Global Development Group"?
>
> Fixed.
The year also needs updating to 2016 in nodeGatherMerge.h.
+ /* Per-tuple heap maintenance cost */
+ run_cost += path->path.rows * comparison_cost * 2.0 * logN;

Why multiply by two? The comment above this code says "about log2(N)
comparisons to delete the top heap entry and another log2(N)
comparisons to insert its successor". In fact gather_merge_getnext
calls binaryheap_replace_first, which replaces the top element without
any comparisons at all and then performs a sift-down in log2(N)
comparisons to find its new position. There is no per-tuple "delete"
involved. We "replace" the top element with the value it already had,
just to trigger the sift-down, because we know that our comparator
function might have a new opinion of the sort order of this element.
Very clever! The comment and the 2.0 factor in cost_gather_merge seem
to be wrong though -- or am I misreading the code? See
cost_merge_append. That just got tweaked in commit 34ca0905.
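
To put rough numbers on it, here's a throwaway arithmetic sketch (not patch
code -- the row and worker counts are invented, and cpu_operator_cost is
assumed to be at its default) comparing the two estimates:

#include <math.h>
#include <stdio.h>

int
main(void)
{
    double  cpu_operator_cost = 0.0025; /* planner default */
    double  comparison_cost = 2.0 * cpu_operator_cost;
    double  rows = 200000.0;            /* invented output size */
    double  N = 5.0;                    /* 4 workers + leader */
    double  logN = log2(N);

    /* patch as posted: log2(N) to "delete" plus log2(N) to "insert" */
    printf("with 2.0 factor: %.2f\n", rows * comparison_cost * 2.0 * logN);

    /* binaryheap_replace_first: one sift-down of log2(N) comparisons */
    printf("sift-down only:  %.2f\n", rows * comparison_cost * logN);
    return 0;
}

So dropping the 2.0 factor roughly halves the run cost charged for heap
maintenance, which could matter when this path is compared against a plain
Gather plus a top-level Sort.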
> Looking at the plan, I realize that this is happening because of wrong
> costing for Gather Merge. Here in the plan we can see that the row count
> estimated by Gather Merge is wrong. This is because the earlier patch
> considered rows = subpath->rows, which is not true since the subpath is a
> partial path, so we need to multiply it by the number of workers. The
> attached patch also fixes this issue. I also ran the TPC-H benchmark with
> the patch and the results are the same as earlier.
In create_grouping_paths:

+ double total_groups = gmpath->rows * gmpath->parallel_workers;

This hides a variable of the same name in the enclosing scope. Maybe
confusing?
In some other places like create_ordered_paths:

+ double total_groups = path->rows * path->parallel_workers;

Though it probably made sense to use this variable name in
create_grouping_paths, wouldn't total_rows be better here?
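
Reduced to its essence, the hazard is just ordinary C shadowing (a contrived
standalone example, nothing from the patch):

#include <stdio.h>

int
main(void)
{
    double  total_groups = 100.0;       /* outer estimate */

    {
        double  total_groups = 200.0;   /* shadows the outer variable */

        printf("inner: %.0f\n", total_groups);  /* prints 200 */
    }
    printf("outer: %.0f\n", total_groups);      /* still 100 */
    return 0;
}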
It feels weird to be working back to a total row count estimate from
the partial one by simply multiplying by path->parallel_workers.
Gather Merge will underestimate the total rows when parallel_workers <
4, if using partial row estimates ultimately from cost_seqscan which
assume some leader contribution. I don't have a better idea though.
Reversing cost_seqscan's logic certainly doesn't seem right. I don't
know how to make them agree on the leader's contribution AND give
principled answers, since there seems to be some kind of cyclic
dependency in the costing logic (cost_seqscan really needs to be given
a leader contribution estimate from its superpath which knows whether
it will allow the leader to pull tuples greedily/fairly or not, but
that superpath hasn't been created yet; cost_gather_merge needs the
row count from its subpath). Or maybe I'm just confused.
--
Thomas Munro
http://www.enterprisedb.com
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On Mon, Nov 14, 2016 at 3:51 PM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:
> On Sat, Nov 12, 2016 at 1:56 AM, Rushabh Lathia
> <rushabh.lathia@gmail.com> wrote:
>> On Fri, Nov 4, 2016 at 8:30 AM, Thomas Munro
>> <thomas.munro@enterprisedb.com> wrote:
>>> + * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
>>> + * Portions Copyright (c) 1994, Regents of the University of California
>>>
>>> Shouldn't this say just "(c) 2016, PostgreSQL Global Development Group"?
>>
>> Fixed.
>
> The year also needs updating to 2016 in nodeGatherMerge.h.
Oops sorry, fixed now.
> + /* Per-tuple heap maintenance cost */
> + run_cost += path->path.rows * comparison_cost * 2.0 * logN;
>
> Why multiply by two? The comment above this code says "about log2(N)
> comparisons to delete the top heap entry and another log2(N)
> comparisons to insert its successor". In fact gather_merge_getnext
> calls binaryheap_replace_first, which replaces the top element without
> any comparisons at all and then performs a sift-down in log2(N)
> comparisons to find its new position. There is no per-tuple "delete"
> involved. We "replace" the top element with the value it already had,
> just to trigger the sift-down, because we know that our comparator
> function might have a new opinion of the sort order of this element.
> Very clever! The comment and the 2.0 factor in cost_gather_merge seem
> to be wrong though -- or am I misreading the code? See
> cost_merge_append. That just got tweaked in commit 34ca0905.
Fixed.
>> Looking at the plan, I realize that this is happening because of wrong
>> costing for Gather Merge. Here in the plan we can see that the row count
>> estimated by Gather Merge is wrong. This is because the earlier patch
>> considered rows = subpath->rows, which is not true since the subpath is a
>> partial path, so we need to multiply it by the number of workers. The
>> attached patch also fixes this issue. I also ran the TPC-H benchmark with
>> the patch and the results are the same as earlier.
>
> In create_grouping_paths:
>
> + double total_groups = gmpath->rows * gmpath->parallel_workers;
>
> This hides a variable of the same name in the enclosing scope. Maybe
> confusing?
>
> In some other places like create_ordered_paths:
>
> + double total_groups = path->rows * path->parallel_workers;
>
> Though it probably made sense to use this variable name in
> create_grouping_paths, wouldn't total_rows be better here?
Initially I just copied it from the other places. I agree with you that in
create_ordered_paths, total_rows makes more sense.
> It feels weird to be working back to a total row count estimate from
> the partial one by simply multiplying by path->parallel_workers.
> Gather Merge will underestimate the total rows when parallel_workers <
> 4, if using partial row estimates ultimately from cost_seqscan which
> assume some leader contribution. I don't have a better idea though.
> Reversing cost_seqscan's logic certainly doesn't seem right. I don't
> know how to make them agree on the leader's contribution AND give
> principled answers, since there seems to be some kind of cyclic
> dependency in the costing logic (cost_seqscan really needs to be given
> a leader contribution estimate from its superpath which knows whether
> it will allow the leader to pull tuples greedily/fairly or not, but
> that superpath hasn't been created yet; cost_gather_merge needs the
> row count from its subpath). Or maybe I'm just confused.
Yes, I agree with you, but we can't really change cost_seqscan. Another
option I can think of is to calculate the rows for gather merge by
reversing the formula used in cost_seqscan. That way we can completely
remove the rows argument from create_gather_merge_path(), and then inside
create_gather_merge_path() calculate total_rows using the same formula as
cost_seqscan. This is working fine, but I'm not quite sure about the
approach, so I attached that part of the changes as a separate patch. Any
suggestions?
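
To be concrete, the idea is along these lines (an illustrative sketch only:
the helper name is invented here, and the 0.3 constant simply mirrors the
leader-contribution heuristic in cost_seqscan, so this is not necessarily
exactly what the separate patch does):

#include <stdio.h>

/*
 * Reverse of cost_seqscan's parallel divisor: given a partial path's
 * per-participant row estimate, work back to a total row estimate.
 */
static double
gather_merge_total_rows(double partial_rows, int parallel_workers)
{
    double  parallel_divisor = parallel_workers;
    double  leader_contribution = 1.0 - (0.3 * parallel_workers);

    if (leader_contribution > 0)
        parallel_divisor += leader_contribution;

    return partial_rows * parallel_divisor;
}

int
main(void)
{
    /* with 2 workers the leader still helps: 40000 * 2.4 = 96000 */
    printf("%.0f\n", gather_merge_total_rows(40000.0, 2));
    /* with 4 workers the leader term drops out: 40000 * 4 = 160000 */
    printf("%.0f\n", gather_merge_total_rows(40000.0, 4));
    return 0;
}

At least this way the total row estimate agrees with cost_seqscan's divisor
instead of a bare multiply by parallel_workers.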
--
Rushabh Lathia
www.EnterpriseDB.com
Attachments:
gather_merge_v4_minor_changes.patch (application/x-download)
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 0a669d9..73cfe28 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -881,6 +881,9 @@ ExplainNode(PlanState *planstate, List *ancestors,
case T_Gather:
pname = sname = "Gather";
break;
+ case T_GatherMerge:
+ pname = sname = "Gather Merge";
+ break;
case T_IndexScan:
pname = sname = "Index Scan";
break;
@@ -1370,6 +1373,26 @@ ExplainNode(PlanState *planstate, List *ancestors,
ExplainPropertyBool("Single Copy", gather->single_copy, es);
}
break;
+ case T_GatherMerge:
+ {
+ GatherMerge *gm = (GatherMerge *) plan;
+
+ show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ if (plan->qual)
+ show_instrumentation_count("Rows Removed by Filter", 1,
+ planstate, es);
+ ExplainPropertyInteger("Workers Planned",
+ gm->num_workers, es);
+ if (es->analyze)
+ {
+ int nworkers;
+
+ nworkers = ((GatherMergeState *) planstate)->nworkers_launched;
+ ExplainPropertyInteger("Workers Launched",
+ nworkers, es);
+ }
+ }
+ break;
case T_FunctionScan:
if (es->verbose)
{
diff --git a/src/backend/executor/Makefile b/src/backend/executor/Makefile
index 51edd4c..7e2f4e2 100644
--- a/src/backend/executor/Makefile
+++ b/src/backend/executor/Makefile
@@ -19,7 +19,7 @@ OBJS = execAmi.o execCurrent.o execGrouping.o execIndexing.o execJunk.o \
nodeBitmapAnd.o nodeBitmapOr.o \
nodeBitmapHeapscan.o nodeBitmapIndexscan.o nodeCustom.o nodeGather.o \
nodeHash.o nodeHashjoin.o nodeIndexscan.o nodeIndexonlyscan.o \
- nodeLimit.o nodeLockRows.o \
+ nodeLimit.o nodeLockRows.o nodeGatherMerge.o \
nodeMaterial.o nodeMergeAppend.o nodeMergejoin.o nodeModifyTable.o \
nodeNestloop.o nodeFunctionscan.o nodeRecursiveunion.o nodeResult.o \
nodeSamplescan.o nodeSeqscan.o nodeSetOp.o nodeSort.o nodeUnique.o \
diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index 554244f..45b36af 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -101,6 +101,7 @@
#include "executor/nodeModifyTable.h"
#include "executor/nodeNestloop.h"
#include "executor/nodeGather.h"
+#include "executor/nodeGatherMerge.h"
#include "executor/nodeRecursiveunion.h"
#include "executor/nodeResult.h"
#include "executor/nodeSamplescan.h"
@@ -314,6 +315,11 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
estate, eflags);
break;
+ case T_GatherMerge:
+ result = (PlanState *) ExecInitGatherMerge((GatherMerge *) node,
+ estate, eflags);
+ break;
+
case T_Hash:
result = (PlanState *) ExecInitHash((Hash *) node,
estate, eflags);
@@ -515,6 +521,10 @@ ExecProcNode(PlanState *node)
result = ExecGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ result = ExecGatherMerge((GatherMergeState *) node);
+ break;
+
case T_HashState:
result = ExecHash((HashState *) node);
break;
@@ -673,6 +683,10 @@ ExecEndNode(PlanState *node)
ExecEndGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ ExecEndGatherMerge((GatherMergeState *) node);
+ break;
+
case T_IndexScanState:
ExecEndIndexScan((IndexScanState *) node);
break;
@@ -806,6 +820,9 @@ ExecShutdownNode(PlanState *node)
case T_GatherState:
ExecShutdownGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ ExecShutdownGatherMerge((GatherMergeState *) node);
+ break;
default:
break;
}
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
new file mode 100644
index 0000000..4b6410b
--- /dev/null
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -0,0 +1,723 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.c
+ * routines to handle GatherMerge nodes.
+ *
+ * Portions Copyright (c) 1996-2016, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/backend/executor/nodeGatherMerge.c
+ *
+ *-------------------------------------------------------------------------
+ */
+/* INTERFACE ROUTINES
+ * ExecInitGatherMerge - initialize the GatherMerge node
+ * ExecGatherMerge - retrieve the next tuple from the node
+ * ExecEndGatherMerge - shut down the GatherMerge node
+ * ExecReScanGatherMerge - rescan the GatherMerge node
+ */
+
+#include "postgres.h"
+
+#include "access/relscan.h"
+#include "access/xact.h"
+#include "executor/execdebug.h"
+#include "executor/execParallel.h"
+#include "executor/nodeGatherMerge.h"
+#include "executor/nodeSubplan.h"
+#include "executor/tqueue.h"
+#include "lib/binaryheap.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+#include "utils/rel.h"
+
+/*
+ * Tuple array for each worker
+ */
+typedef struct GMReaderTupleBuffer
+{
+ HeapTuple *tuple;
+ int readCounter;
+ int nTuples;
+ bool done;
+} GMReaderTupleBuffer;
+
+/*
+ * Tuple array size. Performance testing showed that the benefit of an
+ * array size larger than 10 is not worth the extra memory consumed by
+ * the tuple array.
+ */
+#define MAX_TUPLE_STORE 10
+
+static int32 heap_compare_slots(Datum a, Datum b, void *arg);
+static TupleTableSlot *gather_merge_getnext(GatherMergeState *gm_state);
+static HeapTuple gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait, bool *done);
+static void gather_merge_init(GatherMergeState *gm_state);
+static void ExecShutdownGatherMergeWorkers(GatherMergeState *node);
+static bool gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait);
+static void form_tuple_array(GatherMergeState *gm_state, int reader);
+
+/* ----------------------------------------------------------------
+ * ExecInitGatherMerge
+ * ----------------------------------------------------------------
+ */
+GatherMergeState *
+ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags)
+{
+ GatherMergeState *gm_state;
+ Plan *outerNode;
+ bool hasoid;
+ TupleDesc tupDesc;
+
+ /* Gather Merge node doesn't have an innerPlan. */
+ Assert(innerPlan(node) == NULL);
+
+ /*
+ * create state structure
+ */
+ gm_state = makeNode(GatherMergeState);
+ gm_state->ps.plan = (Plan *) node;
+ gm_state->ps.state = estate;
+
+ /*
+ * Miscellaneous initialization
+ *
+ * create expression context for node
+ */
+ ExecAssignExprContext(estate, &gm_state->ps);
+
+ /*
+ * initialize child expressions
+ */
+ gm_state->ps.targetlist = (List *)
+ ExecInitExpr((Expr *) node->plan.targetlist,
+ (PlanState *) gm_state);
+ gm_state->ps.qual = (List *)
+ ExecInitExpr((Expr *) node->plan.qual,
+ (PlanState *) gm_state);
+
+ /*
+ * tuple table initialization
+ */
+ ExecInitResultTupleSlot(estate, &gm_state->ps);
+
+ /*
+ * now initialize outer plan
+ */
+ outerNode = outerPlan(node);
+ outerPlanState(gm_state) = ExecInitNode(outerNode, estate, eflags);
+
+ gm_state->ps.ps_TupFromTlist = false;
+
+ /*
+ * Initialize result tuple type and projection info.
+ */
+ ExecAssignResultTypeFromTL(&gm_state->ps);
+ ExecAssignProjectionInfo(&gm_state->ps, NULL);
+
+ gm_state->gm_initialized = false;
+
+ /*
+ * initialize sort-key information
+ */
+ if (node->numCols)
+ {
+ int i;
+
+ gm_state->gm_nkeys = node->numCols;
+ gm_state->gm_sortkeys = palloc0(sizeof(SortSupportData) * node->numCols);
+ for (i = 0; i < node->numCols; i++)
+ {
+ SortSupport sortKey = gm_state->gm_sortkeys + i;
+
+ sortKey->ssup_cxt = CurrentMemoryContext;
+ sortKey->ssup_collation = node->collations[i];
+ sortKey->ssup_nulls_first = node->nullsFirst[i];
+ sortKey->ssup_attno = node->sortColIdx[i];
+
+ /*
+ * We don't perform abbreviated key conversion here, for the same
+ * reasons that it isn't used in MergeAppend
+ */
+ sortKey->abbreviate = false;
+
+ PrepareSortSupportFromOrderingOp(node->sortOperators[i], sortKey);
+ }
+ }
+
+ /*
+ * store the tuple descriptor into gather merge state, so we can use it
+ * later while initializing the gather merge slots.
+ */
+ if (!ExecContextForcesOids(&gm_state->ps, &hasoid))
+ hasoid = false;
+ tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid);
+ gm_state->tupDesc = tupDesc;
+
+ return gm_state;
+}
+
+/* ----------------------------------------------------------------
+ * ExecGatherMerge(node)
+ *
+ * Scans the relation via multiple workers and returns
+ * the next qualifying tuple.
+ * ----------------------------------------------------------------
+ */
+TupleTableSlot *
+ExecGatherMerge(GatherMergeState *node)
+{
+ int i;
+ TupleTableSlot *slot;
+ TupleTableSlot *resultSlot;
+ ExprDoneCond isDone;
+ ExprContext *econtext;
+
+ /*
+ * Initialize the parallel context and workers on first execution. We do
+ * this on first execution rather than during node initialization, as it
+ * needs to allocate a large dynamic shared memory segment, so it is
+ * better to do so only if it is really needed.
+ */
+ if (!node->initialized)
+ {
+ EState *estate = node->ps.state;
+ GatherMerge *gm = (GatherMerge *) node->ps.plan;
+
+ /*
+ * Sometimes we might have to run without parallelism; but if parallel
+ * mode is active then we can try to fire up some workers.
+ */
+ if (gm->num_workers > 0 && IsInParallelMode())
+ {
+ ParallelContext *pcxt;
+
+ /* Initialize the workers required to execute the Gather Merge node. */
+ if (!node->pei)
+ node->pei = ExecInitParallelPlan(node->ps.lefttree,
+ estate,
+ gm->num_workers);
+
+ /*
+ * Register backend workers. We might not get as many as we
+ * requested, or indeed any at all.
+ */
+ pcxt = node->pei->pcxt;
+ LaunchParallelWorkers(pcxt);
+ node->nworkers_launched = pcxt->nworkers_launched;
+
+ /* Set up tuple queue readers to read the results. */
+ if (pcxt->nworkers_launched > 0)
+ {
+ node->nreaders = 0;
+ node->reader =
+ palloc(pcxt->nworkers_launched * sizeof(TupleQueueReader *));
+
+ Assert(gm->numCols);
+
+ for (i = 0; i < pcxt->nworkers_launched; ++i)
+ {
+ shm_mq_set_handle(node->pei->tqueue[i],
+ pcxt->worker[i].bgwhandle);
+ node->reader[node->nreaders++] =
+ CreateTupleQueueReader(node->pei->tqueue[i],
+ node->tupDesc);
+ }
+ }
+ else
+ {
+ /* No workers? Then never mind. */
+ ExecShutdownGatherMergeWorkers(node);
+ }
+ }
+
+ /* always allow the leader to participate in gather merge */
+ node->need_to_scan_locally = true;
+ node->initialized = true;
+ }
+
+ /*
+ * Check to see if we're still projecting out tuples from a previous scan
+ * tuple (because there is a function-returning-set in the projection
+ * expressions). If so, try to project another one.
+ */
+ if (node->ps.ps_TupFromTlist)
+ {
+ resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+ if (isDone == ExprMultipleResult)
+ return resultSlot;
+ /* Done with that source tuple... */
+ node->ps.ps_TupFromTlist = false;
+ }
+
+ /*
+ * Reset per-tuple memory context to free any expression evaluation
+ * storage allocated in the previous tuple cycle. Note we can't do this
+ * until we're done projecting.
+ */
+ econtext = node->ps.ps_ExprContext;
+ ResetExprContext(econtext);
+
+ /* Get and return the next tuple, projecting if necessary. */
+ for (;;)
+ {
+ /*
+ * Get next tuple, either from one of our workers, or by running the
+ * plan ourselves.
+ */
+ slot = gather_merge_getnext(node);
+ if (TupIsNull(slot))
+ return NULL;
+
+ /*
+ * form the result tuple using ExecProject(), and return it --- unless
+ * the projection produces an empty set, in which case we must loop
+ * back around for another tuple
+ */
+ econtext->ecxt_outertuple = slot;
+ resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+
+ if (isDone != ExprEndResult)
+ {
+ node->ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
+ return resultSlot;
+ }
+ }
+
+ return slot;
+}
+
+/* ----------------------------------------------------------------
+ * ExecEndGatherMerge
+ *
+ * frees any storage allocated through C routines.
+ * ----------------------------------------------------------------
+ */
+void
+ExecEndGatherMerge(GatherMergeState *node)
+{
+ ExecShutdownGatherMerge(node);
+ ExecFreeExprContext(&node->ps);
+ ExecClearTuple(node->ps.ps_ResultTupleSlot);
+ ExecEndNode(outerPlanState(node));
+}
+
+/* ----------------------------------------------------------------
+ * ExecShutdownGatherMerge
+ *
+ * Destroy the setup for parallel workers including parallel context.
+ * Collect all the stats after workers are stopped, else some work
+ * done by workers won't be accounted for.
+ * ----------------------------------------------------------------
+ */
+void
+ExecShutdownGatherMerge(GatherMergeState *node)
+{
+ ExecShutdownGatherMergeWorkers(node);
+
+ /* Now destroy the parallel context. */
+ if (node->pei != NULL)
+ {
+ ExecParallelCleanup(node->pei);
+ node->pei = NULL;
+ }
+}
+
+/* ----------------------------------------------------------------
+ * ExecReScanGatherMerge
+ *
+ * Re-initialize the workers and rescan the relation via them.
+ * ----------------------------------------------------------------
+ */
+void
+ExecReScanGatherMerge(GatherMergeState *node)
+{
+ /*
+ * Re-initialize the parallel workers to perform a rescan of the relation.
+ * We want to shut down all the workers gracefully so that they can
+ * propagate any error or other information to the master backend before
+ * exiting. The parallel context will be reused for the rescan.
+ */
+ ExecShutdownGatherMergeWorkers(node);
+
+ node->initialized = false;
+
+ if (node->pei)
+ ExecParallelReinitialize(node->pei);
+
+ ExecReScan(node->ps.lefttree);
+}
+
+/* ----------------------------------------------------------------
+ * ExecShutdownGatherMergeWorkers
+ *
+ * Destroy the parallel workers. Collect all the stats after
+ * workers are stopped, else some work done by workers won't be
+ * accounted for.
+ * ----------------------------------------------------------------
+ */
+static void
+ExecShutdownGatherMergeWorkers(GatherMergeState *node)
+{
+ /* Shut down tuple queue readers before shutting down workers. */
+ if (node->reader != NULL)
+ {
+ int i;
+
+ for (i = 0; i < node->nreaders; ++i)
+ if (node->reader[i])
+ DestroyTupleQueueReader(node->reader[i]);
+
+ pfree(node->reader);
+ node->reader = NULL;
+ }
+
+ /* Now shut down the workers. */
+ if (node->pei != NULL)
+ ExecParallelFinish(node->pei);
+}
+
+/*
+ * Initialize gather merge tuple reading.
+ *
+ * Pull at least a single tuple from each worker plus the leader and set up
+ * the heap.
+ */
+static void
+gather_merge_init(GatherMergeState *gm_state)
+{
+ int nreaders = gm_state->nreaders;
+ bool initialize = true;
+ int i;
+
+ /*
+ * Allocate gm_slots: one slot per worker plus one more for the leader.
+ * The last slot is always the leader's. The leader reads tuples by
+ * calling ExecProcNode(), which returns a TupleTableSlot that later
+ * gets assigned directly to its gm_slot, so the leader's slot is simply
+ * initialized to NULL. For the worker slots, the code below calls
+ * ExecInitExtraTupleSlot() to perform the initialization.
+ */
+ gm_state->gm_slots =
+ palloc((gm_state->nreaders + 1) * sizeof(TupleTableSlot *));
+ gm_state->gm_slots[gm_state->nreaders] = NULL;
+
+ /* Initialize the tuple slot and tuple array for each worker */
+ gm_state->gm_tuple_buffers =
+ (GMReaderTupleBuffer *) palloc0(sizeof(GMReaderTupleBuffer) * (gm_state->nreaders + 1));
+ for (i = 0; i < gm_state->nreaders; i++)
+ {
+ /* Allocate the tuple array with MAX_TUPLE_STORE size */
+ gm_state->gm_tuple_buffers[i].tuple =
+ (HeapTuple *) palloc0(sizeof(HeapTuple) * MAX_TUPLE_STORE);
+
+ /* Initialize slot for worker */
+ gm_state->gm_slots[i] = ExecInitExtraTupleSlot(gm_state->ps.state);
+ ExecSetSlotDescriptor(gm_state->gm_slots[i],
+ gm_state->tupDesc);
+ }
+
+ /* Allocate the resources for the sort */
+ gm_state->gm_heap = binaryheap_allocate(gm_state->nreaders + 1, heap_compare_slots, gm_state);
+
+ /*
+ * First, try to read a tuple from each worker (including the leader) in
+ * nowait mode, so that we initialize reading from every participant.
+ * After this, if any active worker is still unable to produce a tuple,
+ * re-read from it, this time in wait mode. For workers that were able
+ * to produce a tuple in the earlier loop and are still active, just try
+ * to fill the tuple array if more tuples are available.
+ */
+reread:
+ for (i = 0; i < nreaders + 1; i++)
+ {
+ if (!gm_state->gm_tuple_buffers[i].done &&
+ (TupIsNull(gm_state->gm_slots[i]) ||
+ gm_state->gm_slots[i]->tts_isempty))
+ {
+ if (gather_merge_readnext(gm_state, i, initialize))
+ {
+ binaryheap_add_unordered(gm_state->gm_heap,
+ Int32GetDatum(i));
+ }
+ }
+ else
+ form_tuple_array(gm_state, i);
+ }
+ initialize = false;
+
+ for (i = 0; i < nreaders; i++)
+ if (!gm_state->gm_tuple_buffers[i].done &&
+ (TupIsNull(gm_state->gm_slots[i]) ||
+ gm_state->gm_slots[i]->tts_isempty))
+ goto reread;
+
+ binaryheap_build(gm_state->gm_heap);
+ gm_state->gm_initialized = true;
+}
+
+/*
+ * Clear out the tuple table slot for each gather merge input and return
+ * one of the cleared slots.
+ */
+static TupleTableSlot *
+gather_merge_clear_slots(GatherMergeState *gm_state)
+{
+ int i;
+
+ for (i = 0; i < gm_state->nreaders; i++)
+ {
+ pfree(gm_state->gm_tuple_buffers[i].tuple);
+ gm_state->gm_slots[i] = ExecClearTuple(gm_state->gm_slots[i]);
+ }
+
+ /* Free tuple array as we don't need it any more */
+ pfree(gm_state->gm_tuple_buffers);
+ /* Free the binaryheap, which was created for sort */
+ binaryheap_free(gm_state->gm_heap);
+
+ /* return any clear slot */
+ return gm_state->gm_slots[0];
+}
+
+/*
+ * Read the next tuple for gather merge.
+ *
+ * Fetch the next tuple in sort order from the heap.
+ */
+static TupleTableSlot *
+gather_merge_getnext(GatherMergeState *gm_state)
+{
+ int i;
+
+ /*
+ * First time through: pull the first tuple from each participant, and
+ * set up the heap.
+ */
+ if (gm_state->gm_initialized == false)
+ gather_merge_init(gm_state);
+ else
+ {
+ /*
+ * Otherwise, pull the next tuple from whichever participant we
+ * returned from last time, and reinsert the index into the heap,
+ * because it might now compare differently against the existing
+ * elements of the heap.
+ */
+ i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+
+ if (gather_merge_readnext(gm_state, i, false))
+ binaryheap_replace_first(gm_state->gm_heap, Int32GetDatum(i));
+ else
+ (void) binaryheap_remove_first(gm_state->gm_heap);
+ }
+
+ if (binaryheap_empty(gm_state->gm_heap))
+ {
+ /* All the queues are exhausted, and so is the heap */
+ return gather_merge_clear_slots(gm_state);
+ }
+ else
+ {
+ i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+ return gm_state->gm_slots[i];
+ }
+
+ return gather_merge_clear_slots(gm_state);
+}
+
+/*
+ * Read tuples for the given reader in nowait mode and fill the tuple array.
+ */
+static void
+form_tuple_array(GatherMergeState *gm_state, int reader)
+{
+ GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+ int i;
+
+ /* The last slot is for the leader; we don't build a tuple array for it */
+ if (reader == gm_state->nreaders)
+ return;
+
+ /*
+ * If all the tuples in the tuple array have already been read, reset
+ * the counters to zero.
+ */
+ if (tuple_buffer->nTuples == tuple_buffer->readCounter)
+ tuple_buffer->nTuples = tuple_buffer->readCounter = 0;
+
+ /* Tuple array is already full? */
+ if (tuple_buffer->nTuples == MAX_TUPLE_STORE)
+ return;
+
+ for (i = tuple_buffer->nTuples; i < MAX_TUPLE_STORE; i++)
+ {
+ tuple_buffer->tuple[i] = heap_copytuple(gm_readnext_tuple(gm_state,
+ reader,
+ false,
+ &tuple_buffer->done));
+ if (!HeapTupleIsValid(tuple_buffer->tuple[i]))
+ break;
+ tuple_buffer->nTuples++;
+ }
+}
+
+/*
+ * Attempt to read a tuple for the given reader and store it in the reader's
+ * tuple slot.
+ *
+ * If the worker's tuple array contains any tuples, just read the next tuple
+ * from the array. Otherwise, read a tuple from the queue and also attempt to
+ * refill the tuple array.
+ *
+ * Gather merge must refill the slot from which it returned the previous
+ * tuple, so that tuple normally has to be read in wait mode. Only during the
+ * initialization phase do we read tuples in nowait mode, since we want to
+ * initialize all the readers. See gather_merge_init() for details.
+ *
+ * Returns true if a tuple was found for the reader, otherwise false.
+ */
+static bool
+gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait)
+{
+ HeapTuple tup = NULL;
+
+ /* Are we here for the leader? */
+ if (gm_state->nreaders == reader)
+ {
+ if (gm_state->need_to_scan_locally)
+ {
+ PlanState *outerPlan = outerPlanState(gm_state);
+ TupleTableSlot *outerTupleSlot;
+
+ outerTupleSlot = ExecProcNode(outerPlan);
+
+ if (!TupIsNull(outerTupleSlot))
+ {
+ gm_state->gm_slots[reader] = outerTupleSlot;
+ return true;
+ }
+ gm_state->gm_tuple_buffers[reader].done = true;
+ gm_state->need_to_scan_locally = false;
+ }
+ return false;
+ }
+ /* Does tuple array have any available tuples? */
+ else if (gm_state->gm_tuple_buffers[reader].nTuples >
+ gm_state->gm_tuple_buffers[reader].readCounter)
+ {
+ GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+
+ tup = tuple_buffer->tuple[tuple_buffer->readCounter++];
+ }
+ /* reader exhausted? */
+ else if (gm_state->gm_tuple_buffers[reader].done)
+ {
+ DestroyTupleQueueReader(gm_state->reader[reader]);
+ gm_state->reader[reader] = NULL;
+ return false;
+ }
+ else
+ {
+ GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+
+ tup = heap_copytuple(gm_readnext_tuple(gm_state,
+ reader,
+ nowait,
+ &tuple_buffer->done));
+
+ /*
+ * Try to read more tuples in nowait mode and store them in the tuple
+ * array.
+ */
+ if (HeapTupleIsValid(tup))
+ form_tuple_array(gm_state, reader);
+ else
+ return false;
+ }
+
+ Assert(HeapTupleIsValid(tup));
+
+ /* Build the TupleTableSlot for the given tuple */
+ ExecStoreTuple(tup, /* tuple to store */
+ gm_state->gm_slots[reader], /* slot in which to store the
+ * tuple */
+ InvalidBuffer, /* buffer associated with this tuple */
+ true); /* pfree this pointer if not from heap */
+
+ return true;
+}
+
+/*
+ * Attempt to read a tuple from the given reader.
+ */
+static HeapTuple
+gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait, bool *done)
+{
+ TupleQueueReader *reader;
+ HeapTuple tup = NULL;
+ MemoryContext oldContext;
+ MemoryContext tupleContext = gm_state->ps.ps_ExprContext->ecxt_per_tuple_memory;
+
+ if (done != NULL)
+ *done = false;
+
+ /* Check for async events, particularly messages from workers. */
+ CHECK_FOR_INTERRUPTS();
+
+ /* Attempt to read a tuple. */
+ reader = gm_state->reader[nreader];
+ /* Run TupleQueueReaders in per-tuple context */
+ oldContext = MemoryContextSwitchTo(tupleContext);
+ tup = TupleQueueReaderNext(reader, nowait, done);
+ MemoryContextSwitchTo(oldContext);
+
+ return tup;
+}
+
+/*
+ * We have one slot for each item in the heap array. We use SlotNumber
+ * to store slot indexes. This doesn't actually provide any formal
+ * type-safety, but it makes the code more self-documenting.
+ */
+typedef int32 SlotNumber;
+
+/*
+ * Compare the tuples in the two given slots.
+ */
+static int32
+heap_compare_slots(Datum a, Datum b, void *arg)
+{
+ GatherMergeState *node = (GatherMergeState *) arg;
+ SlotNumber slot1 = DatumGetInt32(a);
+ SlotNumber slot2 = DatumGetInt32(b);
+
+ TupleTableSlot *s1 = node->gm_slots[slot1];
+ TupleTableSlot *s2 = node->gm_slots[slot2];
+ int nkey;
+
+ Assert(!TupIsNull(s1));
+ Assert(!TupIsNull(s2));
+
+ for (nkey = 0; nkey < node->gm_nkeys; nkey++)
+ {
+ SortSupport sortKey = node->gm_sortkeys + nkey;
+ AttrNumber attno = sortKey->ssup_attno;
+ Datum datum1,
+ datum2;
+ bool isNull1,
+ isNull2;
+ int compare;
+
+ datum1 = slot_getattr(s1, attno, &isNull1);
+ datum2 = slot_getattr(s2, attno, &isNull2);
+
+ compare = ApplySortComparator(datum1, isNull1,
+ datum2, isNull2,
+ sortKey);
+ if (compare != 0)
+ return -compare;
+ }
+ return 0;
+}
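For reviewers skimming the executor file above: gather_merge_getnext()
plus the per-reader tuple buffers implement a classic K-way merge over
pre-sorted streams. Here is a minimal standalone sketch of the same
pattern (illustrative only, not part of the patch), with a linear scan
standing in for lib/binaryheap and fixed integer arrays standing in for
the tuple queues:

    #include <stdio.h>

    #define NSTREAMS 3

    /* Each "stream" is pre-sorted, like one worker's output. */
    static int streams[NSTREAMS][4] = {
        {1, 4, 7, 10}, {2, 5, 8, 11}, {3, 6, 9, 12}
    };
    static int pos[NSTREAMS];   /* next unread index per stream */

    /* Return the next value of stream s, or -1 when it is exhausted. */
    static int
    read_stream(int s)
    {
        return (pos[s] < 4) ? streams[s][pos[s]++] : -1;
    }

    int
    main(void)
    {
        int     cur[NSTREAMS];  /* current head of each stream */
        int     n;

        /* Prime one value per stream, as gather_merge_init() does. */
        for (n = 0; n < NSTREAMS; n++)
            cur[n] = read_stream(n);

        for (;;)
        {
            int     best = -1;

            /* Pick the smallest head; a heap does this in log2(N). */
            for (n = 0; n < NSTREAMS; n++)
                if (cur[n] >= 0 && (best < 0 || cur[n] < cur[best]))
                    best = n;
            if (best < 0)
                break;          /* all streams exhausted */

            printf("%d\n", cur[best]);
            cur[best] = read_stream(best);  /* refill from same stream */
        }
        return 0;
    }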
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 04e49b7..2f52833 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -341,6 +341,31 @@ _copyGather(const Gather *from)
return newnode;
}
+/*
+ * _copyGatherMerge
+ */
+static GatherMerge *
+_copyGatherMerge(const GatherMerge *from)
+{
+ GatherMerge *newnode = makeNode(GatherMerge);
+
+ /*
+ * copy node superclass fields
+ */
+ CopyPlanFields((const Plan *) from, (Plan *) newnode);
+
+ /*
+ * copy remainder of node
+ */
+ COPY_SCALAR_FIELD(num_workers);
+ COPY_SCALAR_FIELD(numCols);
+ COPY_POINTER_FIELD(sortColIdx, from->numCols * sizeof(AttrNumber));
+ COPY_POINTER_FIELD(sortOperators, from->numCols * sizeof(Oid));
+ COPY_POINTER_FIELD(collations, from->numCols * sizeof(Oid));
+ COPY_POINTER_FIELD(nullsFirst, from->numCols * sizeof(bool));
+
+ return newnode;
+}
/*
* CopyScanFields
@@ -4356,6 +4381,9 @@ copyObject(const void *from)
case T_Gather:
retval = _copyGather(from);
break;
+ case T_GatherMerge:
+ retval = _copyGatherMerge(from);
+ break;
case T_SeqScan:
retval = _copySeqScan(from);
break;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 748b687..ac36e48 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -447,6 +447,35 @@ _outGather(StringInfo str, const Gather *node)
}
static void
+_outGatherMerge(StringInfo str, const GatherMerge *node)
+{
+ int i;
+
+ WRITE_NODE_TYPE("GATHERMERGE");
+
+ _outPlanInfo(str, (const Plan *) node);
+
+ WRITE_INT_FIELD(num_workers);
+ WRITE_INT_FIELD(numCols);
+
+ appendStringInfoString(str, " :sortColIdx");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %d", node->sortColIdx[i]);
+
+ appendStringInfoString(str, " :sortOperators");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %u", node->sortOperators[i]);
+
+ appendStringInfoString(str, " :collations");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %u", node->collations[i]);
+
+ appendStringInfoString(str, " :nullsFirst");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %s", booltostr(node->nullsFirst[i]));
+}
+
+static void
_outScan(StringInfo str, const Scan *node)
{
WRITE_NODE_TYPE("SCAN");
@@ -1964,6 +1993,17 @@ _outLimitPath(StringInfo str, const LimitPath *node)
}
static void
+_outGatherMergePath(StringInfo str, const GatherMergePath *node)
+{
+ WRITE_NODE_TYPE("GATHERMERGEPATH");
+
+ _outPathInfo(str, (const Path *) node);
+
+ WRITE_NODE_FIELD(subpath);
+ WRITE_INT_FIELD(num_workers);
+}
+
+static void
_outNestPath(StringInfo str, const NestPath *node)
{
WRITE_NODE_TYPE("NESTPATH");
@@ -3332,6 +3372,9 @@ outNode(StringInfo str, const void *obj)
case T_Gather:
_outGather(str, obj);
break;
+ case T_GatherMerge:
+ _outGatherMerge(str, obj);
+ break;
case T_Scan:
_outScan(str, obj);
break;
@@ -3659,6 +3702,9 @@ outNode(StringInfo str, const void *obj)
case T_LimitPath:
_outLimitPath(str, obj);
break;
+ case T_GatherMergePath:
+ _outGatherMergePath(str, obj);
+ break;
case T_NestPath:
_outNestPath(str, obj);
break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 917e6c8..77a452e 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -2075,6 +2075,26 @@ _readGather(void)
}
/*
+ * _readGatherMerge
+ */
+static GatherMerge *
+_readGatherMerge(void)
+{
+ READ_LOCALS(GatherMerge);
+
+ ReadCommonPlan(&local_node->plan);
+
+ READ_INT_FIELD(num_workers);
+ READ_INT_FIELD(numCols);
+ READ_ATTRNUMBER_ARRAY(sortColIdx, local_node->numCols);
+ READ_OID_ARRAY(sortOperators, local_node->numCols);
+ READ_OID_ARRAY(collations, local_node->numCols);
+ READ_BOOL_ARRAY(nullsFirst, local_node->numCols);
+
+ READ_DONE();
+}
+
+/*
* _readHash
*/
static Hash *
@@ -2477,6 +2497,8 @@ parseNodeString(void)
return_value = _readUnique();
else if (MATCH("GATHER", 6))
return_value = _readGather();
+ else if (MATCH("GATHERMERGE", 11))
+ return_value = _readGatherMerge();
else if (MATCH("HASH", 4))
return_value = _readHash();
else if (MATCH("SETOP", 5))
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index e42895d..e1bb6e2 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -126,6 +126,7 @@ bool enable_nestloop = true;
bool enable_material = true;
bool enable_mergejoin = true;
bool enable_hashjoin = true;
+bool enable_gathermerge = true;
typedef struct
{
@@ -391,6 +392,75 @@ cost_gather(GatherPath *path, PlannerInfo *root,
}
/*
+ * cost_gather_merge
+ * Determines and returns the cost of a gather merge path.
+ *
+ * GatherMerge merges several pre-sorted input streams, using a heap that at
+ * any given instant holds the next tuple from each stream. If there are N
+ * streams, we need about N*log2(N) tuple comparisons to construct the heap
+ * at startup, and then for each output tuple, about log2(N) comparisons to
+ * replace the top heap entry with the next tuple from the same stream.
+ *
+ * The heap is never spilled to disk, since we assume N is not very large.
+ * So this is much simpler than cost_sort.
+ */
+void
+cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+ RelOptInfo *rel, ParamPathInfo *param_info,
+ Cost input_startup_cost, Cost input_total_cost,
+ double *rows)
+{
+ Cost startup_cost = 0;
+ Cost run_cost = 0;
+ Cost comparison_cost;
+ double N;
+ double logN;
+
+
+ /* Mark the path with the correct row estimate */
+ if (rows)
+ path->path.rows = *rows;
+ else if (param_info)
+ path->path.rows = param_info->ppi_rows;
+ else
+ path->path.rows = rel->rows;
+
+ if (!enable_gathermerge)
+ startup_cost += disable_cost;
+
+ /*
+ * Include the leader, as it always participates in the gather merge
+ * scan; also clamp N to at least two to avoid log(0)...
+ */
+ N = (path->num_workers < 2) ? 2.0 : (double) path->num_workers + 1;
+ logN = LOG2(N);
+
+ /* Assumed cost per tuple comparison */
+ comparison_cost = 2.0 * cpu_operator_cost;
+
+ /* Heap creation cost */
+ startup_cost += comparison_cost * N * logN;
+
+ /* Per-tuple heap maintenance cost */
+ run_cost += path->path.rows * comparison_cost * logN;
+
+ /* small cost for heap management, like cost_merge_append */
+ run_cost += cpu_operator_cost * path->path.rows;
+
+ /*
+ * Parallel setup and communication cost. For Gather Merge, tuples must
+ * be read from each worker in wait mode, so account for a per-tuple
+ * communication cost as well.
+ */
+ startup_cost += parallel_setup_cost;
+ run_cost += parallel_tuple_cost * path->path.rows;
+
+ path->path.startup_cost = startup_cost + input_startup_cost;
+ path->path.total_cost = (startup_cost + run_cost + input_total_cost);
+}
+
+/*
* cost_index
* Determines and returns the cost of scanning a relation using an index.
*
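As a quick sanity check of the heap-related terms above, here is a tiny
standalone program reproducing them for 4 workers (N = 5) and 200000
output rows, assuming the default cpu_operator_cost of 0.0025;
parallel_setup_cost and parallel_tuple_cost are charged separately and
omitted here:

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
        double  cpu_operator_cost = 0.0025; /* assumed default */
        double  rows = 200000.0;
        double  N = 5.0;                    /* 4 workers + leader */
        double  logN = log2(N);             /* ~2.32 */
        double  comparison_cost = 2.0 * cpu_operator_cost;
        double  startup = comparison_cost * N * logN;
        double  run = rows * (comparison_cost * logN   /* sift-down */
                              + cpu_operator_cost);    /* heap mgmt */

        /* prints roughly: startup=0.0580 run=2821.93 */
        printf("startup=%.4f run=%.2f\n", startup, run);
        return 0;
    }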
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index ad49674..5fdc1bd 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -270,6 +270,10 @@ static ModifyTable *make_modifytable(PlannerInfo *root,
List *resultRelations, List *subplans,
List *withCheckOptionLists, List *returningLists,
List *rowMarks, OnConflictExpr *onconflict, int epqParam);
+static GatherMerge *create_gather_merge_plan(PlannerInfo *root,
+ GatherMergePath *best_path);
+static GatherMerge *make_gather_merge(List *qptlist, List *qpqual,
+ int nworkers, Plan *subplan);
/*
@@ -463,6 +467,10 @@ create_plan_recurse(PlannerInfo *root, Path *best_path, int flags)
(LimitPath *) best_path,
flags);
break;
+ case T_GatherMerge:
+ plan = (Plan *) create_gather_merge_plan(root,
+ (GatherMergePath *) best_path);
+ break;
default:
elog(ERROR, "unrecognized node type: %d",
(int) best_path->pathtype);
@@ -2246,6 +2254,89 @@ create_limit_plan(PlannerInfo *root, LimitPath *best_path, int flags)
return plan;
}
+/*
+ * create_gather_merge_plan
+ *
+ * Create a Gather Merge plan for 'best_path' and (recursively)
+ * plans for its subpaths.
+ */
+static GatherMerge *
+create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
+{
+ GatherMerge *gm_plan;
+ Plan *subplan;
+ List *pathkeys = best_path->path.pathkeys;
+ int numsortkeys;
+ AttrNumber *sortColIdx;
+ Oid *sortOperators;
+ Oid *collations;
+ bool *nullsFirst;
+
+ subplan = create_plan_recurse(root, best_path->subpath, CP_EXACT_TLIST);
+
+ gm_plan = make_gather_merge(subplan->targetlist,
+ NIL,
+ best_path->num_workers,
+ subplan);
+
+ copy_generic_path_info(&gm_plan->plan, &best_path->path);
+
+ if (pathkeys)
+ {
+ /* Compute sort column info, and adjust GatherMerge tlist as needed */
+ (void) prepare_sort_from_pathkeys(&gm_plan->plan, pathkeys,
+ best_path->path.parent->relids,
+ NULL,
+ true,
+ &gm_plan->numCols,
+ &gm_plan->sortColIdx,
+ &gm_plan->sortOperators,
+ &gm_plan->collations,
+ &gm_plan->nullsFirst);
+
+
+ /* Compute sort column info, and adjust subplan's tlist as needed */
+ subplan = prepare_sort_from_pathkeys(subplan, pathkeys,
+ best_path->subpath->parent->relids,
+ gm_plan->sortColIdx,
+ false,
+ &numsortkeys,
+ &sortColIdx,
+ &sortOperators,
+ &collations,
+ &nullsFirst);
+
+ /*
+ * Check that we got the same sort key information. We just Assert
+ * that the sortops match, since those depend only on the pathkeys;
+ * but it seems like a good idea to check the sort column numbers
+ * explicitly, to ensure the tlists really do match up.
+ */
+ Assert(numsortkeys == gm_plan->numCols);
+ if (memcmp(sortColIdx, gm_plan->sortColIdx,
+ numsortkeys * sizeof(AttrNumber)) != 0)
+ elog(ERROR, "GatherMerge child's targetlist doesn't match GatherMerge");
+ Assert(memcmp(sortOperators, gm_plan->sortOperators,
+ numsortkeys * sizeof(Oid)) == 0);
+ Assert(memcmp(collations, gm_plan->collations,
+ numsortkeys * sizeof(Oid)) == 0);
+ Assert(memcmp(nullsFirst, gm_plan->nullsFirst,
+ numsortkeys * sizeof(bool)) == 0);
+
+ /* Now, insert a Sort node if subplan isn't sufficiently ordered */
+ if (!pathkeys_contained_in(pathkeys, best_path->subpath->pathkeys))
+ subplan = (Plan *) make_sort(subplan, numsortkeys,
+ sortColIdx, sortOperators,
+ collations, nullsFirst);
+
+ gm_plan->plan.lefttree = subplan;
+ }
+
+ /* use parallel mode for parallel plans. */
+ root->glob->parallelModeNeeded = true;
+
+ return gm_plan;
+}
/*****************************************************************************
*
@@ -5909,6 +6000,25 @@ make_gather(List *qptlist,
return node;
}
+static GatherMerge *
+make_gather_merge(List *qptlist,
+ List *qpqual,
+ int nworkers,
+ Plan *subplan)
+{
+ GatherMerge *node = makeNode(GatherMerge);
+ Plan *plan = &node->plan;
+
+ /* cost should be inserted by caller */
+ plan->targetlist = qptlist;
+ plan->qual = qpqual;
+ plan->lefttree = subplan;
+ plan->righttree = NULL;
+ node->num_workers = nworkers;
+
+ return node;
+}
+
/*
* distinctList is a list of SortGroupClauses, identifying the targetlist
* items that should be considered by the SetOp filter. The input path must
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index d8c5dd3..9628479 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -3725,14 +3725,60 @@ create_grouping_paths(PlannerInfo *root,
/*
* Now generate a complete GroupAgg Path atop of the cheapest partial
- * path. We need only bother with the cheapest path here, as the
- * output of Gather is never sorted.
+ * path. We generate a Gather path based on the cheapest partial path,
+ * and a GatherMerge path for each partial path that is properly sorted.
*/
if (grouped_rel->partial_pathlist)
{
Path *path = (Path *) linitial(grouped_rel->partial_pathlist);
double total_groups = path->rows * path->parallel_workers;
+ /*
+ * The output of GatherMerge is always sorted, so if there is a GROUP BY
+ * clause, try to generate a GatherMerge path for each properly-sorted
+ * partial path.
+ */
+ if (parse->groupClause)
+ {
+ foreach(lc, grouped_rel->partial_pathlist)
+ {
+ Path *gmpath = (Path *) lfirst(lc);
+
+ if (!pathkeys_contained_in(root->group_pathkeys, gmpath->pathkeys))
+ continue;
+
+ /* create gather merge path */
+ gmpath = (Path *) create_gather_merge_path(root,
+ grouped_rel,
+ gmpath,
+ NULL,
+ root->group_pathkeys,
+ NULL,
+ &total_groups);
+
+ if (parse->hasAggs)
+ add_path(grouped_rel, (Path *)
+ create_agg_path(root,
+ grouped_rel,
+ gmpath,
+ target,
+ parse->groupClause ? AGG_SORTED : AGG_PLAIN,
+ AGGSPLIT_FINAL_DESERIAL,
+ parse->groupClause,
+ (List *) parse->havingQual,
+ &agg_final_costs,
+ dNumGroups));
+ else
+ add_path(grouped_rel, (Path *)
+ create_group_path(root,
+ grouped_rel,
+ gmpath,
+ target,
+ parse->groupClause,
+ (List *) parse->havingQual,
+ dNumGroups));
+ }
+ }
+
path = (Path *) create_gather_path(root,
grouped_rel,
path,
@@ -3870,6 +3916,12 @@ create_grouping_paths(PlannerInfo *root,
/* Now choose the best path(s) */
set_cheapest(grouped_rel);
+ /*
+ * The partial pathlist generated for the grouped relation is of no
+ * further use, so just reset it to NIL.
+ */
+ grouped_rel->partial_pathlist = NIL;
+
return grouped_rel;
}
@@ -4166,6 +4218,38 @@ create_distinct_paths(PlannerInfo *root,
}
}
+ /*
+ * Generate a GatherMerge path for each partial path.
+ */
+ foreach(lc, input_rel->partial_pathlist)
+ {
+ Path *path = (Path *) lfirst(lc);
+ double total_groups = path->rows * path->parallel_workers;
+
+ if (!pathkeys_contained_in(needed_pathkeys, path->pathkeys))
+ {
+ path = (Path *) create_sort_path(root, distinct_rel,
+ path,
+ needed_pathkeys,
+ -1.0);
+ }
+
+ /* create gather merge path */
+ path = (Path *) create_gather_merge_path(root,
+ distinct_rel,
+ path,
+ NULL,
+ needed_pathkeys,
+ NULL,
+ &total_groups);
+ add_path(distinct_rel, (Path *)
+ create_upper_unique_path(root,
+ distinct_rel,
+ path,
+ list_length(root->distinct_pathkeys),
+ numDistinctRows));
+ }
+
/* For explicit-sort case, always use the more rigorous clause */
if (list_length(root->distinct_pathkeys) <
list_length(root->sort_pathkeys))
@@ -4310,6 +4394,41 @@ create_ordered_paths(PlannerInfo *root,
ordered_rel->useridiscurrent = input_rel->useridiscurrent;
ordered_rel->fdwroutine = input_rel->fdwroutine;
+ foreach(lc, input_rel->partial_pathlist)
+ {
+ Path *path = (Path *) lfirst(lc);
+ bool is_sorted;
+ double total_rows = path->rows * path->parallel_workers;
+
+ is_sorted = pathkeys_contained_in(root->sort_pathkeys,
+ path->pathkeys);
+ if (!is_sorted)
+ {
+ /* An explicit sort here can take advantage of LIMIT */
+ path = (Path *) create_sort_path(root,
+ ordered_rel,
+ path,
+ root->sort_pathkeys,
+ limit_tuples);
+ }
+
+ /* create gather merge path */
+ path = (Path *) create_gather_merge_path(root,
+ ordered_rel,
+ path,
+ target,
+ root->sort_pathkeys,
+ NULL,
+ &total_rows);
+
+ /* Add projection step if needed */
+ if (path->pathtarget != target)
+ path = apply_projection_to_path(root, ordered_rel,
+ path, target);
+
+ add_path(ordered_rel, path);
+ }
+
foreach(lc, input_rel->pathlist)
{
Path *path = (Path *) lfirst(lc);
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index d91bc3b..e9d6279 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -605,6 +605,7 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
break;
case T_Gather:
+ case T_GatherMerge:
set_upper_references(root, plan, rtoffset);
break;
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 263ba45..760f519 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2682,6 +2682,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
case T_Sort:
case T_Unique:
case T_Gather:
+ case T_GatherMerge:
case T_SetOp:
case T_Group:
break;
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index abb7507..07e1532 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1630,6 +1630,66 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
}
/*
+ * create_gather_merge_path
+ *
+ * Creates a path corresponding to a gather merge scan, returning
+ * the pathnode.
+ */
+GatherMergePath *
+create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
+ PathTarget *target, List *pathkeys,
+ Relids required_outer, double *rows)
+{
+ GatherMergePath *pathnode = makeNode(GatherMergePath);
+ Cost input_startup_cost = 0;
+ Cost input_total_cost = 0;
+
+ Assert(subpath->parallel_safe);
+ Assert(pathkeys);
+
+ pathnode->path.pathtype = T_GatherMerge;
+ pathnode->path.parent = rel;
+ pathnode->path.param_info = get_baserel_parampathinfo(root, rel,
+ required_outer);
+ pathnode->path.parallel_aware = false;
+
+ pathnode->subpath = subpath;
+ pathnode->num_workers = subpath->parallel_workers;
+ pathnode->path.pathkeys = pathkeys;
+ pathnode->path.pathtarget = target ? target : rel->reltarget;
+ pathnode->path.rows += subpath->rows;
+
+ if (pathkeys_contained_in(pathkeys, subpath->pathkeys))
+ {
+ /* Subpath is adequately ordered, we won't need to sort it */
+ input_startup_cost += subpath->startup_cost;
+ input_total_cost += subpath->total_cost;
+ }
+ else
+ {
+ /* We'll need to insert a Sort node, so include cost for that */
+ Path sort_path; /* dummy for result of cost_sort */
+
+ cost_sort(&sort_path,
+ root,
+ pathkeys,
+ subpath->total_cost,
+ subpath->rows,
+ subpath->pathtarget->width,
+ 0.0,
+ work_mem,
+ -1);
+ input_startup_cost += sort_path.startup_cost;
+ input_total_cost += sort_path.total_cost;
+ }
+
+ cost_gather_merge(pathnode, root, rel, pathnode->path.param_info,
+ input_startup_cost, input_total_cost, rows);
+
+ return pathnode;
+}
+
+/*
* translate_sub_tlist - get subquery column numbers represented by tlist
*
* The given targetlist usually contains only Vars referencing the given relid.
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 3c695c1..4e9390e 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -894,6 +894,15 @@ static struct config_bool ConfigureNamesBool[] =
true,
NULL, NULL, NULL
},
+ {
+ {"enable_gathermerge", PGC_USERSET, QUERY_TUNING_METHOD,
+ gettext_noop("Enables the planner's use of gather merge plans."),
+ NULL
+ },
+ &enable_gathermerge,
+ true,
+ NULL, NULL, NULL
+ },
{
{"geqo", PGC_USERSET, QUERY_TUNING_GEQO,
diff --git a/src/include/executor/nodeGatherMerge.h b/src/include/executor/nodeGatherMerge.h
new file mode 100644
index 0000000..bf992cd
--- /dev/null
+++ b/src/include/executor/nodeGatherMerge.h
@@ -0,0 +1,27 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.h
+ * prototypes for nodeGatherMerge.c
+ *
+ *
+ * Portions Copyright (c) 1996-2016, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/executor/nodeGatherMerge.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef NODEGATHERMERGE_H
+#define NODEGATHERMERGE_H
+
+#include "nodes/execnodes.h"
+
+extern GatherMergeState *ExecInitGatherMerge(GatherMerge *node,
+ EState *estate,
+ int eflags);
+extern TupleTableSlot *ExecGatherMerge(GatherMergeState *node);
+extern void ExecEndGatherMerge(GatherMergeState *node);
+extern void ExecReScanGatherMerge(GatherMergeState *node);
+extern void ExecShutdownGatherMerge(GatherMergeState *node);
+
+#endif /* NODEGATHERMERGE_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index f6f73f3..0c12e27 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -1969,6 +1969,33 @@ typedef struct GatherState
} GatherState;
/* ----------------
+ * GatherMergeState information
+ *
+ * Gather merge nodes launch 1 or more parallel workers, run a
+ * sorted subplan in those workers, and merge the sorted results so
+ * that the overall sort order is preserved.
+ * ----------------
+ */
+struct GMReaderTupleBuffer;
+
+typedef struct GatherMergeState
+{
+ PlanState ps; /* its first field is NodeTag */
+ bool initialized;
+ struct ParallelExecutorInfo *pei;
+ int nreaders;
+ int nworkers_launched;
+ struct TupleQueueReader **reader;
+ TupleDesc tupDesc;
+ TupleTableSlot **gm_slots;
+ struct binaryheap *gm_heap; /* binary heap of slot indices */
+ bool gm_initialized; /* gather merge initialized? */
+ bool need_to_scan_locally;
+ int gm_nkeys;
+ SortSupport gm_sortkeys; /* array of length gm_nkeys */
+ struct GMReaderTupleBuffer *gm_tuple_buffers; /* tuple buffer per reader */
+} GatherMergeState;
+
+/* ----------------
* HashState information
* ----------------
*/
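For orientation while reading the struct above, an informal sketch of how
the fields relate at runtime, assuming three launched workers (nreaders =
3) plus the leader:

    /*
     * gm_slots[0..2]         - one TupleTableSlot per worker queue
     * gm_slots[3]            - leader's slot, filled via ExecProcNode()
     * gm_tuple_buffers[0..2] - up to MAX_TUPLE_STORE buffered tuples per
     *                          worker (the leader is never buffered)
     * gm_heap                - binary heap over slot indices 0..3,
     *                          ordered by gm_sortkeys comparisons
     */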
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index cb9307c..7edb114 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -75,6 +75,7 @@ typedef enum NodeTag
T_WindowAgg,
T_Unique,
T_Gather,
+ T_GatherMerge,
T_Hash,
T_SetOp,
T_LockRows,
@@ -123,6 +124,7 @@ typedef enum NodeTag
T_WindowAggState,
T_UniqueState,
T_GatherState,
+ T_GatherMergeState,
T_HashState,
T_SetOpState,
T_LockRowsState,
@@ -244,6 +246,7 @@ typedef enum NodeTag
T_MaterialPath,
T_UniquePath,
T_GatherPath,
+ T_GatherMergePath,
T_ProjectionPath,
T_SortPath,
T_GroupPath,
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index e2fbc7d..ec319bf 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -765,6 +765,22 @@ typedef struct Gather
bool invisible; /* suppress EXPLAIN display (for testing)? */
} Gather;
+/* ------------
+ * gather merge node
+ * ------------
+ */
+typedef struct GatherMerge
+{
+ Plan plan;
+ int num_workers;
+ /* remaining fields are just like the sort-key info in struct Sort */
+ int numCols; /* number of sort-key columns */
+ AttrNumber *sortColIdx; /* their indexes in the target list */
+ Oid *sortOperators; /* OIDs of operators to sort them by */
+ Oid *collations; /* OIDs of collations */
+ bool *nullsFirst; /* NULLS FIRST/LAST directions */
+} GatherMerge;
+
/* ----------------
* hash build node
*
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index 3a1255a..e9795f9 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -1194,6 +1194,19 @@ typedef struct GatherPath
} GatherPath;
/*
+ * GatherMergePath runs several copies of a plan in parallel and merges
+ * their sorted results. For gather merge, the parallel leader always
+ * executes the plan as well.
+ */
+typedef struct GatherMergePath
+{
+ Path path;
+ Path *subpath; /* path for each worker */
+ int num_workers; /* number of workers sought to help */
+} GatherMergePath;
+
+
+/*
* All join-type paths share these fields.
*/
diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h
index 2a4df2f..e986896 100644
--- a/src/include/optimizer/cost.h
+++ b/src/include/optimizer/cost.h
@@ -66,6 +66,7 @@ extern bool enable_nestloop;
extern bool enable_material;
extern bool enable_mergejoin;
extern bool enable_hashjoin;
+extern bool enable_gathermerge;
extern int constraint_exclusion;
extern double clamp_row_est(double nrows);
@@ -198,5 +199,9 @@ extern Selectivity clause_selectivity(PlannerInfo *root,
int varRelid,
JoinType jointype,
SpecialJoinInfo *sjinfo);
+extern void cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+ RelOptInfo *rel, ParamPathInfo *param_info,
+ Cost input_startup_cost, Cost input_total_cost,
+ double *rows);
#endif /* COST_H */
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index 71d9154..1df5861 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -267,5 +267,11 @@ extern ParamPathInfo *get_joinrel_parampathinfo(PlannerInfo *root,
List **restrict_clauses);
extern ParamPathInfo *get_appendrel_parampathinfo(RelOptInfo *appendrel,
Relids required_outer);
+extern GatherMergePath *create_gather_merge_path(PlannerInfo *root,
+ RelOptInfo *rel, Path *subpath,
+ PathTarget *target,
+ List *pathkeys,
+ Relids required_outer,
+ double *rows);
#endif /* PATHNODE_H */
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
index f06cfa4..5c547e2 100644
--- a/src/test/regress/expected/rangefuncs.out
+++ b/src/test/regress/expected/rangefuncs.out
@@ -2,6 +2,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
name | setting
----------------------+---------
enable_bitmapscan | on
+ enable_gathermerge | on
enable_hashagg | on
enable_hashjoin | on
enable_indexonlyscan | on
@@ -12,7 +13,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
enable_seqscan | on
enable_sort | on
enable_tidscan | on
-(11 rows)
+(12 rows)
CREATE TABLE foo2(fooid int, f2 int);
INSERT INTO foo2 VALUES(1, 11);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 6c6d519..a6c4a5f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -770,6 +770,8 @@ GV
Gather
+GatherMerge
+GatherMergeState
GatherPath
GatherState
Gene
GenericCosts
GenericExprState
Attachment: gm_v4_plus_rows_estimate.patch
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 9628479..93d9ed2 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -3752,8 +3752,7 @@ create_grouping_paths(PlannerInfo *root,
gmpath,
NULL,
root->group_pathkeys,
- NULL,
- &total_groups);
+ NULL);
if (parse->hasAggs)
add_path(grouped_rel, (Path *)
@@ -4224,7 +4223,6 @@ create_distinct_paths(PlannerInfo *root,
foreach(lc, input_rel->partial_pathlist)
{
Path *path = (Path *) lfirst(lc);
- double total_groups = path->rows * path->parallel_workers;
if (!pathkeys_contained_in(needed_pathkeys, path->pathkeys))
{
@@ -4240,8 +4238,7 @@ create_distinct_paths(PlannerInfo *root,
path,
NULL,
needed_pathkeys,
- NULL,
- &total_groups);
+ NULL);
add_path(distinct_rel, (Path *)
create_upper_unique_path(root,
distinct_rel,
@@ -4398,7 +4395,6 @@ create_ordered_paths(PlannerInfo *root,
{
Path *path = (Path *) lfirst(lc);
bool is_sorted;
- double total_rows = path->rows * path->parallel_workers;
is_sorted = pathkeys_contained_in(root->sort_pathkeys,
path->pathkeys);
@@ -4418,8 +4414,7 @@ create_ordered_paths(PlannerInfo *root,
path,
target,
root->sort_pathkeys,
- NULL,
- &total_rows);
+ NULL);
/* Add projection step if needed */
if (path->pathtarget != target)
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index 07e1532..eb40e24 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1638,11 +1638,14 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
GatherMergePath *
create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
PathTarget *target, List *pathkeys,
- Relids required_outer, double *rows)
+ Relids required_outer)
{
GatherMergePath *pathnode = makeNode(GatherMergePath);
Cost input_startup_cost = 0;
Cost input_total_cost = 0;
+ double total_rows;
+ double parallel_divisor = subpath->parallel_workers;
+ double leader_contribution;
Assert(subpath->parallel_safe);
Assert(pathkeys);
@@ -1657,7 +1660,16 @@ create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
pathnode->num_workers = subpath->parallel_workers;
pathnode->path.pathkeys = pathkeys;
pathnode->path.pathtarget = target ? target : rel->reltarget;
- pathnode->path.rows += subpath->rows;
+
+ /*
+ * Calculate total_rows for gather merge by considering the leader's
+ * contribution to the execution. This is similar to how cost_seqscan
+ * estimates the rows for a partial path.
+ */
+ leader_contribution = 1.0 - (0.3 * subpath->parallel_workers);
+ if (leader_contribution > 0)
+ parallel_divisor += leader_contribution;
+ total_rows = clamp_row_est(subpath->rows * parallel_divisor);
if (pathkeys_contained_in(pathkeys, subpath->pathkeys))
{
@@ -1684,7 +1696,7 @@ create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
}
cost_gather_merge(pathnode, root, rel, pathnode->path.param_info,
- input_startup_cost, input_total_cost, rows);
+ input_startup_cost, input_total_cost, &total_rows);
return pathnode;
}
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index 1df5861..3dbe9fc 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -271,7 +271,6 @@ extern GatherMergePath *create_gather_merge_path(PlannerInfo *root,
RelOptInfo *rel, Path *subpath,
PathTarget *target,
List *pathkeys,
- Relids required_outer,
- double *rows);
+ Relids required_outer);
#endif /* PATHNODE_H */
On Wed, Nov 16, 2016 at 3:10 PM, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:

On Mon, Nov 14, 2016 at 3:51 PM, Thomas Munro <thomas.munro@enterprisedb.com> wrote:

On Sat, Nov 12, 2016 at 1:56 AM, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:

On Fri, Nov 4, 2016 at 8:30 AM, Thomas Munro <thomas.munro@enterprisedb.com> wrote:
    + * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
    + * Portions Copyright (c) 1994, Regents of the University of California

Shouldn't this say just "(c) 2016, PostgreSQL Global Development Group"?

Fixed.

The year also needs updating to 2016 in nodeGatherMerge.h.

Oops, sorry; fixed now.
    +	/* Per-tuple heap maintenance cost */
    +	run_cost += path->path.rows * comparison_cost * 2.0 * logN;

Why multiply by two? The comment above this code says "about log2(N)
comparisons to delete the top heap entry and another log2(N)
comparisons to insert its successor". In fact gather_merge_getnext
calls binaryheap_replace_first, which replaces the top element without
any comparisons at all and then performs a sift-down in log2(N)
comparisons to find its new position. There is no per-tuple "delete"
involved. We "replace" the top element with the value it already had,
just to trigger the sift-down, because we know that our comparator
function might have a new opinion of the sort order of this element.
Very clever!

The comment and the 2.0 factor in cost_gather_merge seem to be wrong
though -- or am I misreading the code? See cost_merge_append. That just
got tweaked in commit 34ca0905.

Fixed.
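For anyone following along, the trick being described is that the top
heap entry is updated in place and a single sift-down restores heap
order in about log2(N) comparisons, with no separate delete plus insert.
A minimal min-heap sketch over ints (not PostgreSQL's lib/binaryheap,
just the shape of the operation):

    /* Restore the min-heap property downward from position i. */
    static void
    sift_down(int *heap, int nheap, int i)
    {
        for (;;)
        {
            int     l = 2 * i + 1;
            int     r = l + 1;
            int     smallest = i;
            int     tmp;

            if (l < nheap && heap[l] < heap[smallest])
                smallest = l;
            if (r < nheap && heap[r] < heap[smallest])
                smallest = r;
            if (smallest == i)
                break;
            tmp = heap[i];
            heap[i] = heap[smallest];
            heap[smallest] = tmp;
            i = smallest;
        }
    }

    /*
     * "Replace first": overwrite the top with the stream's next value
     * and sift down, e.g. heap[0] = next_value; sift_down(heap, n, 0);
     */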
Looking at the plan, I realized that this is happening because of wrong
costing for Gather Merge. Here in the plan we can see that the row count
estimated by Gather Merge is wrong. This is because the earlier patch
had GM considering rows = subpath->rows, which is not true, as the
subpath is a partial path; we need to multiply it by the number of
workers. The attached patch also fixes this issue. I also ran the TPC-H
benchmark with the patch and the results are the same as earlier.

In create_grouping_paths:

    +		double	total_groups = gmpath->rows * gmpath->parallel_workers;

This hides a variable of the same name in the enclosing scope. Maybe
confusing?

In some other places like create_ordered_paths:

    +		double	total_groups = path->rows * path->parallel_workers;

Though it probably made sense to use this variable name in
create_grouping_paths, wouldn't total_rows be better here?

Initially I just copied it from the other places. I agree with you that
in create_ordered_paths total_rows makes more sense.

It feels weird to be working back to a total row count estimate from
the partial one by simply multiplying by path->parallel_workers.
Gather Merge will underestimate the total rows when parallel_workers <
4, if using partial row estimates ultimately from cost_seqscan which
assume some leader contribution. I don't have a better idea though.
Reversing cost_seqscan's logic certainly doesn't seem right. I don't
know how to make them agree on the leader's contribution AND give
principled answers, since there seems to be some kind of cyclic
dependency in the costing logic (cost_seqscan really needs to be given
a leader contribution estimate from its superpath which knows whether
it will allow the leader to pull tuples greedily/fairly or not, but
that superpath hasn't been created yet; cost_gather_merge needs the
row count from its subpath). Or maybe I'm just confused.

Yes, I agree with you. But we can't really make changes to cost_seqscan.
Another option I can think of is to calculate the rows for gather merge
using the reverse of the formula used in cost_seqscan. That way we can
completely remove the rows argument from create_gather_merge_path(), and
then inside create_gather_merge_path() calculate total_rows using the
same formula that cost_seqscan uses. This is working fine, but I'm not
quite sure about the approach, so I attached that part of the changes as
a separate patch. Any suggestions?
After an offline discussion with Thomas, I realized that this won't
work: it holds only if the subplan is a seqscan, so this logic won't be
enough to estimate the rows in general. I guess, as Thomas said earlier,
this is not a problem with the GatherMerge implementation as such, so we
will keep it separate.
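For concreteness, the separate gm_v4_plus_rows_estimate patch mirrors
cost_seqscan's 0.3-per-worker leader fudge factor in reverse. With 2
planned workers and a partial-path estimate of 10000 rows per
participant (numbers purely illustrative):

    leader_contribution = 1.0 - 0.3 * 2 = 0.4
    parallel_divisor    = 2 + 0.4       = 2.4
    total_rows          = clamp_row_est(10000 * 2.4) = 24000

With 4 or more workers the leader term goes nonpositive and is dropped,
so the divisor is just the worker count, which is exactly why this
reverse calculation only agrees with subplans costed like cost_seqscan.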
Apart from this, my colleague Neha Sharma reported a server crash with
the patch. It was hitting the following Assert in
create_gather_merge_path():

    Assert(pathkeys);

Basically, for a query like "select * from foo where a = 1 order by a;",
the query has a sort clause, but the planner won't generate a sort key
because the WHERE equality clause is on the same column. The fix is to
make sure pathkeys are present before calling
create_gather_merge_path().

PFA the latest patch with the fix as well as a few cosmetic changes.
--
Rushabh Lathia
www.EnterpriseDB.com
--
Rushabh Lathia
Attachments:
gather_merge_v5.patch
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 0a669d9..73cfe28 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -881,6 +881,9 @@ ExplainNode(PlanState *planstate, List *ancestors,
case T_Gather:
pname = sname = "Gather";
break;
+ case T_GatherMerge:
+ pname = sname = "Gather Merge";
+ break;
case T_IndexScan:
pname = sname = "Index Scan";
break;
@@ -1370,6 +1373,26 @@ ExplainNode(PlanState *planstate, List *ancestors,
ExplainPropertyBool("Single Copy", gather->single_copy, es);
}
break;
+ case T_GatherMerge:
+ {
+ GatherMerge *gm = (GatherMerge *) plan;
+
+ show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ if (plan->qual)
+ show_instrumentation_count("Rows Removed by Filter", 1,
+ planstate, es);
+ ExplainPropertyInteger("Workers Planned",
+ gm->num_workers, es);
+ if (es->analyze)
+ {
+ int nworkers;
+
+ nworkers = ((GatherMergeState *) planstate)->nworkers_launched;
+ ExplainPropertyInteger("Workers Launched",
+ nworkers, es);
+ }
+ }
+ break;
case T_FunctionScan:
if (es->verbose)
{
diff --git a/src/backend/executor/Makefile b/src/backend/executor/Makefile
index 51edd4c..7e2f4e2 100644
--- a/src/backend/executor/Makefile
+++ b/src/backend/executor/Makefile
@@ -19,7 +19,7 @@ OBJS = execAmi.o execCurrent.o execGrouping.o execIndexing.o execJunk.o \
nodeBitmapAnd.o nodeBitmapOr.o \
nodeBitmapHeapscan.o nodeBitmapIndexscan.o nodeCustom.o nodeGather.o \
nodeHash.o nodeHashjoin.o nodeIndexscan.o nodeIndexonlyscan.o \
- nodeLimit.o nodeLockRows.o \
+ nodeLimit.o nodeLockRows.o nodeGatherMerge.o \
nodeMaterial.o nodeMergeAppend.o nodeMergejoin.o nodeModifyTable.o \
nodeNestloop.o nodeFunctionscan.o nodeRecursiveunion.o nodeResult.o \
nodeSamplescan.o nodeSeqscan.o nodeSetOp.o nodeSort.o nodeUnique.o \
diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index 554244f..45b36af 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -101,6 +101,7 @@
#include "executor/nodeModifyTable.h"
#include "executor/nodeNestloop.h"
#include "executor/nodeGather.h"
+#include "executor/nodeGatherMerge.h"
#include "executor/nodeRecursiveunion.h"
#include "executor/nodeResult.h"
#include "executor/nodeSamplescan.h"
@@ -314,6 +315,11 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
estate, eflags);
break;
+ case T_GatherMerge:
+ result = (PlanState *) ExecInitGatherMerge((GatherMerge *) node,
+ estate, eflags);
+ break;
+
case T_Hash:
result = (PlanState *) ExecInitHash((Hash *) node,
estate, eflags);
@@ -515,6 +521,10 @@ ExecProcNode(PlanState *node)
result = ExecGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ result = ExecGatherMerge((GatherMergeState *) node);
+ break;
+
case T_HashState:
result = ExecHash((HashState *) node);
break;
@@ -673,6 +683,10 @@ ExecEndNode(PlanState *node)
ExecEndGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ ExecEndGatherMerge((GatherMergeState *) node);
+ break;
+
case T_IndexScanState:
ExecEndIndexScan((IndexScanState *) node);
break;
@@ -806,6 +820,9 @@ ExecShutdownNode(PlanState *node)
case T_GatherState:
ExecShutdownGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ ExecShutdownGatherMerge((GatherMergeState *) node);
+ break;
default:
break;
}
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
new file mode 100644
index 0000000..7e77fc2
--- /dev/null
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -0,0 +1,723 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.c
+ * routines to handle GatherMerge nodes.
+ *
+ * Portions Copyright (c) 1996-2016, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/backend/executor/nodeGatherMerge.c
+ *
+ *-------------------------------------------------------------------------
+ */
+/* INTERFACE ROUTINES
+ * ExecInitGatherMerge - initialize the GatherMerge node
+ * ExecGatherMerge - retrieve the next tuple from the node
+ * ExecEndGatherMerge - shut down the GatherMerge node
+ * ExecReScanGatherMerge - rescan the GatherMerge node
+ */
+
+#include "postgres.h"
+
+#include "access/relscan.h"
+#include "access/xact.h"
+#include "executor/execdebug.h"
+#include "executor/execParallel.h"
+#include "executor/nodeGatherMerge.h"
+#include "executor/nodeSubplan.h"
+#include "executor/tqueue.h"
+#include "lib/binaryheap.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+#include "utils/rel.h"
+
+/*
+ * Tuple array for each worker
+ */
+typedef struct GMReaderTupleBuffer
+{
+ HeapTuple *tuple;
+ int readCounter;
+ int nTuples;
+ bool done;
+} GMReaderTupleBuffer;
+
+/*
+ * Tuple array size. Performance testing showed that the benefit of an
+ * array size greater than 10 is not worth the memory consumed by the
+ * tuple array.
+ */
+#define MAX_TUPLE_STORE 10
+
+static int32 heap_compare_slots(Datum a, Datum b, void *arg);
+static TupleTableSlot *gather_merge_getnext(GatherMergeState *gm_state);
+static HeapTuple gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait, bool *done);
+static void gather_merge_init(GatherMergeState *gm_state);
+static void ExecShutdownGatherMergeWorkers(GatherMergeState *node);
+static bool gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait);
+static void form_tuple_array(GatherMergeState *gm_state, int reader);
+
+/* ----------------------------------------------------------------
+ * ExecInitGatherMerge
+ * ----------------------------------------------------------------
+ */
+GatherMergeState *
+ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags)
+{
+ GatherMergeState *gm_state;
+ Plan *outerNode;
+ bool hasoid;
+ TupleDesc tupDesc;
+
+ /* A Gather Merge node doesn't have an innerPlan. */
+ Assert(innerPlan(node) == NULL);
+
+ /*
+ * create state structure
+ */
+ gm_state = makeNode(GatherMergeState);
+ gm_state->ps.plan = (Plan *) node;
+ gm_state->ps.state = estate;
+
+ /*
+ * Miscellaneous initialization
+ *
+ * create expression context for node
+ */
+ ExecAssignExprContext(estate, &gm_state->ps);
+
+ /*
+ * initialize child expressions
+ */
+ gm_state->ps.targetlist = (List *)
+ ExecInitExpr((Expr *) node->plan.targetlist,
+ (PlanState *) gm_state);
+ gm_state->ps.qual = (List *)
+ ExecInitExpr((Expr *) node->plan.qual,
+ (PlanState *) gm_state);
+
+ /*
+ * tuple table initialization
+ */
+ ExecInitResultTupleSlot(estate, &gm_state->ps);
+
+ /*
+ * now initialize outer plan
+ */
+ outerNode = outerPlan(node);
+ outerPlanState(gm_state) = ExecInitNode(outerNode, estate, eflags);
+
+ gm_state->ps.ps_TupFromTlist = false;
+
+ /*
+ * Initialize result tuple type and projection info.
+ */
+ ExecAssignResultTypeFromTL(&gm_state->ps);
+ ExecAssignProjectionInfo(&gm_state->ps, NULL);
+
+ gm_state->gm_initialized = false;
+
+ /*
+ * initialize sort-key information
+ */
+ if (node->numCols)
+ {
+ int i;
+
+ gm_state->gm_nkeys = node->numCols;
+ gm_state->gm_sortkeys = palloc0(sizeof(SortSupportData) * node->numCols);
+ for (i = 0; i < node->numCols; i++)
+ {
+ SortSupport sortKey = gm_state->gm_sortkeys + i;
+
+ sortKey->ssup_cxt = CurrentMemoryContext;
+ sortKey->ssup_collation = node->collations[i];
+ sortKey->ssup_nulls_first = node->nullsFirst[i];
+ sortKey->ssup_attno = node->sortColIdx[i];
+
+ /*
+ * We don't perform abbreviated key conversion here, for the same
+ * reasons that it isn't used in MergeAppend
+ */
+ sortKey->abbreviate = false;
+
+ PrepareSortSupportFromOrderingOp(node->sortOperators[i], sortKey);
+ }
+ }
+
+ /*
+ * store the tuple descriptor into gather merge state, so we can use it
+ * later while initializing the gather merge slots.
+ */
+ if (!ExecContextForcesOids(&gm_state->ps, &hasoid))
+ hasoid = false;
+ tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid);
+ gm_state->tupDesc = tupDesc;
+
+ return gm_state;
+}
+
+/* ----------------------------------------------------------------
+ * ExecGatherMerge(node)
+ *
+ * Scans the relation via multiple workers and returns
+ * the next qualifying tuple.
+ * ----------------------------------------------------------------
+ */
+TupleTableSlot *
+ExecGatherMerge(GatherMergeState *node)
+{
+ TupleTableSlot *slot;
+ TupleTableSlot *resultSlot;
+ ExprDoneCond isDone;
+ ExprContext *econtext;
+ int i;
+
+ /*
+ * Initialize the parallel context and workers on first execution. We do
+ * this on first execution rather than during node initialization because
+ * it needs to allocate a large dynamic shared memory segment, which is
+ * better done only if it is really needed.
+ */
+ if (!node->initialized)
+ {
+ EState *estate = node->ps.state;
+ GatherMerge *gm = (GatherMerge *) node->ps.plan;
+
+ /*
+ * Sometimes we might have to run without parallelism; but if parallel
+ * mode is active then we can try to fire up some workers.
+ */
+ if (gm->num_workers > 0 && IsInParallelMode())
+ {
+ ParallelContext *pcxt;
+
+ /* Initialize the workers required to execute the Gather Merge node. */
+ if (!node->pei)
+ node->pei = ExecInitParallelPlan(node->ps.lefttree,
+ estate,
+ gm->num_workers);
+
+ /*
+ * Register backend workers. We might not get as many as we
+ * requested, or indeed any at all.
+ */
+ pcxt = node->pei->pcxt;
+ LaunchParallelWorkers(pcxt);
+ node->nworkers_launched = pcxt->nworkers_launched;
+
+ /* Set up tuple queue readers to read the results. */
+ if (pcxt->nworkers_launched > 0)
+ {
+ node->nreaders = 0;
+ node->reader =
+ palloc(pcxt->nworkers_launched * sizeof(TupleQueueReader *));
+
+ Assert(gm->numCols);
+
+ for (i = 0; i < pcxt->nworkers_launched; ++i)
+ {
+ shm_mq_set_handle(node->pei->tqueue[i],
+ pcxt->worker[i].bgwhandle);
+ node->reader[node->nreaders++] =
+ CreateTupleQueueReader(node->pei->tqueue[i],
+ node->tupDesc);
+ }
+ }
+ else
+ {
+ /* No workers? Then never mind. */
+ ExecShutdownGatherMergeWorkers(node);
+ }
+ }
+
+ /* always allow the leader to participate in the gather merge */
+ node->need_to_scan_locally = true;
+ node->initialized = true;
+ }
+
+ /*
+ * Check to see if we're still projecting out tuples from a previous scan
+ * tuple (because there is a function-returning-set in the projection
+ * expressions). If so, try to project another one.
+ */
+ if (node->ps.ps_TupFromTlist)
+ {
+ resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+ if (isDone == ExprMultipleResult)
+ return resultSlot;
+ /* Done with that source tuple... */
+ node->ps.ps_TupFromTlist = false;
+ }
+
+ /*
+ * Reset per-tuple memory context to free any expression evaluation
+ * storage allocated in the previous tuple cycle. Note we can't do this
+ * until we're done projecting.
+ */
+ econtext = node->ps.ps_ExprContext;
+ ResetExprContext(econtext);
+
+ /* Get and return the next tuple, projecting if necessary. */
+ for (;;)
+ {
+ /*
+ * Get next tuple, either from one of our workers, or by running the
+ * plan ourselves.
+ */
+ slot = gather_merge_getnext(node);
+ if (TupIsNull(slot))
+ return NULL;
+
+ /*
+ * form the result tuple using ExecProject(), and return it --- unless
+ * the projection produces an empty set, in which case we must loop
+ * back around for another tuple
+ */
+ econtext->ecxt_outertuple = slot;
+ resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+
+ if (isDone != ExprEndResult)
+ {
+ node->ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
+ return resultSlot;
+ }
+ }
+
+ return slot;
+}
+
+/* ----------------------------------------------------------------
+ * ExecEndGatherMerge
+ *
+ * frees any storage allocated through C routines.
+ * ----------------------------------------------------------------
+ */
+void
+ExecEndGatherMerge(GatherMergeState *node)
+{
+ ExecShutdownGatherMerge(node);
+ ExecFreeExprContext(&node->ps);
+ ExecClearTuple(node->ps.ps_ResultTupleSlot);
+ ExecEndNode(outerPlanState(node));
+}
+
+/* ----------------------------------------------------------------
+ * ExecShutdownGatherMerge
+ *
+ * Destroy the setup for parallel workers including parallel context.
+ * Collect all the stats after workers are stopped, else some work
+ * done by the workers won't be accounted for.
+ * ----------------------------------------------------------------
+ */
+void
+ExecShutdownGatherMerge(GatherMergeState *node)
+{
+ ExecShutdownGatherMergeWorkers(node);
+
+ /* Now destroy the parallel context. */
+ if (node->pei != NULL)
+ {
+ ExecParallelCleanup(node->pei);
+ node->pei = NULL;
+ }
+}
+
+/* ----------------------------------------------------------------
+ * ExecReScanGatherMerge
+ *
+ * Re-initialize the workers and rescans a relation via them.
+ * ----------------------------------------------------------------
+ */
+void
+ExecReScanGatherMerge(GatherMergeState *node)
+{
+ /*
+ * Re-initialize the parallel workers to perform a rescan of the relation.
+ * We want to shut down all the workers gracefully so that they can
+ * propagate any error or other information to the master backend before
+ * dying. The parallel context will be reused for the rescan.
+ */
+ ExecShutdownGatherMergeWorkers(node);
+
+ node->initialized = false;
+
+ if (node->pei)
+ ExecParallelReinitialize(node->pei);
+
+ ExecReScan(node->ps.lefttree);
+}
+
+/* ----------------------------------------------------------------
+ * ExecShutdownGatherMergeWorkers
+ *
+ * Destroy the parallel workers. Collect all the stats after
+ * workers are stopped, else some work done by the workers won't be
+ * accounted for.
+ * ----------------------------------------------------------------
+ */
+static void
+ExecShutdownGatherMergeWorkers(GatherMergeState *node)
+{
+ /* Shut down tuple queue readers before shutting down workers. */
+ if (node->reader != NULL)
+ {
+ int i;
+
+ for (i = 0; i < node->nreaders; ++i)
+ if (node->reader[i])
+ DestroyTupleQueueReader(node->reader[i]);
+
+ pfree(node->reader);
+ node->reader = NULL;
+ }
+
+ /* Now shut down the workers. */
+ if (node->pei != NULL)
+ ExecParallelFinish(node->pei);
+}
+
+/*
+ * Initialize the Gather merge tuple read.
+ *
+ * Pull at least a single tuple from each worker + leader and set up the heap.
+ */
+static void
+gather_merge_init(GatherMergeState *gm_state)
+{
+ int nreaders = gm_state->nreaders;
+ bool initialize = true;
+ int i;
+
+ /*
+ * Allocate gm_slots: one slot per worker, plus one more for the leader.
+ * The last slot is always the leader's. The leader reads tuples by
+ * calling ExecProcNode(), which returns a TupleTableSlot that is then
+ * assigned to its gm_slot directly, so just initialize the leader's
+ * gm_slot to NULL. For the worker slots, the code below calls
+ * ExecInitExtraTupleSlot() to perform the initialization.
+ */
+ gm_state->gm_slots =
+ palloc((gm_state->nreaders + 1) * sizeof(TupleTableSlot *));
+ gm_state->gm_slots[gm_state->nreaders] = NULL;
+
+ /* Initialize the tuple slot and tuple array for each worker */
+ gm_state->gm_tuple_buffers =
+ (GMReaderTupleBuffer *) palloc0(sizeof(GMReaderTupleBuffer) * (gm_state->nreaders + 1));
+ for (i = 0; i < gm_state->nreaders; i++)
+ {
+ /* Allocate the tuple array with MAX_TUPLE_STORE size */
+ gm_state->gm_tuple_buffers[i].tuple =
+ (HeapTuple *) palloc0(sizeof(HeapTuple) * MAX_TUPLE_STORE);
+
+ /* Initialize slot for worker */
+ gm_state->gm_slots[i] = ExecInitExtraTupleSlot(gm_state->ps.state);
+ ExecSetSlotDescriptor(gm_state->gm_slots[i],
+ gm_state->tupDesc);
+ }
+
+ /* Allocate the resources for the sort */
+ gm_state->gm_heap = binaryheap_allocate(gm_state->nreaders + 1, heap_compare_slots, gm_state);
+
+ /*
+ * First, try to read a tuple from each participant (workers plus the
+ * leader) in nowait mode, so that every reader gets initialized. After
+ * this pass, if any active worker has still not produced a tuple,
+ * re-read from it, this time in wait mode. For workers that produced a
+ * tuple in the earlier pass and are still active, just try to fill the
+ * tuple array if more tuples are available.
+ */
+reread:
+ for (i = 0; i < nreaders + 1; i++)
+ {
+ if (!gm_state->gm_tuple_buffers[i].done &&
+ (TupIsNull(gm_state->gm_slots[i]) ||
+ gm_state->gm_slots[i]->tts_isempty))
+ {
+ if (gather_merge_readnext(gm_state, i, initialize))
+ {
+ binaryheap_add_unordered(gm_state->gm_heap,
+ Int32GetDatum(i));
+ }
+ }
+ else
+ form_tuple_array(gm_state, i);
+ }
+ initialize = false;
+
+ for (i = 0; i < nreaders; i++)
+ if (!gm_state->gm_tuple_buffers[i].done &&
+ (TupIsNull(gm_state->gm_slots[i]) ||
+ gm_state->gm_slots[i]->tts_isempty))
+ goto reread;
+
+ binaryheap_build(gm_state->gm_heap);
+ gm_state->gm_initialized = true;
+}
+
+/*
+ * Clear out the tuple table slot for each reader,
+ * and return one of the cleared slots.
+ */
+static TupleTableSlot *
+gather_merge_clear_slots(GatherMergeState *gm_state)
+{
+ int i;
+
+ for (i = 0; i < gm_state->nreaders; i++)
+ {
+ pfree(gm_state->gm_tuple_buffers[i].tuple);
+ gm_state->gm_slots[i] = ExecClearTuple(gm_state->gm_slots[i]);
+ }
+
+ /* Free tuple array as we don't need it any more */
+ pfree(gm_state->gm_tuple_buffers);
+ /* Free the binaryheap, which was created for sort */
+ binaryheap_free(gm_state->gm_heap);
+
+ /* return any clear slot */
+ return gm_state->gm_slots[0];
+}
+
+/*
+ * Read the next tuple for gather merge.
+ *
+ * Fetch the sorted tuple out of the heap.
+ */
+static TupleTableSlot *
+gather_merge_getnext(GatherMergeState *gm_state)
+{
+ int i;
+
+ /*
+ * First time through: pull the first tuple from each participant, and set
+ * up the heap.
+ */
+ if (gm_state->gm_initialized == false)
+ gather_merge_init(gm_state);
+ else
+ {
+ /*
+ * Otherwise, pull the next tuple from whichever participant we
+ * returned from last time, and reinsert the index into the heap,
+ * because it might now compare differently against the existing
+ * elements of the heap.
+ */
+ i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+
+ if (gather_merge_readnext(gm_state, i, false))
+ binaryheap_replace_first(gm_state->gm_heap, Int32GetDatum(i));
+ else
+ (void) binaryheap_remove_first(gm_state->gm_heap);
+ }
+
+ if (binaryheap_empty(gm_state->gm_heap))
+ {
+ /* All the queues are exhausted, and so is the heap */
+ return gather_merge_clear_slots(gm_state);
+ }
+ else
+ {
+ i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+ return gm_state->gm_slots[i];
+ }
+
+ return gather_merge_clear_slots(gm_state);
+}
+
+/*
+ * Read tuples for the given reader in nowait mode and fill its tuple array.
+ */
+static void
+form_tuple_array(GatherMergeState *gm_state, int reader)
+{
+ GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+ int i;
+
+ /* The last slot is for the leader; we don't build a tuple array for it */
+ if (reader == gm_state->nreaders)
+ return;
+
+ /*
+ * We're here because all the tuples in the tuple array have been read,
+ * so reset the counters to zero.
+ */
+ if (tuple_buffer->nTuples == tuple_buffer->readCounter)
+ tuple_buffer->nTuples = tuple_buffer->readCounter = 0;
+
+ /* Tuple array is already full? */
+ if (tuple_buffer->nTuples == MAX_TUPLE_STORE)
+ return;
+
+ for (i = tuple_buffer->nTuples; i < MAX_TUPLE_STORE; i++)
+ {
+ tuple_buffer->tuple[i] = heap_copytuple(gm_readnext_tuple(gm_state,
+ reader,
+ false,
+ &tuple_buffer->done));
+ if (!HeapTupleIsValid(tuple_buffer->tuple[i]))
+ break;
+ tuple_buffer->nTuples++;
+ }
+}
+
+/*
+ * Attempt to read a tuple for the given reader and store it in the
+ * reader's tuple slot.
+ *
+ * If the reader's tuple array contains any tuples, just return the next
+ * one from the array. Otherwise read a tuple from the queue and also
+ * attempt to refill the tuple array.
+ *
+ * Gather merge must refill the slot from which it returned the previous
+ * tuple, so the tuple normally has to be read in wait mode. The exception
+ * is the initialization phase, where tuples are first read in no-wait
+ * mode so that all the readers get initialized; see gather_merge_init()
+ * for details.
+ *
+ * Returns true if a tuple was found for the reader, otherwise false.
+ */
+static bool
+gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait)
+{
+ HeapTuple tup = NULL;
+
+ /* Are we reading from the leader? If so, run the plan locally. */
+ if (gm_state->nreaders == reader)
+ {
+ if (gm_state->need_to_scan_locally)
+ {
+ PlanState *outerPlan = outerPlanState(gm_state);
+ TupleTableSlot *outerTupleSlot;
+
+ outerTupleSlot = ExecProcNode(outerPlan);
+
+ if (!TupIsNull(outerTupleSlot))
+ {
+ gm_state->gm_slots[reader] = outerTupleSlot;
+ return true;
+ }
+ gm_state->gm_tuple_buffers[reader].done = true;
+ gm_state->need_to_scan_locally = false;
+ }
+ return false;
+ }
+ /* Does tuple array have any available tuples? */
+ else if (gm_state->gm_tuple_buffers[reader].nTuples >
+ gm_state->gm_tuple_buffers[reader].readCounter)
+ {
+ GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+
+ tup = tuple_buffer->tuple[tuple_buffer->readCounter++];
+ }
+ /* reader exhausted? */
+ else if (gm_state->gm_tuple_buffers[reader].done)
+ {
+ DestroyTupleQueueReader(gm_state->reader[reader]);
+ gm_state->reader[reader] = NULL;
+ return false;
+ }
+ else
+ {
+ GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+
+ tup = heap_copytuple(gm_readnext_tuple(gm_state,
+ reader,
+ nowait,
+ &tuple_buffer->done));
+
+ /*
+ * try to read more tuples in nowait mode and store them in the tuple
+ * array.
+ */
+ if (HeapTupleIsValid(tup))
+ form_tuple_array(gm_state, reader);
+ else
+ return false;
+ }
+
+ Assert(HeapTupleIsValid(tup));
+
+ /* Build the TupleTableSlot for the given tuple */
+ ExecStoreTuple(tup, /* tuple to store */
+ gm_state->gm_slots[reader], /* slot in which to store the
+ * tuple */
+ InvalidBuffer, /* buffer associated with this tuple */
+ true); /* pfree this pointer if not from heap */
+
+ return true;
+}
+
+/*
+ * Attempt to read a tuple from the given reader.
+ */
+static HeapTuple
+gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait, bool *done)
+{
+ TupleQueueReader *reader;
+ HeapTuple tup = NULL;
+ MemoryContext oldContext;
+ MemoryContext tupleContext = gm_state->ps.ps_ExprContext->ecxt_per_tuple_memory;
+
+ if (done != NULL)
+ *done = false;
+
+ /* Check for async events, particularly messages from workers. */
+ CHECK_FOR_INTERRUPTS();
+
+ /* Attempt to read a tuple. */
+ reader = gm_state->reader[nreader];
+ /* Run TupleQueueReaders in per-tuple context */
+ oldContext = MemoryContextSwitchTo(tupleContext);
+ tup = TupleQueueReaderNext(reader, nowait, done);
+ MemoryContextSwitchTo(oldContext);
+
+ return tup;
+}
+
+/*
+ * We have one slot for each item in the heap array. We use SlotNumber
+ * to store slot indexes. This doesn't actually provide any formal
+ * type-safety, but it makes the code more self-documenting.
+ */
+typedef int32 SlotNumber;
+
+/*
+ * Compare the tuples in the two given slots.
+ */
+static int32
+heap_compare_slots(Datum a, Datum b, void *arg)
+{
+ GatherMergeState *node = (GatherMergeState *) arg;
+ SlotNumber slot1 = DatumGetInt32(a);
+ SlotNumber slot2 = DatumGetInt32(b);
+
+ TupleTableSlot *s1 = node->gm_slots[slot1];
+ TupleTableSlot *s2 = node->gm_slots[slot2];
+ int nkey;
+
+ Assert(!TupIsNull(s1));
+ Assert(!TupIsNull(s2));
+
+ for (nkey = 0; nkey < node->gm_nkeys; nkey++)
+ {
+ SortSupport sortKey = node->gm_sortkeys + nkey;
+ AttrNumber attno = sortKey->ssup_attno;
+ Datum datum1,
+ datum2;
+ bool isNull1,
+ isNull2;
+ int compare;
+
+ datum1 = slot_getattr(s1, attno, &isNull1);
+ datum2 = slot_getattr(s2, attno, &isNull2);
+
+ compare = ApplySortComparator(datum1, isNull1,
+ datum2, isNull2,
+ sortKey);
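+
+ /*
+ * Invert the comparison result: lib/binaryheap keeps the element that
+ * the comparator ranks highest at the top of the heap, so negating
+ * here turns it into a min-heap and the smallest tuple surfaces first.
+ */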
+ if (compare != 0)
+ return -compare;
+ }
+ return 0;
+}
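The executor logic above boils down to a classic k-way merge driven by a
binary heap holding one "current" tuple per participant. Here is a minimal,
self-contained sketch of just that technique in plain C; the integer arrays
are made-up stand-ins for the per-worker tuple queues, so this is an
illustration of the algorithm, not patch code:

#include <stdio.h>

#define NSTREAMS 3

typedef struct
{
    const int  *vals;   /* sorted input values */
    int         len;    /* number of values */
    int         pos;    /* next value to read */
} Stream;

static Stream streams[NSTREAMS];
static int  heap[NSTREAMS];     /* min-heap of stream indexes */
static int  heap_size;

/* current head value of stream s */
static int
head(int s)
{
    return streams[s].vals[streams[s].pos];
}

/* restore the heap property downward from position i */
static void
sift_down(int i)
{
    for (;;)
    {
        int smallest = i;
        int l = 2 * i + 1;
        int r = 2 * i + 2;
        int tmp;

        if (l < heap_size && head(heap[l]) < head(heap[smallest]))
            smallest = l;
        if (r < heap_size && head(heap[r]) < head(heap[smallest]))
            smallest = r;
        if (smallest == i)
            return;
        tmp = heap[i];
        heap[i] = heap[smallest];
        heap[smallest] = tmp;
        i = smallest;
    }
}

int
main(void)
{
    static const int a[] = {1, 4, 7};
    static const int b[] = {2, 5, 8};
    static const int c[] = {3, 6, 9};
    int s, i;

    streams[0] = (Stream) {a, 3, 0};
    streams[1] = (Stream) {b, 3, 0};
    streams[2] = (Stream) {c, 3, 0};

    /* build the heap, roughly N*log2(N) comparisons, as in gather_merge_init() */
    for (s = 0; s < NSTREAMS; s++)
        heap[heap_size++] = s;
    for (i = heap_size / 2 - 1; i >= 0; i--)
        sift_down(i);

    /* pop smallest, refill from the same stream, re-heapify: log2(N) each */
    while (heap_size > 0)
    {
        s = heap[0];
        printf("%d ", head(s));
        if (++streams[s].pos == streams[s].len)
            heap[0] = heap[--heap_size];    /* stream exhausted: drop it */
        sift_down(0);
    }
    putchar('\n');
    return 0;
}

Popping the top and sifting down costs about log2(N) comparisons per output
value, which is exactly the per-tuple charge that cost_gather_merge applies
further down in the patch.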
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 04e49b7..2f52833 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -341,6 +341,31 @@ _copyGather(const Gather *from)
return newnode;
}
+/*
+ * _copyGatherMerge
+ */
+static GatherMerge *
+_copyGatherMerge(const GatherMerge *from)
+{
+ GatherMerge *newnode = makeNode(GatherMerge);
+
+ /*
+ * copy node superclass fields
+ */
+ CopyPlanFields((const Plan *) from, (Plan *) newnode);
+
+ /*
+ * copy remainder of node
+ */
+ COPY_SCALAR_FIELD(num_workers);
+ COPY_SCALAR_FIELD(numCols);
+ COPY_POINTER_FIELD(sortColIdx, from->numCols * sizeof(AttrNumber));
+ COPY_POINTER_FIELD(sortOperators, from->numCols * sizeof(Oid));
+ COPY_POINTER_FIELD(collations, from->numCols * sizeof(Oid));
+ COPY_POINTER_FIELD(nullsFirst, from->numCols * sizeof(bool));
+
+ return newnode;
+}
/*
* CopyScanFields
@@ -4356,6 +4381,9 @@ copyObject(const void *from)
case T_Gather:
retval = _copyGather(from);
break;
+ case T_GatherMerge:
+ retval = _copyGatherMerge(from);
+ break;
case T_SeqScan:
retval = _copySeqScan(from);
break;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 748b687..ac36e48 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -447,6 +447,35 @@ _outGather(StringInfo str, const Gather *node)
}
static void
+_outGatherMerge(StringInfo str, const GatherMerge *node)
+{
+ int i;
+
+ WRITE_NODE_TYPE("GATHERMERGE");
+
+ _outPlanInfo(str, (const Plan *) node);
+
+ WRITE_INT_FIELD(num_workers);
+ WRITE_INT_FIELD(numCols);
+
+ appendStringInfoString(str, " :sortColIdx");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %d", node->sortColIdx[i]);
+
+ appendStringInfoString(str, " :sortOperators");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %u", node->sortOperators[i]);
+
+ appendStringInfoString(str, " :collations");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %u", node->collations[i]);
+
+ appendStringInfoString(str, " :nullsFirst");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %s", booltostr(node->nullsFirst[i]));
+}
+
+static void
_outScan(StringInfo str, const Scan *node)
{
WRITE_NODE_TYPE("SCAN");
@@ -1964,6 +1993,17 @@ _outLimitPath(StringInfo str, const LimitPath *node)
}
static void
+_outGatherMergePath(StringInfo str, const GatherMergePath *node)
+{
+ WRITE_NODE_TYPE("GATHERMERGEPATH");
+
+ _outPathInfo(str, (const Path *) node);
+
+ WRITE_NODE_FIELD(subpath);
+ WRITE_INT_FIELD(num_workers);
+}
+
+static void
_outNestPath(StringInfo str, const NestPath *node)
{
WRITE_NODE_TYPE("NESTPATH");
@@ -3332,6 +3372,9 @@ outNode(StringInfo str, const void *obj)
case T_Gather:
_outGather(str, obj);
break;
+ case T_GatherMerge:
+ _outGatherMerge(str, obj);
+ break;
case T_Scan:
_outScan(str, obj);
break;
@@ -3659,6 +3702,9 @@ outNode(StringInfo str, const void *obj)
case T_LimitPath:
_outLimitPath(str, obj);
break;
+ case T_GatherMergePath:
+ _outGatherMergePath(str, obj);
+ break;
case T_NestPath:
_outNestPath(str, obj);
break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 917e6c8..77a452e 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -2075,6 +2075,26 @@ _readGather(void)
}
/*
+ * _readGatherMerge
+ */
+static GatherMerge *
+_readGatherMerge(void)
+{
+ READ_LOCALS(GatherMerge);
+
+ ReadCommonPlan(&local_node->plan);
+
+ READ_INT_FIELD(num_workers);
+ READ_INT_FIELD(numCols);
+ READ_ATTRNUMBER_ARRAY(sortColIdx, local_node->numCols);
+ READ_OID_ARRAY(sortOperators, local_node->numCols);
+ READ_OID_ARRAY(collations, local_node->numCols);
+ READ_BOOL_ARRAY(nullsFirst, local_node->numCols);
+
+ READ_DONE();
+}
+
+/*
* _readHash
*/
static Hash *
@@ -2477,6 +2497,8 @@ parseNodeString(void)
return_value = _readUnique();
else if (MATCH("GATHER", 6))
return_value = _readGather();
+ else if (MATCH("GATHERMERGE", 11))
+ return_value = _readGatherMerge();
else if (MATCH("HASH", 4))
return_value = _readHash();
else if (MATCH("SETOP", 5))
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index e42895d..9c1e578 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -126,6 +126,7 @@ bool enable_nestloop = true;
bool enable_material = true;
bool enable_mergejoin = true;
bool enable_hashjoin = true;
+bool enable_gathermerge = true;
typedef struct
{
@@ -391,6 +392,74 @@ cost_gather(GatherPath *path, PlannerInfo *root,
}
/*
+ * cost_gather_merge
+ * Determines and returns the cost of gather merge path.
+ *
+ * GatherMerge merges several pre-sorted input streams, using a heap that at
+ * any given instant holds the next tuple from each stream. If there are N
+ * streams, we need about N*log2(N) tuple comparisons to construct the heap at
+ * startup, and then for each output tuple, about log2(N) comparisons to delete
+ * the top heap entry and another log2(N) comparisons to insert its successor
+ * from the same stream.
+ *
+ * The heap is never spilled to disk, since we assume N is not very large. So
+ * this is much simpler than cost_sort.
+ */
+void
+cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+ RelOptInfo *rel, ParamPathInfo *param_info,
+ Cost input_startup_cost, Cost input_total_cost,
+ double *rows)
+{
+ Cost startup_cost = 0;
+ Cost run_cost = 0;
+ Cost comparison_cost;
+ double N;
+ double logN;
+
+ /* Mark the path with the correct row estimate */
+ if (rows)
+ path->path.rows = *rows;
+ else if (param_info)
+ path->path.rows = param_info->ppi_rows;
+ else
+ path->path.rows = rel->rows;
+
+ if (!enable_gathermerge)
+ startup_cost += disable_cost;
+
+ /*
+ * Count the leader too, since it always participates in the gather merge
+ * scan. Also clamp N to at least 2 to avoid log(0)...
+ */
+ N = (path->num_workers < 2) ? 2.0 : (double) path->num_workers + 1;
+ logN = LOG2(N);
+
+ /* Assumed cost per tuple comparison */
+ comparison_cost = 2.0 * cpu_operator_cost;
+
+ /* Heap creation cost */
+ startup_cost += comparison_cost * N * logN;
+
+ /* Per-tuple heap maintenance cost */
+ run_cost += path->path.rows * comparison_cost * 2.0 * logN;
+
+ /* small cost for heap management, like cost_merge_append */
+ run_cost += cpu_operator_cost * path->path.rows;
+
+ /*
+ * Parallel setup and communication cost. For Gather Merge, require tuple
+ * to be read in wait mode from each worker, so considering some extra
+ * cost for the same.
+ */
+ startup_cost += parallel_setup_cost;
+ run_cost += parallel_tuple_cost * path->path.rows;
+
+ path->path.startup_cost = startup_cost + input_startup_cost;
+ path->path.total_cost = (startup_cost + run_cost + input_total_cost);
+}
+
+/*
* cost_index
* Determines and returns the cost of scanning a relation using an index.
*
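To get a feel for the numbers that cost model produces, here is a small
back-of-the-envelope sketch using the same formulas as cost_gather_merge()
above. The cost GUC values are PostgreSQL's stock defaults; the worker count
and row estimate are arbitrary assumptions picked for illustration:

#include <math.h>
#include <stdio.h>

int
main(void)
{
    double  cpu_operator_cost = 0.0025;     /* default GUC value */
    double  parallel_setup_cost = 1000.0;   /* default GUC value */
    double  parallel_tuple_cost = 0.1;      /* default GUC value */
    int     num_workers = 4;                /* assumed */
    double  rows = 200000.0;                /* assumed row estimate */

    /* leader also participates; clamp to avoid log(0), as in the patch */
    double  N = (num_workers < 2) ? 2.0 : (double) num_workers + 1;
    double  logN = log2(N);
    double  comparison_cost = 2.0 * cpu_operator_cost;

    /* heap creation plus parallel setup */
    double  startup = comparison_cost * N * logN + parallel_setup_cost;

    /* per-tuple heap maintenance, heap management, and IPC */
    double  run = rows * comparison_cost * 2.0 * logN
                + cpu_operator_cost * rows
                + parallel_tuple_cost * rows;

    printf("startup=%.2f run=%.2f\n", startup, run);
    return 0;
}

With these inputs the sketch prints startup=1000.06 run=25143.86. The IPC
term (parallel_tuple_cost * rows = 20000) dominates the run cost, so under
this model Gather Merge mostly pays for moving tuples between processes
rather than for heap maintenance.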
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index ad49674..5fdc1bd 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -270,6 +270,10 @@ static ModifyTable *make_modifytable(PlannerInfo *root,
List *resultRelations, List *subplans,
List *withCheckOptionLists, List *returningLists,
List *rowMarks, OnConflictExpr *onconflict, int epqParam);
+static GatherMerge *create_gather_merge_plan(PlannerInfo *root,
+ GatherMergePath *best_path);
+static GatherMerge *make_gather_merge(List *qptlist, List *qpqual,
+ int nworkers, Plan *subplan);
/*
@@ -463,6 +467,10 @@ create_plan_recurse(PlannerInfo *root, Path *best_path, int flags)
(LimitPath *) best_path,
flags);
break;
+ case T_GatherMerge:
+ plan = (Plan *) create_gather_merge_plan(root,
+ (GatherMergePath *) best_path);
+ break;
default:
elog(ERROR, "unrecognized node type: %d",
(int) best_path->pathtype);
@@ -2246,6 +2254,89 @@ create_limit_plan(PlannerInfo *root, LimitPath *best_path, int flags)
return plan;
}
+/*
+ * create_gather_merge_plan
+ *
+ * Create a Gather merge plan for 'best_path' and (recursively)
+ * plans for its subpaths.
+ */
+static GatherMerge *
+create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
+{
+ GatherMerge *gm_plan;
+ Plan *subplan;
+ List *pathkeys = best_path->path.pathkeys;
+ int numsortkeys;
+ AttrNumber *sortColIdx;
+ Oid *sortOperators;
+ Oid *collations;
+ bool *nullsFirst;
+
+ subplan = create_plan_recurse(root, best_path->subpath, CP_EXACT_TLIST);
+
+ gm_plan = make_gather_merge(subplan->targetlist,
+ NIL,
+ best_path->num_workers,
+ subplan);
+
+ copy_generic_path_info(&gm_plan->plan, &best_path->path);
+
+ if (pathkeys)
+ {
+ /* Compute sort column info, and adjust GatherMerge tlist as needed */
+ (void) prepare_sort_from_pathkeys(&gm_plan->plan, pathkeys,
+ best_path->path.parent->relids,
+ NULL,
+ true,
+ &gm_plan->numCols,
+ &gm_plan->sortColIdx,
+ &gm_plan->sortOperators,
+ &gm_plan->collations,
+ &gm_plan->nullsFirst);
+
+
+ /* Compute sort column info, and adjust subplan's tlist as needed */
+ subplan = prepare_sort_from_pathkeys(subplan, pathkeys,
+ best_path->subpath->parent->relids,
+ gm_plan->sortColIdx,
+ false,
+ &numsortkeys,
+ &sortColIdx,
+ &sortOperators,
+ &collations,
+ &nullsFirst);
+
+ /*
+ * Check that we got the same sort key information. We just Assert
+ * that the sortops match, since those depend only on the pathkeys;
+ * but it seems like a good idea to check the sort column numbers
+ * explicitly, to ensure the tlists really do match up.
+ */
+ Assert(numsortkeys == gm_plan->numCols);
+ if (memcmp(sortColIdx, gm_plan->sortColIdx,
+ numsortkeys * sizeof(AttrNumber)) != 0)
+ elog(ERROR, "GatherMerge child's targetlist doesn't match GatherMerge");
+ Assert(memcmp(sortOperators, gm_plan->sortOperators,
+ numsortkeys * sizeof(Oid)) == 0);
+ Assert(memcmp(collations, gm_plan->collations,
+ numsortkeys * sizeof(Oid)) == 0);
+ Assert(memcmp(nullsFirst, gm_plan->nullsFirst,
+ numsortkeys * sizeof(bool)) == 0);
+
+ /* Now, insert a Sort node if subplan isn't sufficiently ordered */
+ if (!pathkeys_contained_in(pathkeys, best_path->subpath->pathkeys))
+ subplan = (Plan *) make_sort(subplan, numsortkeys,
+ sortColIdx, sortOperators,
+ collations, nullsFirst);
+
+ gm_plan->plan.lefttree = subplan;
+ }
+
+ /* use parallel mode for parallel plans. */
+ root->glob->parallelModeNeeded = true;
+
+ return gm_plan;
+}
/*****************************************************************************
*
@@ -5909,6 +6000,25 @@ make_gather(List *qptlist,
return node;
}
+static GatherMerge *
+make_gather_merge(List *qptlist,
+ List *qpqual,
+ int nworkers,
+ Plan *subplan)
+{
+ GatherMerge *node = makeNode(GatherMerge);
+ Plan *plan = &node->plan;
+
+ /* cost should be inserted by caller */
+ plan->targetlist = qptlist;
+ plan->qual = qpqual;
+ plan->lefttree = subplan;
+ plan->righttree = NULL;
+ node->num_workers = nworkers;
+
+ return node;
+}
+
/*
* distinctList is a list of SortGroupClauses, identifying the targetlist
* items that should be considered by the SetOp filter. The input path must
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index a8847de..3c5ca3b 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -3725,14 +3725,61 @@ create_grouping_paths(PlannerInfo *root,
/*
* Now generate a complete GroupAgg Path atop of the cheapest partial
- * path. We need only bother with the cheapest path here, as the
- * output of Gather is never sorted.
+ * path. We generate a Gather path based on the cheapest partial path,
+ * and a GatherMerge path for each partial path that is properly sorted.
*/
if (grouped_rel->partial_pathlist)
{
Path *path = (Path *) linitial(grouped_rel->partial_pathlist);
double total_groups = path->rows * path->parallel_workers;
+ /*
+ * GatherMerge output is always sorted, so if there is a GROUP BY clause,
+ * try to generate a GatherMerge path for each suitably-sorted partial path.
+ */
+ if (parse->groupClause && root->group_pathkeys)
+ {
+ foreach(lc, grouped_rel->partial_pathlist)
+ {
+ Path *gmpath = (Path *) lfirst(lc);
+ double total_groups = gmpath->rows * gmpath->parallel_workers;
+
+ if (!pathkeys_contained_in(root->group_pathkeys, gmpath->pathkeys))
+ continue;
+
+ /* create gather merge path */
+ gmpath = (Path *) create_gather_merge_path(root,
+ grouped_rel,
+ gmpath,
+ NULL,
+ root->group_pathkeys,
+ NULL,
+ &total_groups);
+
+ if (parse->hasAggs)
+ add_path(grouped_rel, (Path *)
+ create_agg_path(root,
+ grouped_rel,
+ gmpath,
+ target,
+ parse->groupClause ? AGG_SORTED : AGG_PLAIN,
+ AGGSPLIT_FINAL_DESERIAL,
+ parse->groupClause,
+ (List *) parse->havingQual,
+ &agg_final_costs,
+ dNumGroups));
+ else
+ add_path(grouped_rel, (Path *)
+ create_group_path(root,
+ grouped_rel,
+ gmpath,
+ target,
+ parse->groupClause,
+ (List *) parse->havingQual,
+ dNumGroups));
+ }
+ }
+
path = (Path *) create_gather_path(root,
grouped_rel,
path,
@@ -3870,6 +3917,12 @@ create_grouping_paths(PlannerInfo *root,
/* Now choose the best path(s) */
set_cheapest(grouped_rel);
+ /*
+ * The partial paths generated for the grouped relation are of no further
+ * use, so just reset the list to NIL.
+ */
+ grouped_rel->partial_pathlist = NIL;
+
return grouped_rel;
}
@@ -4166,6 +4219,42 @@ create_distinct_paths(PlannerInfo *root,
}
}
+ /*
+ * Generate GatherMerge path for each partial path.
+ */
+ if (needed_pathkeys)
+ {
+ foreach(lc, input_rel->partial_pathlist)
+ {
+ Path *path = (Path *) lfirst(lc);
+ double total_groups = path->rows * path->parallel_workers;
+
+ if (!pathkeys_contained_in(needed_pathkeys, path->pathkeys))
+ {
+ path = (Path *) create_sort_path(root,
+ distinct_rel,
+ path,
+ needed_pathkeys,
+ -1.0);
+ }
+
+ /* create gather merge path */
+ path = (Path *) create_gather_merge_path(root,
+ distinct_rel,
+ path,
+ NULL,
+ needed_pathkeys,
+ NULL,
+ &total_groups);
+ add_path(distinct_rel, (Path *)
+ create_upper_unique_path(root,
+ distinct_rel,
+ path,
+ list_length(root->distinct_pathkeys),
+ numDistinctRows));
+ }
+ }
+
/* For explicit-sort case, always use the more rigorous clause */
if (list_length(root->distinct_pathkeys) <
list_length(root->sort_pathkeys))
@@ -4180,15 +4269,17 @@ create_distinct_paths(PlannerInfo *root,
path = cheapest_input_path;
if (!pathkeys_contained_in(needed_pathkeys, path->pathkeys))
- path = (Path *) create_sort_path(root, distinct_rel,
+ path = (Path *) create_sort_path(root,
+ distinct_rel,
path,
needed_pathkeys,
-1.0);
add_path(distinct_rel, (Path *)
- create_upper_unique_path(root, distinct_rel,
+ create_upper_unique_path(root,
+ distinct_rel,
path,
- list_length(root->distinct_pathkeys),
+ list_length(root->distinct_pathkeys),
numDistinctRows));
}
@@ -4310,6 +4401,45 @@ create_ordered_paths(PlannerInfo *root,
ordered_rel->useridiscurrent = input_rel->useridiscurrent;
ordered_rel->fdwroutine = input_rel->fdwroutine;
+ /* If sort_pathkeys are present, try to generate gather merge paths. */
+ if (root->sort_pathkeys)
+ {
+ foreach(lc, input_rel->partial_pathlist)
+ {
+ Path *path = (Path *) lfirst(lc);
+ bool is_sorted;
+ double total_groups = path->rows * path->parallel_workers;
+
+ is_sorted = pathkeys_contained_in(root->sort_pathkeys,
+ path->pathkeys);
+ if (!is_sorted)
+ {
+ /* An explicit sort here can take advantage of LIMIT */
+ path = (Path *) create_sort_path(root,
+ ordered_rel,
+ path,
+ root->sort_pathkeys,
+ limit_tuples);
+ }
+
+ /* create gather merge path */
+ path = (Path *) create_gather_merge_path(root,
+ ordered_rel,
+ path,
+ target,
+ root->sort_pathkeys,
+ NULL,
+ &total_groups);
+
+ /* Add projection step if needed */
+ if (path->pathtarget != target)
+ path = apply_projection_to_path(root, ordered_rel,
+ path, target);
+
+ add_path(ordered_rel, path);
+ }
+ }
+
foreach(lc, input_rel->pathlist)
{
Path *path = (Path *) lfirst(lc);
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index d91bc3b..e9d6279 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -605,6 +605,7 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
break;
case T_Gather:
+ case T_GatherMerge:
set_upper_references(root, plan, rtoffset);
break;
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 263ba45..760f519 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2682,6 +2682,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
case T_Sort:
case T_Unique:
case T_Gather:
+ case T_GatherMerge:
case T_SetOp:
case T_Group:
break;
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index 6d3ccfd..b4a49d8 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1630,6 +1630,66 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
}
/*
+ * create_gather_merge_path
+ *
+ * Creates a path corresponding to a gather merge scan, returning
+ * the pathnode.
+ */
+GatherMergePath *
+create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
+ PathTarget *target, List *pathkeys,
+ Relids required_outer, double *rows)
+{
+ GatherMergePath *pathnode = makeNode(GatherMergePath);
+ Cost input_startup_cost = 0;
+ Cost input_total_cost = 0;
+
+ Assert(subpath->parallel_safe);
+ Assert(pathkeys);
+
+ pathnode->path.pathtype = T_GatherMerge;
+ pathnode->path.parent = rel;
+ pathnode->path.param_info = get_baserel_parampathinfo(root, rel,
+ required_outer);
+ pathnode->path.parallel_aware = false;
+
+ pathnode->subpath = subpath;
+ pathnode->num_workers = subpath->parallel_workers;
+ pathnode->path.pathkeys = pathkeys;
+ pathnode->path.pathtarget = target ? target : rel->reltarget;
+ pathnode->path.rows += subpath->rows;
+
+ if (pathkeys_contained_in(pathkeys, subpath->pathkeys))
+ {
+ /* Subpath is adequately ordered, we won't need to sort it */
+ input_startup_cost += subpath->startup_cost;
+ input_total_cost += subpath->total_cost;
+ }
+ else
+ {
+ /* We'll need to insert a Sort node, so include cost for that */
+ Path sort_path; /* dummy for result of cost_sort */
+
+ cost_sort(&sort_path,
+ root,
+ pathkeys,
+ subpath->total_cost,
+ subpath->rows,
+ subpath->pathtarget->width,
+ 0.0,
+ work_mem,
+ -1);
+ input_startup_cost += sort_path.startup_cost;
+ input_total_cost += sort_path.total_cost;
+ }
+
+ cost_gather_merge(pathnode, root, rel, pathnode->path.param_info,
+ input_startup_cost, input_total_cost, rows);
+
+ return pathnode;
+}
+
+/*
* translate_sub_tlist - get subquery column numbers represented by tlist
*
* The given targetlist usually contains only Vars referencing the given relid.
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index da74f00..8937032 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -894,6 +894,15 @@ static struct config_bool ConfigureNamesBool[] =
true,
NULL, NULL, NULL
},
+ {
+ {"enable_gathermerge", PGC_USERSET, QUERY_TUNING_METHOD,
+ gettext_noop("Enables the planner's use of gather merge plans."),
+ NULL
+ },
+ &enable_gathermerge,
+ true,
+ NULL, NULL, NULL
+ },
{
{"geqo", PGC_USERSET, QUERY_TUNING_GEQO,
diff --git a/src/include/executor/nodeGatherMerge.h b/src/include/executor/nodeGatherMerge.h
new file mode 100644
index 0000000..58dcebf
--- /dev/null
+++ b/src/include/executor/nodeGatherMerge.h
@@ -0,0 +1,27 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.h
+ * prototypes for nodeGatherMerge.c
+ *
+ *
+ * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/executor/nodeGatherMerge.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef NODEGATHERMERGE_H
+#define NODEGATHERMERGE_H
+
+#include "nodes/execnodes.h"
+
+extern GatherMergeState *ExecInitGatherMerge(GatherMerge * node,
+ EState *estate,
+ int eflags);
+extern TupleTableSlot *ExecGatherMerge(GatherMergeState * node);
+extern void ExecEndGatherMerge(GatherMergeState * node);
+extern void ExecReScanGatherMerge(GatherMergeState * node);
+extern void ExecShutdownGatherMerge(GatherMergeState * node);
+
+#endif /* NODEGATHERMERGE_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index f6f73f3..0c12e27 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -1969,6 +1969,33 @@ typedef struct GatherState
} GatherState;
/* ----------------
+ * GatherMergeState information
+ *
+ * Gather merge nodes launch 1 or more parallel workers, run a
+ * subplan in those workers, and collect the results, merging them
+ * so as to preserve the subplan's sort order.
+ * ----------------
+ */
+struct GMReaderTuple;
+
+typedef struct GatherMergeState
+{
+ PlanState ps; /* its first field is NodeTag */
+ bool initialized;
+ struct ParallelExecutorInfo *pei;
+ int nreaders;
+ int nworkers_launched;
+ struct TupleQueueReader **reader;
+ TupleDesc tupDesc;
+ TupleTableSlot **gm_slots;
+ struct binaryheap *gm_heap; /* binary heap of slot indices */
+ bool gm_initialized; /* gather merge initialized? */
+ bool need_to_scan_locally;
+ int gm_nkeys;
+ SortSupport gm_sortkeys; /* array of length gm_nkeys */
+ struct GMReaderTupleBuffer *gm_tuple_buffers; /* tuple buffer per reader */
+} GatherMergeState;
+
+/* ----------------
* HashState information
* ----------------
*/
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index cb9307c..7edb114 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -75,6 +75,7 @@ typedef enum NodeTag
T_WindowAgg,
T_Unique,
T_Gather,
+ T_GatherMerge,
T_Hash,
T_SetOp,
T_LockRows,
@@ -123,6 +124,7 @@ typedef enum NodeTag
T_WindowAggState,
T_UniqueState,
T_GatherState,
+ T_GatherMergeState,
T_HashState,
T_SetOpState,
T_LockRowsState,
@@ -244,6 +246,7 @@ typedef enum NodeTag
T_MaterialPath,
T_UniquePath,
T_GatherPath,
+ T_GatherMergePath,
T_ProjectionPath,
T_SortPath,
T_GroupPath,
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index e2fbc7d..ec319bf 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -765,6 +765,22 @@ typedef struct Gather
bool invisible; /* suppress EXPLAIN display (for testing)? */
} Gather;
+/* ------------
+ * gather merge node
+ * ------------
+ */
+typedef struct GatherMerge
+{
+ Plan plan;
+ int num_workers;
+ /* remaining fields are just like the sort-key info in struct Sort */
+ int numCols; /* number of sort-key columns */
+ AttrNumber *sortColIdx; /* their indexes in the target list */
+ Oid *sortOperators; /* OIDs of operators to sort them by */
+ Oid *collations; /* OIDs of collations */
+ bool *nullsFirst; /* NULLS FIRST/LAST directions */
+} GatherMerge;
+
/* ----------------
* hash build node
*
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index 3a1255a..e9795f9 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -1194,6 +1194,19 @@ typedef struct GatherPath
} GatherPath;
/*
+ * GatherMergePath runs several copies of a plan in parallel and collects
+ * the results, merging them so as to preserve sort order. For gather
+ * merge, the parallel leader always executes the plan itself as well.
+ */
+typedef struct GatherMergePath
+{
+ Path path;
+ Path *subpath; /* path for each worker */
+ int num_workers; /* number of workers sought to help */
+} GatherMergePath;
+
+
+/*
* All join-type paths share these fields.
*/
diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h
index 2a4df2f..e986896 100644
--- a/src/include/optimizer/cost.h
+++ b/src/include/optimizer/cost.h
@@ -66,6 +66,7 @@ extern bool enable_nestloop;
extern bool enable_material;
extern bool enable_mergejoin;
extern bool enable_hashjoin;
+extern bool enable_gathermerge;
extern int constraint_exclusion;
extern double clamp_row_est(double nrows);
@@ -198,5 +199,9 @@ extern Selectivity clause_selectivity(PlannerInfo *root,
int varRelid,
JoinType jointype,
SpecialJoinInfo *sjinfo);
+extern void cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+ RelOptInfo *rel, ParamPathInfo *param_info,
+ Cost input_startup_cost, Cost input_total_cost,
+ double *rows);
#endif /* COST_H */
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index 71d9154..1df5861 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -267,5 +267,11 @@ extern ParamPathInfo *get_joinrel_parampathinfo(PlannerInfo *root,
List **restrict_clauses);
extern ParamPathInfo *get_appendrel_parampathinfo(RelOptInfo *appendrel,
Relids required_outer);
+extern GatherMergePath *create_gather_merge_path(PlannerInfo *root,
+ RelOptInfo *rel, Path *subpath,
+ PathTarget *target,
+ List *pathkeys,
+ Relids required_outer,
+ double *rows);
#endif /* PATHNODE_H */
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
index f06cfa4..5c547e2 100644
--- a/src/test/regress/expected/rangefuncs.out
+++ b/src/test/regress/expected/rangefuncs.out
@@ -2,6 +2,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
name | setting
----------------------+---------
enable_bitmapscan | on
+ enable_gathermerge | on
enable_hashagg | on
enable_hashjoin | on
enable_indexonlyscan | on
@@ -12,7 +13,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
enable_seqscan | on
enable_sort | on
enable_tidscan | on
-(11 rows)
+(12 rows)
CREATE TABLE foo2(fooid int, f2 int);
INSERT INTO foo2 VALUES(1, 11);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 6c6d519..a6c4a5f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -770,6 +770,8 @@ GV
Gather
GatherPath
GatherState
+GatherMerge
+GatherMergeState
Gene
GenericCosts
GenericExprState
On Thu, Nov 24, 2016 at 11:12 PM, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:
> PFA latest patch with fix as well as few cosmetic changes.

Moved to next CF with "needs review" status.

Regards,
Hari Babu
Fujitsu Australia

On Sun, Dec 4, 2016 at 7:36 PM, Haribabu Kommi <kommi.haribabu@gmail.com> wrote:
> Moved to next CF with "needs review" status.
I spent quite a bit of time on this patch over the last couple of
days. I was hoping to commit it, but I think it's not quite ready for
that yet and I hit a few other issues along the way. Meanwhile,
here's an updated version with the following changes:
* Adjusted cost_gather_merge because we don't need to worry about less
than 1 worker.
* Don't charge double maintenance cost of the heap per 34ca0905. This
was pointed out previously and Rushabh said it was fixed, but it wasn't
fixed in v5.
* cost_gather_merge claimed to charge a slightly higher IPC cost
because we have to block, but didn't. Fix it so it does.
* Move several hunks to more appropriate places in the file, near
related code or in a more logical position relative to surrounding
code.
* Fixed copyright dates for the new files. One said 2015, one said 2016.
* Removed unnecessary code from create_gather_merge_plan that tried to
handle an empty list of pathkeys (shouldn't happen).
* Make create_gather_merge_plan more consistent with
create_merge_append_plan. Remove make_gather_merge for the same
reason.
* Changed generate_gather_paths to generate gather merge paths. In
the previous coding, only the upper planner nodes ever tried to
generate gather merge nodes, but that seems unnecessarily limiting,
since it could be useful to generate a gathered path with pathkeys at
any point in the tree where we'd generate a gathered path with no
pathkeys.
* Rewrote generate_ordered_paths() logic to consider only the one
potentially-useful path not now covered by the new code in
generate_gather_paths().
* Reverted changes in generate_distinct_paths(). I think we should
add something here but the existing logic definitely isn't right
considering the change to generate_gather_paths().
* Assorted cosmetic cleanup in nodeGatherMerge.c.
* Documented the new GUC enable_gathermerge.
* Improved comments. Dropped one that seemed unnecessary.
* Fixed parts of the patch to be more pgindent-clean.
Testing this against the TPC-H queries at 10GB with
max_parallel_workers_per_gather = 4, seq_page_cost = 0.1,
random_page_cost = 0.1, work_mem = 64MB initially produced somewhat
demoralizing results. Only Q17, Q4, and Q8 picked Gather Merge, and
of those only Q17 got faster. Investigating this led to me realizing
that join costing for parallel joins is all messed up: see
/messages/by-id/CA+TgmoYt2pyk2CTyvYCtFySXN=jsorGh8_MJTTLoWU5qkJOkYQ@mail.gmail.com
With that patch applied, in my testing, Gather Merge got picked for
Q3, Q4, Q5, Q6, Q7, Q8, Q10, and Q17, but a lot of those queries get a
little slower instead of a little faster. Here are the timings --
these are with EXPLAIN ANALYZE, so take them with a grain of salt --
first number is without Gather Merge, second is with Gather Merge:
Q3 16943.938 ms -> 18645.957 ms
Q4 3155.350 ms -> 4179.431 ms
Q5 13611.484 ms -> 13831.946 ms
Q6 9264.942 ms -> 8734.899 ms
Q7 9759.026 ms -> 10007.307 ms
Q8 2473.899 ms -> 2459.225 ms
Q10 13814.950 ms -> 12255.618 ms
Q17 49552.298 ms -> 46633.632 ms
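(In relative terms: Q3 +10.0%, Q4 +32.5%, Q5 +1.6%, Q6 -5.7%, Q7 +2.5%,
Q8 -0.6%, Q10 -11.3%, Q17 -5.9%, where negative means Gather Merge was
faster; so four of the eight queries regress.)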
I haven't really had time to dig into these results yet, so I'm not
sure how "real" these numbers are and how much is run-to-run jitter,
EXPLAIN ANALYZE distortion, or whatever. I think this overall concept
is good, because there should be cases where it's substantially
cheaper to preserve the order while gathering tuples from workers than
to re-sort afterwards. But this particular set of results is a bit
lackluster.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
gather-merge-v6.patch (application/x-download)
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 30dd54c..48d95cd 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -3455,6 +3455,20 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
</listitem>
</varlistentry>
+ <varlistentry id="guc-enable-gathermerge" xreflabel="enable_gathermerge">
+ <term><varname>enable_gathermerge</varname> (<type>boolean</type>)
+ <indexterm>
+ <primary><varname>enable_gathermerge</> configuration parameter</primary>
+ </indexterm>
+ </term>
+ <listitem>
+ <para>
+ Enables or disables the query planner's use of gather
+ merge plan types. The default is <literal>on</>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="guc-enable-hashagg" xreflabel="enable_hashagg">
<term><varname>enable_hashagg</varname> (<type>boolean</type>)
<indexterm>
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index c762fb0..3beb79e 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -881,6 +881,9 @@ ExplainNode(PlanState *planstate, List *ancestors,
case T_Gather:
pname = sname = "Gather";
break;
+ case T_GatherMerge:
+ pname = sname = "Gather Merge";
+ break;
case T_IndexScan:
pname = sname = "Index Scan";
break;
@@ -1370,6 +1373,26 @@ ExplainNode(PlanState *planstate, List *ancestors,
ExplainPropertyBool("Single Copy", gather->single_copy, es);
}
break;
+ case T_GatherMerge:
+ {
+ GatherMerge *gm = (GatherMerge *) plan;
+
+ show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ if (plan->qual)
+ show_instrumentation_count("Rows Removed by Filter", 1,
+ planstate, es);
+ ExplainPropertyInteger("Workers Planned",
+ gm->num_workers, es);
+ if (es->analyze)
+ {
+ int nworkers;
+
+ nworkers = ((GatherMergeState *) planstate)->nworkers_launched;
+ ExplainPropertyInteger("Workers Launched",
+ nworkers, es);
+ }
+ }
+ break;
case T_FunctionScan:
if (es->verbose)
{
diff --git a/src/backend/executor/Makefile b/src/backend/executor/Makefile
index 51edd4c..7e2f4e2 100644
--- a/src/backend/executor/Makefile
+++ b/src/backend/executor/Makefile
@@ -19,7 +19,7 @@ OBJS = execAmi.o execCurrent.o execGrouping.o execIndexing.o execJunk.o \
nodeBitmapAnd.o nodeBitmapOr.o \
nodeBitmapHeapscan.o nodeBitmapIndexscan.o nodeCustom.o nodeGather.o \
nodeHash.o nodeHashjoin.o nodeIndexscan.o nodeIndexonlyscan.o \
- nodeLimit.o nodeLockRows.o \
+ nodeLimit.o nodeLockRows.o nodeGatherMerge.o \
nodeMaterial.o nodeMergeAppend.o nodeMergejoin.o nodeModifyTable.o \
nodeNestloop.o nodeFunctionscan.o nodeRecursiveunion.o nodeResult.o \
nodeSamplescan.o nodeSeqscan.o nodeSetOp.o nodeSort.o nodeUnique.o \
diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index b8edd36..98baaf3 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -101,6 +101,7 @@
#include "executor/nodeModifyTable.h"
#include "executor/nodeNestloop.h"
#include "executor/nodeGather.h"
+#include "executor/nodeGatherMerge.h"
#include "executor/nodeRecursiveunion.h"
#include "executor/nodeResult.h"
#include "executor/nodeSamplescan.h"
@@ -314,6 +315,11 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
estate, eflags);
break;
+ case T_GatherMerge:
+ result = (PlanState *) ExecInitGatherMerge((GatherMerge *) node,
+ estate, eflags);
+ break;
+
case T_Hash:
result = (PlanState *) ExecInitHash((Hash *) node,
estate, eflags);
@@ -515,6 +521,10 @@ ExecProcNode(PlanState *node)
result = ExecGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ result = ExecGatherMerge((GatherMergeState *) node);
+ break;
+
case T_HashState:
result = ExecHash((HashState *) node);
break;
@@ -673,6 +683,10 @@ ExecEndNode(PlanState *node)
ExecEndGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ ExecEndGatherMerge((GatherMergeState *) node);
+ break;
+
case T_IndexScanState:
ExecEndIndexScan((IndexScanState *) node);
break;
@@ -806,6 +820,9 @@ ExecShutdownNode(PlanState *node)
case T_GatherState:
ExecShutdownGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ ExecShutdownGatherMerge((GatherMergeState *) node);
+ break;
default:
break;
}
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
new file mode 100644
index 0000000..00b74b9
--- /dev/null
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -0,0 +1,718 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.c
+ * Scan a plan in multiple workers, and do order-preserving merge.
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/backend/executor/nodeGatherMerge.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/relscan.h"
+#include "access/xact.h"
+#include "executor/execdebug.h"
+#include "executor/execParallel.h"
+#include "executor/nodeGatherMerge.h"
+#include "executor/nodeSubplan.h"
+#include "executor/tqueue.h"
+#include "lib/binaryheap.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+#include "utils/rel.h"
+
+/*
+ * Tuple array for each worker
+ */
+typedef struct GMReaderTupleBuffer
+{
+ HeapTuple *tuple;
+ int readCounter;
+ int nTuples;
+ bool done;
+} GMReaderTupleBuffer;
+
+/*
+ * When we read tuples from workers, it's a good idea to read several at once
+ * for efficiency when possible: this minimizes context-switching overhead.
+ * But reading too many at a time wastes memory without improving performance.
+ */
+#define MAX_TUPLE_STORE 10
+
+static int32 heap_compare_slots(Datum a, Datum b, void *arg);
+static TupleTableSlot *gather_merge_getnext(GatherMergeState *gm_state);
+static HeapTuple gm_readnext_tuple(GatherMergeState *gm_state, int nreader,
+ bool nowait, bool *done);
+static void gather_merge_init(GatherMergeState *gm_state);
+static void ExecShutdownGatherMergeWorkers(GatherMergeState *node);
+static bool gather_merge_readnext(GatherMergeState *gm_state, int reader,
+ bool nowait);
+static void form_tuple_array(GatherMergeState *gm_state, int reader);
+
+/* ----------------------------------------------------------------
+ * ExecInitGatherMerge
+ * ----------------------------------------------------------------
+ */
+GatherMergeState *
+ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags)
+{
+ GatherMergeState *gm_state;
+ Plan *outerNode;
+ bool hasoid;
+ TupleDesc tupDesc;
+
+ /* Gather merge node doesn't have innerPlan node. */
+ Assert(innerPlan(node) == NULL);
+
+ /*
+ * create state structure
+ */
+ gm_state = makeNode(GatherMergeState);
+ gm_state->ps.plan = (Plan *) node;
+ gm_state->ps.state = estate;
+
+ /*
+ * Miscellaneous initialization
+ *
+ * create expression context for node
+ */
+ ExecAssignExprContext(estate, &gm_state->ps);
+
+ /*
+ * initialize child expressions
+ */
+ gm_state->ps.targetlist = (List *)
+ ExecInitExpr((Expr *) node->plan.targetlist,
+ (PlanState *) gm_state);
+ gm_state->ps.qual = (List *)
+ ExecInitExpr((Expr *) node->plan.qual,
+ (PlanState *) gm_state);
+
+ /*
+ * tuple table initialization
+ */
+ ExecInitResultTupleSlot(estate, &gm_state->ps);
+
+ /*
+ * now initialize outer plan
+ */
+ outerNode = outerPlan(node);
+ outerPlanState(gm_state) = ExecInitNode(outerNode, estate, eflags);
+
+ gm_state->ps.ps_TupFromTlist = false;
+
+ /*
+ * Initialize result tuple type and projection info.
+ */
+ ExecAssignResultTypeFromTL(&gm_state->ps);
+ ExecAssignProjectionInfo(&gm_state->ps, NULL);
+
+ gm_state->gm_initialized = false;
+
+ /*
+ * initialize sort-key information
+ */
+ if (node->numCols)
+ {
+ int i;
+
+ gm_state->gm_nkeys = node->numCols;
+ gm_state->gm_sortkeys =
+ palloc0(sizeof(SortSupportData) * node->numCols);
+
+ for (i = 0; i < node->numCols; i++)
+ {
+ SortSupport sortKey = gm_state->gm_sortkeys + i;
+
+ sortKey->ssup_cxt = CurrentMemoryContext;
+ sortKey->ssup_collation = node->collations[i];
+ sortKey->ssup_nulls_first = node->nullsFirst[i];
+ sortKey->ssup_attno = node->sortColIdx[i];
+
+ /*
+ * We don't perform abbreviated key conversion here, for the same
+ * reasons that it isn't used in MergeAppend
+ */
+ sortKey->abbreviate = false;
+
+ PrepareSortSupportFromOrderingOp(node->sortOperators[i], sortKey);
+ }
+ }
+
+ /*
+ * store the tuple descriptor into gather merge state, so we can use it
+ * later while initializing the gather merge slots.
+ */
+ if (!ExecContextForcesOids(&gm_state->ps, &hasoid))
+ hasoid = false;
+ tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid);
+ gm_state->tupDesc = tupDesc;
+
+ return gm_state;
+}
+
+/* ----------------------------------------------------------------
+ * ExecGatherMerge(node)
+ *
+ * Scans the relation via multiple workers and returns
+ * the next qualifying tuple.
+ * ----------------------------------------------------------------
+ */
+TupleTableSlot *
+ExecGatherMerge(GatherMergeState *node)
+{
+ TupleTableSlot *slot;
+ TupleTableSlot *resultSlot;
+ ExprDoneCond isDone;
+ ExprContext *econtext;
+ int i;
+
+ /*
+ * As with Gather, we don't launch workers until this node is actually
+ * executed.
+ */
+ if (!node->initialized)
+ {
+ EState *estate = node->ps.state;
+ GatherMerge *gm = (GatherMerge *) node->ps.plan;
+
+ /*
+ * Sometimes we might have to run without parallelism; but if parallel
+ * mode is active then we can try to fire up some workers.
+ */
+ if (gm->num_workers > 0 && IsInParallelMode())
+ {
+ ParallelContext *pcxt;
+
+ /* Initialize data structures for workers. */
+ if (!node->pei)
+ node->pei = ExecInitParallelPlan(node->ps.lefttree,
+ estate,
+ gm->num_workers);
+
+ /* Try to launch workers. */
+ pcxt = node->pei->pcxt;
+ LaunchParallelWorkers(pcxt);
+ node->nworkers_launched = pcxt->nworkers_launched;
+
+ /* Set up tuple queue readers to read the results. */
+ if (pcxt->nworkers_launched > 0)
+ {
+ node->nreaders = 0;
+ node->reader = palloc(pcxt->nworkers_launched *
+ sizeof(TupleQueueReader *));
+
+ Assert(gm->numCols);
+
+ for (i = 0; i < pcxt->nworkers_launched; ++i)
+ {
+ shm_mq_set_handle(node->pei->tqueue[i],
+ pcxt->worker[i].bgwhandle);
+ node->reader[node->nreaders++] =
+ CreateTupleQueueReader(node->pei->tqueue[i],
+ node->tupDesc);
+ }
+ }
+ else
+ {
+ /* No workers? Then never mind. */
+ ExecShutdownGatherMergeWorkers(node);
+ }
+ }
+
+ /* always allow leader to participate */
+ node->need_to_scan_locally = true;
+ node->initialized = true;
+ }
+
+ /*
+ * Check to see if we're still projecting out tuples from a previous scan
+ * tuple (because there is a function-returning-set in the projection
+ * expressions). If so, try to project another one.
+ */
+ if (node->ps.ps_TupFromTlist)
+ {
+ resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+ if (isDone == ExprMultipleResult)
+ return resultSlot;
+ /* Done with that source tuple... */
+ node->ps.ps_TupFromTlist = false;
+ }
+
+ /*
+ * Reset per-tuple memory context to free any expression evaluation
+ * storage allocated in the previous tuple cycle. Note we can't do this
+ * until we're done projecting.
+ */
+ econtext = node->ps.ps_ExprContext;
+ ResetExprContext(econtext);
+
+ /* Get and return the next tuple, projecting if necessary. */
+ for (;;)
+ {
+ /*
+ * Get next tuple, either from one of our workers, or by running the
+ * plan ourselves.
+ */
+ slot = gather_merge_getnext(node);
+ if (TupIsNull(slot))
+ return NULL;
+
+ /*
+ * form the result tuple using ExecProject(), and return it --- unless
+ * the projection produces an empty set, in which case we must loop
+ * back around for another tuple
+ */
+ econtext->ecxt_outertuple = slot;
+ resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+
+ if (isDone != ExprEndResult)
+ {
+ node->ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
+ return resultSlot;
+ }
+ }
+
+ return slot;
+}
+
+/* ----------------------------------------------------------------
+ * ExecEndGatherMerge
+ *
+ * frees any storage allocated through C routines.
+ * ----------------------------------------------------------------
+ */
+void
+ExecEndGatherMerge(GatherMergeState *node)
+{
+ ExecShutdownGatherMerge(node);
+ ExecFreeExprContext(&node->ps);
+ ExecClearTuple(node->ps.ps_ResultTupleSlot);
+ ExecEndNode(outerPlanState(node));
+}
+
+/* ----------------------------------------------------------------
+ * ExecShutdownGatherMerge
+ *
+ * Destroy the setup for parallel workers including parallel context.
+ * Collect all the stats after workers are stopped, else some work
+ * done by workers won't be accounted for.
+ * ----------------------------------------------------------------
+ */
+void
+ExecShutdownGatherMerge(GatherMergeState *node)
+{
+ ExecShutdownGatherMergeWorkers(node);
+
+ /* Now destroy the parallel context. */
+ if (node->pei != NULL)
+ {
+ ExecParallelCleanup(node->pei);
+ node->pei = NULL;
+ }
+}
+
+/* ----------------------------------------------------------------
+ * ExecShutdownGatherMergeWorkers
+ *
+ * Destroy the parallel workers. Collect all the stats after
+ * workers are stopped, else some work done by workers won't be
+ * accounted for.
+ * ----------------------------------------------------------------
+ */
+static void
+ExecShutdownGatherMergeWorkers(GatherMergeState *node)
+{
+ /* Shut down tuple queue readers before shutting down workers. */
+ if (node->reader != NULL)
+ {
+ int i;
+
+ for (i = 0; i < node->nreaders; ++i)
+ if (node->reader[i])
+ DestroyTupleQueueReader(node->reader[i]);
+
+ pfree(node->reader);
+ node->reader = NULL;
+ }
+
+ /* Now shut down the workers. */
+ if (node->pei != NULL)
+ ExecParallelFinish(node->pei);
+}
+
+/* ----------------------------------------------------------------
+ * ExecReScanGatherMerge
+ *
+ * Re-initialize the workers and rescan the relation via them.
+ * ----------------------------------------------------------------
+ */
+void
+ExecReScanGatherMerge(GatherMergeState *node)
+{
+ /*
+ * Re-initialize the parallel workers to perform a rescan of the relation.
+ * We want to shut down all the workers gracefully so that they can
+ * propagate any error or other information to the master backend before
+ * dying. The parallel context will be reused for the rescan.
+ */
+ ExecShutdownGatherMergeWorkers(node);
+
+ node->initialized = false;
+
+ if (node->pei)
+ ExecParallelReinitialize(node->pei);
+
+ ExecReScan(node->ps.lefttree);
+}
+
+/*
+ * Initialize the Gather merge tuple read.
+ *
+ * Pull at least a single tuple from each worker + leader and set up the heap.
+ */
+static void
+gather_merge_init(GatherMergeState *gm_state)
+{
+ int nreaders = gm_state->nreaders;
+ bool initialize = true;
+ int i;
+
+ /*
+ * Allocate gm_slots: one slot per worker, plus one more for the leader.
+ * The last slot is always the leader's. The leader reads tuples by
+ * calling ExecProcNode(), which returns a TupleTableSlot that is then
+ * assigned to its gm_slot directly, so just initialize the leader's
+ * gm_slot to NULL. The worker slots are initialized below via
+ * ExecInitExtraTupleSlot().
+ */
+ gm_state->gm_slots =
+ palloc((gm_state->nreaders + 1) * sizeof(TupleTableSlot *));
+ gm_state->gm_slots[gm_state->nreaders] = NULL;
+
+ /* Initialize the tuple slot and tuple array for each worker */
+ gm_state->gm_tuple_buffers =
+ (GMReaderTupleBuffer *) palloc0(sizeof(GMReaderTupleBuffer) *
+ (gm_state->nreaders + 1));
+ for (i = 0; i < gm_state->nreaders; i++)
+ {
+ /* Allocate the tuple array with MAX_TUPLE_STORE size */
+ gm_state->gm_tuple_buffers[i].tuple =
+ (HeapTuple *) palloc0(sizeof(HeapTuple) * MAX_TUPLE_STORE);
+
+ /* Initialize slot for worker */
+ gm_state->gm_slots[i] = ExecInitExtraTupleSlot(gm_state->ps.state);
+ ExecSetSlotDescriptor(gm_state->gm_slots[i],
+ gm_state->tupDesc);
+ }
+
+ /* Allocate the resources for the merge */
+ gm_state->gm_heap = binaryheap_allocate(gm_state->nreaders + 1,
+ heap_compare_slots,
+ gm_state);
+
+ /*
+ * First, try to read a tuple from each worker (including the leader) in
+ * nowait mode, so that we start reading from every participant. After
+ * this, if any active worker has not yet produced a tuple, re-read from
+ * it, this time in wait mode. For workers that produced a tuple in the
+ * earlier loop and are still active, just try to fill the tuple array
+ * if more tuples are available.
+ */
+reread:
+ for (i = 0; i < nreaders + 1; i++)
+ {
+ if (!gm_state->gm_tuple_buffers[i].done &&
+ (TupIsNull(gm_state->gm_slots[i]) ||
+ gm_state->gm_slots[i]->tts_isempty))
+ {
+ if (gather_merge_readnext(gm_state, i, initialize))
+ {
+ binaryheap_add_unordered(gm_state->gm_heap,
+ Int32GetDatum(i));
+ }
+ }
+ else
+ form_tuple_array(gm_state, i);
+ }
+ initialize = false;
+
+ for (i = 0; i < nreaders; i++)
+ if (!gm_state->gm_tuple_buffers[i].done &&
+ (TupIsNull(gm_state->gm_slots[i]) ||
+ gm_state->gm_slots[i]->tts_isempty))
+ goto reread;
+
+ binaryheap_build(gm_state->gm_heap);
+ gm_state->gm_initialized = true;
+}
+
+/*
+ * Clear out the tuple table slot for each gather merge reader, and
+ * return a cleared slot.
+ */
+static TupleTableSlot *
+gather_merge_clear_slots(GatherMergeState *gm_state)
+{
+ int i;
+
+ for (i = 0; i < gm_state->nreaders; i++)
+ {
+ pfree(gm_state->gm_tuple_buffers[i].tuple);
+ gm_state->gm_slots[i] = ExecClearTuple(gm_state->gm_slots[i]);
+ }
+
+ /* Free tuple array as we don't need it any more */
+ pfree(gm_state->gm_tuple_buffers);
+ /* Free the binaryheap, which was created for sort */
+ binaryheap_free(gm_state->gm_heap);
+
+ /* return any clear slot */
+ return gm_state->gm_slots[0];
+}
+
+/*
+ * Read the next tuple for gather merge.
+ *
+ * Fetch the sorted tuple out of the heap.
+ */
+static TupleTableSlot *
+gather_merge_getnext(GatherMergeState *gm_state)
+{
+ int i;
+
+ /*
+ * First time through: pull the first tuple from each participant, and set
+ * up the heap.
+ */
+ if (gm_state->gm_initialized == false)
+ gather_merge_init(gm_state);
+ else
+ {
+ /*
+ * Otherwise, pull the next tuple from whichever participant we
+ * returned from last time, and reinsert the index into the heap,
+ * because it might now compare differently against the existing
+ * elements of the heap.
+ */
+ i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+
+ if (gather_merge_readnext(gm_state, i, false))
+ binaryheap_replace_first(gm_state->gm_heap, Int32GetDatum(i));
+ else
+ (void) binaryheap_remove_first(gm_state->gm_heap);
+ }
+
+ if (binaryheap_empty(gm_state->gm_heap))
+ {
+ /* All the queues are exhausted, and so is the heap */
+ return gather_merge_clear_slots(gm_state);
+ }
+ else
+ {
+ i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+ return gm_state->gm_slots[i];
+ }
+
+ return gather_merge_clear_slots(gm_state);
+}
+
+/*
+ * Read tuples for the given reader in nowait mode, and fill the tuple array.
+ */
+static void
+form_tuple_array(GatherMergeState *gm_state, int reader)
+{
+ GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+ int i;
+
+ /* The last slot is for the leader; we don't build a tuple array for it */
+ if (reader == gm_state->nreaders)
+ return;
+
+ /*
+ * We're here because all the tuples in the tuple array have already
+ * been read, so reset the counters to zero.
+ */
+ if (tuple_buffer->nTuples == tuple_buffer->readCounter)
+ tuple_buffer->nTuples = tuple_buffer->readCounter = 0;
+
+ /* Tuple array is already full? */
+ if (tuple_buffer->nTuples == MAX_TUPLE_STORE)
+ return;
+
+ for (i = tuple_buffer->nTuples; i < MAX_TUPLE_STORE; i++)
+ {
+ tuple_buffer->tuple[i] = heap_copytuple(gm_readnext_tuple(gm_state,
+ reader,
+ false,
+ &tuple_buffer->done));
+ if (!HeapTupleIsValid(tuple_buffer->tuple[i]))
+ break;
+ tuple_buffer->nTuples++;
+ }
+}
+
+/*
+ * Store the next tuple for a given reader into the appropriate slot.
+ *
+ * Returns false if the reader is exhausted, and true otherwise.
+ */
+static bool
+gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait)
+{
+ GMReaderTupleBuffer *tuple_buffer;
+ HeapTuple tup = NULL;
+
+ /*
+ * If we're being asked to generate a tuple from the leader, then we
+ * just call ExecProcNode as normal to produce one.
+ */
+ if (gm_state->nreaders == reader)
+ {
+ if (gm_state->need_to_scan_locally)
+ {
+ PlanState *outerPlan = outerPlanState(gm_state);
+ TupleTableSlot *outerTupleSlot;
+
+ outerTupleSlot = ExecProcNode(outerPlan);
+
+ if (!TupIsNull(outerTupleSlot))
+ {
+ gm_state->gm_slots[reader] = outerTupleSlot;
+ return true;
+ }
+ gm_state->gm_tuple_buffers[reader].done = true;
+ gm_state->need_to_scan_locally = false;
+ }
+ return false;
+ }
+
+ /* Otherwise, check the state of the relevant tuple buffer. */
+ tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+
+ if (tuple_buffer->nTuples > tuple_buffer->readCounter)
+ {
+ /* Return any tuple previously read that is still buffered. */
+ tup = tuple_buffer->tuple[tuple_buffer->readCounter++];
+ }
+ else if (tuple_buffer->done)
+ {
+ /* Reader is known to be exhausted. */
+ DestroyTupleQueueReader(gm_state->reader[reader]);
+ gm_state->reader[reader] = NULL;
+ return false;
+ }
+ else
+ {
+ /* Read and buffer next tuple. */
+ tup = heap_copytuple(gm_readnext_tuple(gm_state,
+ reader,
+ nowait,
+ &tuple_buffer->done));
+
+ /*
+ * Attempt to read more tuples in nowait mode and store them in
+ * the tuple array.
+ */
+ if (HeapTupleIsValid(tup))
+ form_tuple_array(gm_state, reader);
+ else
+ return false;
+ }
+
+ Assert(HeapTupleIsValid(tup));
+
+ /* Build the TupleTableSlot for the given tuple */
+ ExecStoreTuple(tup, /* tuple to store */
+ gm_state->gm_slots[reader], /* slot in which to store the
+ * tuple */
+ InvalidBuffer, /* buffer associated with this tuple */
+ true); /* pfree this pointer if not from heap */
+
+ return true;
+}
+
+/*
+ * Attempt to read a tuple from the given reader.
+ */
+static HeapTuple
+gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait,
+ bool *done)
+{
+ TupleQueueReader *reader;
+ HeapTuple tup = NULL;
+ MemoryContext oldContext;
+ MemoryContext tupleContext;
+
+ tupleContext = gm_state->ps.ps_ExprContext->ecxt_per_tuple_memory;
+
+ if (done != NULL)
+ *done = false;
+
+ /* Check for async events, particularly messages from workers. */
+ CHECK_FOR_INTERRUPTS();
+
+ /* Attempt to read a tuple. */
+ reader = gm_state->reader[nreader];
+
+ /* Run TupleQueueReaders in per-tuple context */
+ oldContext = MemoryContextSwitchTo(tupleContext);
+ tup = TupleQueueReaderNext(reader, nowait, done);
+ MemoryContextSwitchTo(oldContext);
+
+ return tup;
+}
+
+/*
+ * We have one slot for each item in the heap array. We use SlotNumber
+ * to store slot indexes. This doesn't actually provide any formal
+ * type-safety, but it makes the code more self-documenting.
+ */
+typedef int32 SlotNumber;
+
+/*
+ * Compare the tuples in the two given slots.
+ */
+static int32
+heap_compare_slots(Datum a, Datum b, void *arg)
+{
+ GatherMergeState *node = (GatherMergeState *) arg;
+ SlotNumber slot1 = DatumGetInt32(a);
+ SlotNumber slot2 = DatumGetInt32(b);
+
+ TupleTableSlot *s1 = node->gm_slots[slot1];
+ TupleTableSlot *s2 = node->gm_slots[slot2];
+ int nkey;
+
+ Assert(!TupIsNull(s1));
+ Assert(!TupIsNull(s2));
+
+ for (nkey = 0; nkey < node->gm_nkeys; nkey++)
+ {
+ SortSupport sortKey = node->gm_sortkeys + nkey;
+ AttrNumber attno = sortKey->ssup_attno;
+ Datum datum1,
+ datum2;
+ bool isNull1,
+ isNull2;
+ int compare;
+
+ datum1 = slot_getattr(s1, attno, &isNull1);
+ datum2 = slot_getattr(s2, attno, &isNull2);
+
+ compare = ApplySortComparator(datum1, isNull1,
+ datum2, isNull2,
+ sortKey);
+ if (compare != 0)
+ return -compare;
+ }
+ return 0;
+}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 930f2f1..bd009c4 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -341,6 +341,31 @@ _copyGather(const Gather *from)
return newnode;
}
+/*
+ * _copyGatherMerge
+ */
+static GatherMerge *
+_copyGatherMerge(const GatherMerge *from)
+{
+ GatherMerge *newnode = makeNode(GatherMerge);
+
+ /*
+ * copy node superclass fields
+ */
+ CopyPlanFields((const Plan *) from, (Plan *) newnode);
+
+ /*
+ * copy remainder of node
+ */
+ COPY_SCALAR_FIELD(num_workers);
+ COPY_SCALAR_FIELD(numCols);
+ COPY_POINTER_FIELD(sortColIdx, from->numCols * sizeof(AttrNumber));
+ COPY_POINTER_FIELD(sortOperators, from->numCols * sizeof(Oid));
+ COPY_POINTER_FIELD(collations, from->numCols * sizeof(Oid));
+ COPY_POINTER_FIELD(nullsFirst, from->numCols * sizeof(bool));
+
+ return newnode;
+}
/*
* CopyScanFields
@@ -4421,6 +4446,9 @@ copyObject(const void *from)
case T_Gather:
retval = _copyGather(from);
break;
+ case T_GatherMerge:
+ retval = _copyGatherMerge(from);
+ break;
case T_SeqScan:
retval = _copySeqScan(from);
break;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 806d0a9..c648bed 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -447,6 +447,35 @@ _outGather(StringInfo str, const Gather *node)
}
static void
+_outGatherMerge(StringInfo str, const GatherMerge *node)
+{
+ int i;
+
+ WRITE_NODE_TYPE("GATHERMERGE");
+
+ _outPlanInfo(str, (const Plan *) node);
+
+ WRITE_INT_FIELD(num_workers);
+ WRITE_INT_FIELD(numCols);
+
+ appendStringInfoString(str, " :sortColIdx");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %d", node->sortColIdx[i]);
+
+ appendStringInfoString(str, " :sortOperators");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %u", node->sortOperators[i]);
+
+ appendStringInfoString(str, " :collations");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %u", node->collations[i]);
+
+ appendStringInfoString(str, " :nullsFirst");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %s", booltostr(node->nullsFirst[i]));
+}
+
+static void
_outScan(StringInfo str, const Scan *node)
{
WRITE_NODE_TYPE("SCAN");
@@ -1964,6 +1993,17 @@ _outLimitPath(StringInfo str, const LimitPath *node)
}
static void
+_outGatherMergePath(StringInfo str, const GatherMergePath *node)
+{
+ WRITE_NODE_TYPE("GATHERMERGEPATH");
+
+ _outPathInfo(str, (const Path *) node);
+
+ WRITE_NODE_FIELD(subpath);
+ WRITE_INT_FIELD(num_workers);
+}
+
+static void
_outNestPath(StringInfo str, const NestPath *node)
{
WRITE_NODE_TYPE("NESTPATH");
@@ -3377,6 +3417,9 @@ outNode(StringInfo str, const void *obj)
case T_Gather:
_outGather(str, obj);
break;
+ case T_GatherMerge:
+ _outGatherMerge(str, obj);
+ break;
case T_Scan:
_outScan(str, obj);
break;
@@ -3704,6 +3747,9 @@ outNode(StringInfo str, const void *obj)
case T_LimitPath:
_outLimitPath(str, obj);
break;
+ case T_GatherMergePath:
+ _outGatherMergePath(str, obj);
+ break;
case T_NestPath:
_outNestPath(str, obj);
break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index dc40d01..20797f0 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -2077,6 +2077,26 @@ _readGather(void)
}
/*
+ * _readGatherMerge
+ */
+static GatherMerge *
+_readGatherMerge(void)
+{
+ READ_LOCALS(GatherMerge);
+
+ ReadCommonPlan(&local_node->plan);
+
+ READ_INT_FIELD(num_workers);
+ READ_INT_FIELD(numCols);
+ READ_ATTRNUMBER_ARRAY(sortColIdx, local_node->numCols);
+ READ_OID_ARRAY(sortOperators, local_node->numCols);
+ READ_OID_ARRAY(collations, local_node->numCols);
+ READ_BOOL_ARRAY(nullsFirst, local_node->numCols);
+
+ READ_DONE();
+}
+
+/*
* _readHash
*/
static Hash *
@@ -2509,6 +2529,8 @@ parseNodeString(void)
return_value = _readUnique();
else if (MATCH("GATHER", 6))
return_value = _readGather();
+ else if (MATCH("GATHERMERGE", 11))
+ return_value = _readGatherMerge();
else if (MATCH("HASH", 4))
return_value = _readHash();
else if (MATCH("SETOP", 5))
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index 46d7d06..d909ba4 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -1999,39 +1999,51 @@ set_worktable_pathlist(PlannerInfo *root, RelOptInfo *rel, RangeTblEntry *rte)
/*
* generate_gather_paths
- * Generate parallel access paths for a relation by pushing a Gather on
- * top of a partial path.
+ * Generate parallel access paths for a relation by pushing a Gather or
+ * Gather Merge on top of a partial path.
*
* This must not be called until after we're done creating all partial paths
* for the specified relation. (Otherwise, add_partial_path might delete a
- * path that some GatherPath has a reference to.)
+ * path that some GatherPath or GatherMergePath has a reference to.)
*/
void
generate_gather_paths(PlannerInfo *root, RelOptInfo *rel)
{
Path *cheapest_partial_path;
Path *simple_gather_path;
+ ListCell *lc;
/* If there are no partial paths, there's nothing to do here. */
if (rel->partial_pathlist == NIL)
return;
/*
- * The output of Gather is currently always unsorted, so there's only one
- * partial path of interest: the cheapest one. That will be the one at
- * the front of partial_pathlist because of the way add_partial_path
- * works.
- *
- * Eventually, we should have a Gather Merge operation that can merge
- * multiple tuple streams together while preserving their ordering. We
- * could usefully generate such a path from each partial path that has
- * non-NIL pathkeys.
+ * The output of Gather is always unsorted, so there's only one partial
+ * path of interest: the cheapest one. That will be the one at the front
+ * of partial_pathlist because of the way add_partial_path works.
*/
cheapest_partial_path = linitial(rel->partial_pathlist);
simple_gather_path = (Path *)
create_gather_path(root, rel, cheapest_partial_path, rel->reltarget,
NULL, NULL);
add_path(rel, simple_gather_path);
+
+ /*
+ * For each useful ordering, we can consider an order-preserving Gather
+ * Merge.
+ */
+ foreach(lc, rel->partial_pathlist)
+ {
+ Path *subpath = (Path *) lfirst(lc);
+ GatherMergePath *path;
+
+ if (subpath->pathkeys == NIL)
+ continue;
+
+ path = create_gather_merge_path(root, rel, subpath, rel->reltarget,
+ subpath->pathkeys, NULL, NULL);
+ add_path(rel, &path->path);
+ }
}
/*
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index a52eb7e..dfc3b78 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -126,6 +126,7 @@ bool enable_nestloop = true;
bool enable_material = true;
bool enable_mergejoin = true;
bool enable_hashjoin = true;
+bool enable_gathermerge = true;
typedef struct
{
@@ -391,6 +392,73 @@ cost_gather(GatherPath *path, PlannerInfo *root,
}
/*
+ * cost_gather_merge
+ * Determines and returns the cost of a gather merge path.
+ *
+ * GatherMerge merges several pre-sorted input streams, using a heap that at
+ * any given instant holds the next tuple from each stream. If there are N
+ * streams, we need about N*log2(N) tuple comparisons to construct the heap at
+ * startup, and then for each output tuple, about log2(N) comparisons to
+ * replace the top heap entry with the next tuple from the same stream.
+ */
+void
+cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+ RelOptInfo *rel, ParamPathInfo *param_info,
+ Cost input_startup_cost, Cost input_total_cost,
+ double *rows)
+{
+ Cost startup_cost = 0;
+ Cost run_cost = 0;
+ Cost comparison_cost;
+ double N;
+ double logN;
+
+ /* Mark the path with the correct row estimate */
+ if (rows)
+ path->path.rows = *rows;
+ else if (param_info)
+ path->path.rows = param_info->ppi_rows;
+ else
+ path->path.rows = rel->rows;
+
+ if (!enable_gathermerge)
+ startup_cost += disable_cost;
+
+ /*
+ * Add one to the number of workers to account for the leader. This might
+ * be overgenerous since the leader will do less work than other workers
+ * in typical cases, but we'll go with it for now.
+ */
+ Assert(path->num_workers > 0);
+ N = (double) path->num_workers + 1;
+ logN = LOG2(N);
+
+ /* Assumed cost per tuple comparison */
+ comparison_cost = 2.0 * cpu_operator_cost;
+
+ /* Heap creation cost */
+ startup_cost += comparison_cost * N * logN;
+
+ /* Per-tuple heap maintenance cost */
+ run_cost += path->path.rows * comparison_cost * logN;
+
+ /* small cost for heap management, like cost_merge_append */
+ run_cost += cpu_operator_cost * path->path.rows;
+
+ /*
+ * Parallel setup and communication cost. Since Gather Merge, unlike
+ * Gather, requires us to block until a tuple is available from every
+ * worker, we bump the IPC cost up a little bit as compared with Gather.
+ * For lack of a better idea, charge an extra 5%.
+ */
+ startup_cost += parallel_setup_cost;
+ run_cost += parallel_tuple_cost * path->path.rows * 1.05;
+
+ path->path.startup_cost = startup_cost + input_startup_cost;
+ path->path.total_cost = (startup_cost + run_cost + input_total_cost);
+}
+
+/*
* cost_index
* Determines and returns the cost of scanning a relation using an index.
*
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index c7bcd9b..5dec091 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -270,6 +270,8 @@ static ModifyTable *make_modifytable(PlannerInfo *root,
List *resultRelations, List *subplans,
List *withCheckOptionLists, List *returningLists,
List *rowMarks, OnConflictExpr *onconflict, int epqParam);
+static GatherMerge *create_gather_merge_plan(PlannerInfo *root,
+ GatherMergePath *best_path);
/*
@@ -463,6 +465,10 @@ create_plan_recurse(PlannerInfo *root, Path *best_path, int flags)
(LimitPath *) best_path,
flags);
break;
+ case T_GatherMerge:
+ plan = (Plan *) create_gather_merge_plan(root,
+ (GatherMergePath *) best_path);
+ break;
default:
elog(ERROR, "unrecognized node type: %d",
(int) best_path->pathtype);
@@ -1408,6 +1414,86 @@ create_gather_plan(PlannerInfo *root, GatherPath *best_path)
}
/*
+ * create_gather_merge_plan
+ *
+ * Create a Gather Merge plan for 'best_path' and (recursively)
+ * plans for its subpaths.
+ */
+static GatherMerge *
+create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
+{
+ GatherMerge *gm_plan;
+ Plan *subplan;
+ List *pathkeys = best_path->path.pathkeys;
+ int numsortkeys;
+ AttrNumber *sortColIdx;
+ Oid *sortOperators;
+ Oid *collations;
+ bool *nullsFirst;
+
+ /* As with Gather, it's best to project away columns in the workers. */
+ subplan = create_plan_recurse(root, best_path->subpath, CP_EXACT_TLIST);
+
+ /* See create_merge_append_plan for why there's no make_xxx function */
+ gm_plan = makeNode(GatherMerge);
+ gm_plan->plan.targetlist = subplan->targetlist;
+ gm_plan->num_workers = best_path->num_workers;
+ copy_generic_path_info(&gm_plan->plan, &best_path->path);
+
+ /* Gather Merge is pointless with no pathkeys; use Gather instead. */
+ Assert(pathkeys != NIL);
+
+ /* Compute sort column info, and adjust GatherMerge tlist as needed */
+ (void) prepare_sort_from_pathkeys(&gm_plan->plan, pathkeys,
+ best_path->path.parent->relids,
+ NULL,
+ true,
+ &gm_plan->numCols,
+ &gm_plan->sortColIdx,
+ &gm_plan->sortOperators,
+ &gm_plan->collations,
+ &gm_plan->nullsFirst);
+
+ /* Compute sort column info, and adjust subplan's tlist as needed */
+ subplan = prepare_sort_from_pathkeys(subplan, pathkeys,
+ best_path->subpath->parent->relids,
+ gm_plan->sortColIdx,
+ false,
+ &numsortkeys,
+ &sortColIdx,
+ &sortOperators,
+ &collations,
+ &nullsFirst);
+
+ /* As for MergeAppend, check that we got the same sort key information. */
+ Assert(numsortkeys == gm_plan->numCols);
+ if (memcmp(sortColIdx, gm_plan->sortColIdx,
+ numsortkeys * sizeof(AttrNumber)) != 0)
+ elog(ERROR, "GatherMerge child's targetlist doesn't match GatherMerge");
+ Assert(memcmp(sortOperators, gm_plan->sortOperators,
+ numsortkeys * sizeof(Oid)) == 0);
+ Assert(memcmp(collations, gm_plan->collations,
+ numsortkeys * sizeof(Oid)) == 0);
+ Assert(memcmp(nullsFirst, gm_plan->nullsFirst,
+ numsortkeys * sizeof(bool)) == 0);
+
+ /* Now, insert a Sort node if subplan isn't sufficiently ordered */
+ if (!pathkeys_contained_in(pathkeys, best_path->subpath->pathkeys))
+ subplan = (Plan *) make_sort(subplan, numsortkeys,
+ sortColIdx, sortOperators,
+ collations, nullsFirst);
+
+ /* Now insert the subplan under GatherMerge. */
+ gm_plan->plan.lefttree = subplan;
+
+ /* use parallel mode for parallel plans. */
+ root->glob->parallelModeNeeded = true;
+
+ return gm_plan;
+}
+
+/*
* create_projection_plan
*
* Create a plan tree to do a projection step and (recursively) plans
@@ -2246,7 +2332,6 @@ create_limit_plan(PlannerInfo *root, LimitPath *best_path, int flags)
return plan;
}
-
/*****************************************************************************
*
* BASE-RELATION SCAN METHODS
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 207290f..fdcee75 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -3722,8 +3722,7 @@ create_grouping_paths(PlannerInfo *root,
/*
* Now generate a complete GroupAgg Path atop of the cheapest partial
- * path. We need only bother with the cheapest path here, as the
- * output of Gather is never sorted.
+ * path. We can do this using either Gather or Gather Merge.
*/
if (grouped_rel->partial_pathlist)
{
@@ -3770,6 +3769,70 @@ create_grouping_paths(PlannerInfo *root,
parse->groupClause,
(List *) parse->havingQual,
dNumGroups));
+
+ /*
+ * The point of using Gather Merge rather than Gather is that it
+ * can preserve the ordering of the input path, so there's no
+ * reason to try it unless (1) it's possible to produce more than
+ * one output row and (2) we want the output path to be ordered.
+ */
+ if (parse->groupClause != NIL && root->group_pathkeys != NIL)
+ {
+ foreach(lc, grouped_rel->partial_pathlist)
+ {
+ Path *subpath = (Path *) lfirst(lc);
+ Path *gmpath;
+ double total_groups;
+
+ /*
+ * It's useful to consider paths that are already properly
+ * ordered for Gather Merge, because those don't need a
+ * sort. It's also useful to consider the cheapest path,
+ * because sorting it in parallel and then doing Gather
+ * Merge may be better than doing an unordered Gather
+ * followed by a sort. But there's no point in
+ * considering non-cheapest paths that aren't already
+ * sorted correctly.
+ */
+ if (path != subpath &&
+ !pathkeys_contained_in(root->group_pathkeys,
+ subpath->pathkeys))
+ continue;
+
+ total_groups = subpath->rows * subpath->parallel_workers;
+
+ gmpath = (Path *)
+ create_gather_merge_path(root,
+ grouped_rel,
+ subpath,
+ NULL,
+ root->group_pathkeys,
+ NULL,
+ &total_groups);
+
+ if (parse->hasAggs)
+ add_path(grouped_rel, (Path *)
+ create_agg_path(root,
+ grouped_rel,
+ gmpath,
+ target,
+ parse->groupClause ? AGG_SORTED : AGG_PLAIN,
+ AGGSPLIT_FINAL_DESERIAL,
+ parse->groupClause,
+ (List *) parse->havingQual,
+ &agg_final_costs,
+ dNumGroups));
+ else
+ add_path(grouped_rel, (Path *)
+ create_group_path(root,
+ grouped_rel,
+ gmpath,
+ target,
+ parse->groupClause,
+ (List *) parse->havingQual,
+ dNumGroups));
+ }
+ }
}
}
@@ -3867,6 +3930,16 @@ create_grouping_paths(PlannerInfo *root,
/* Now choose the best path(s) */
set_cheapest(grouped_rel);
+ /*
+ * We've been using the partial pathlist for the grouped relation to hold
+ * partially aggregated paths, but that's actually a little bit bogus
+ * because it's unsafe for later planning stages -- like ordered_rel ---
+ * to get the idea that they can use these partial paths as if they didn't
+ * need a FinalizeAggregate step. Zap the partial pathlist at this stage
+ * so we don't get confused.
+ */
+ grouped_rel->partial_pathlist = NIL;
+
return grouped_rel;
}
@@ -4336,6 +4409,50 @@ create_ordered_paths(PlannerInfo *root,
}
/*
+ * generate_gather_paths() will have already generated a simple Gather
+ * path for the best parallel path, if any, and the loop above will have
+ * considered sorting it. Similarly, generate_gather_paths() will also
+ * have generated order-preserving Gather Merge plans which can be used
+ * without sorting if they happen to match the sort_pathkeys, and the loop
+ * above will have handled those as well. However, there's one more
+ * possibility: it may make sense to sort the cheapest partial path
+ * according to the required output order and then use Gather Merge.
+ */
+ if (ordered_rel->consider_parallel && root->sort_pathkeys != NIL &&
+ input_rel->partial_pathlist != NIL)
+ {
+ Path *cheapest_partial_path;
+
+ cheapest_partial_path = linitial(input_rel->partial_pathlist);
+
+ /*
+ * If cheapest partial path doesn't need a sort, this is redundant
+ * with what's already been tried.
+ */
+ if (!pathkeys_contained_in(root->sort_pathkeys,
+ cheapest_partial_path->pathkeys))
+ {
+ Path *path;
+ double total_groups;
+
+ total_groups = cheapest_partial_path->rows *
+ cheapest_partial_path->parallel_workers;
+ path = (Path *)
+ create_gather_merge_path(root, ordered_rel,
+ cheapest_partial_path,
+ target, root->sort_pathkeys, NULL,
+ &total_groups);
+
+ /* Add projection step if needed */
+ if (path->pathtarget != target)
+ path = apply_projection_to_path(root, ordered_rel,
+ path, target);
+
+ add_path(ordered_rel, path);
+ }
+ }
+
+ /*
* If there is an FDW that's responsible for all baserels of the query,
* let it consider adding ForeignPaths.
*/
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index 413a0d9..0e15fbf 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -604,6 +604,7 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
break;
case T_Gather:
+ case T_GatherMerge:
set_upper_references(root, plan, rtoffset);
break;
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index aad0b68..76aee75 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2685,6 +2685,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
case T_Sort:
case T_Unique:
case T_Gather:
+ case T_GatherMerge:
case T_SetOp:
case T_Group:
break;
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index 3b7c56d..27b5e52 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1630,6 +1630,66 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
}
/*
+ * create_gather_merge_path
+ *
+ * Creates a path corresponding to a gather merge scan, returning
+ * the pathnode.
+ */
+GatherMergePath *
+create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
+ PathTarget *target, List *pathkeys,
+ Relids required_outer, double *rows)
+{
+ GatherMergePath *pathnode = makeNode(GatherMergePath);
+ Cost input_startup_cost = 0;
+ Cost input_total_cost = 0;
+
+ Assert(subpath->parallel_safe);
+ Assert(pathkeys);
+
+ pathnode->path.pathtype = T_GatherMerge;
+ pathnode->path.parent = rel;
+ pathnode->path.param_info = get_baserel_parampathinfo(root, rel,
+ required_outer);
+ pathnode->path.parallel_aware = false;
+
+ pathnode->subpath = subpath;
+ pathnode->num_workers = subpath->parallel_workers;
+ pathnode->path.pathkeys = pathkeys;
+ pathnode->path.pathtarget = target ? target : rel->reltarget;
+ pathnode->path.rows += subpath->rows;
+
+ if (pathkeys_contained_in(pathkeys, subpath->pathkeys))
+ {
+ /* Subpath is adequately ordered, we won't need to sort it */
+ input_startup_cost += subpath->startup_cost;
+ input_total_cost += subpath->total_cost;
+ }
+ else
+ {
+ /* We'll need to insert a Sort node, so include cost for that */
+ Path sort_path; /* dummy for result of cost_sort */
+
+ cost_sort(&sort_path,
+ root,
+ pathkeys,
+ subpath->total_cost,
+ subpath->rows,
+ subpath->pathtarget->width,
+ 0.0,
+ work_mem,
+ -1);
+ input_startup_cost += sort_path.startup_cost;
+ input_total_cost += sort_path.total_cost;
+ }
+
+ cost_gather_merge(pathnode, root, rel, pathnode->path.param_info,
+ input_startup_cost, input_total_cost, rows);
+
+ return pathnode;
+}
+
+/*
* translate_sub_tlist - get subquery column numbers represented by tlist
*
* The given targetlist usually contains only Vars referencing the given relid.
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 5b23dbf..9d8b8b0 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -893,6 +893,15 @@ static struct config_bool ConfigureNamesBool[] =
true,
NULL, NULL, NULL
},
+ {
+ {"enable_gathermerge", PGC_USERSET, QUERY_TUNING_METHOD,
+ gettext_noop("Enables the planner's use of gather merge plans."),
+ NULL
+ },
+ &enable_gathermerge,
+ true,
+ NULL, NULL, NULL
+ },
{
{"geqo", PGC_USERSET, QUERY_TUNING_GEQO,
diff --git a/src/include/executor/nodeGatherMerge.h b/src/include/executor/nodeGatherMerge.h
new file mode 100644
index 0000000..3c8b42b
--- /dev/null
+++ b/src/include/executor/nodeGatherMerge.h
@@ -0,0 +1,27 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.h
+ * prototypes for nodeGatherMerge.c
+ *
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/executor/nodeGatherMerge.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef NODEGATHERMERGE_H
+#define NODEGATHERMERGE_H
+
+#include "nodes/execnodes.h"
+
+extern GatherMergeState *ExecInitGatherMerge(GatherMerge *node,
+ EState *estate,
+ int eflags);
+extern TupleTableSlot *ExecGatherMerge(GatherMergeState *node);
+extern void ExecEndGatherMerge(GatherMergeState *node);
+extern void ExecReScanGatherMerge(GatherMergeState *node);
+extern void ExecShutdownGatherMerge(GatherMergeState *node);
+
+#endif /* NODEGATHERMERGE_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index ce13bf7..7c2e0c2 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -1997,6 +1997,35 @@ typedef struct GatherState
} GatherState;
/* ----------------
+ * GatherMergeState information
+ *
+ * Gather merge nodes launch 1 or more parallel workers, run a
+ * subplan which produces sorted output in each worker, and then
+ * merge the results into a single sorted stream.
+ * ----------------
+ */
+struct GMReaderTuple;
+
+typedef struct GatherMergeState
+{
+ PlanState ps; /* its first field is NodeTag */
+ bool initialized;
+ struct ParallelExecutorInfo *pei;
+ int nreaders;
+ int nworkers_launched;
+ struct TupleQueueReader **reader;
+ TupleDesc tupDesc;
+ TupleTableSlot **gm_slots;
+ struct binaryheap *gm_heap; /* binary heap of slot indices */
+ bool gm_initialized; /* gather merge initialized? */
+ bool need_to_scan_locally;
+ int gm_nkeys;
+ SortSupport gm_sortkeys; /* array of length gm_nkeys */
+ struct GMReaderTupleBuffer *gm_tuple_buffers; /* tuple buffer per
+ * reader */
+} GatherMergeState;
+
+/* ----------------
* HashState information
* ----------------
*/
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index a1bb0ac..3df7603 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -75,6 +75,7 @@ typedef enum NodeTag
T_WindowAgg,
T_Unique,
T_Gather,
+ T_GatherMerge,
T_Hash,
T_SetOp,
T_LockRows,
@@ -123,6 +124,7 @@ typedef enum NodeTag
T_WindowAggState,
T_UniqueState,
T_GatherState,
+ T_GatherMergeState,
T_HashState,
T_SetOpState,
T_LockRowsState,
@@ -244,6 +246,7 @@ typedef enum NodeTag
T_MaterialPath,
T_UniquePath,
T_GatherPath,
+ T_GatherMergePath,
T_ProjectionPath,
T_SortPath,
T_GroupPath,
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index 692a626..0022a5b 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -765,6 +765,22 @@ typedef struct Gather
bool invisible; /* suppress EXPLAIN display (for testing)? */
} Gather;
+/* ------------
+ * gather merge node
+ * ------------
+ */
+typedef struct GatherMerge
+{
+ Plan plan;
+ int num_workers;
+ /* remaining fields are just like the sort-key info in struct Sort */
+ int numCols; /* number of sort-key columns */
+ AttrNumber *sortColIdx; /* their indexes in the target list */
+ Oid *sortOperators; /* OIDs of operators to sort them by */
+ Oid *collations; /* OIDs of collations */
+ bool *nullsFirst; /* NULLS FIRST/LAST directions */
+} GatherMerge;
+
/* ----------------
* hash build node
*
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index e1d31c7..ea0ed32 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -1194,6 +1194,19 @@ typedef struct GatherPath
} GatherPath;
/*
+ * GatherMergePath runs several copies of a plan in parallel and collects
+ * the results, preserving their common sort order. With Gather Merge, the
+ * parallel leader always executes the plan itself as well.
+ */
+typedef struct GatherMergePath
+{
+ Path path;
+ Path *subpath; /* path for each worker */
+ int num_workers; /* number of workers sought to help */
+} GatherMergePath;
+
+
+/*
* All join-type paths share these fields.
*/
diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h
index 39376ec..7ceb4ca 100644
--- a/src/include/optimizer/cost.h
+++ b/src/include/optimizer/cost.h
@@ -66,6 +66,7 @@ extern bool enable_nestloop;
extern bool enable_material;
extern bool enable_mergejoin;
extern bool enable_hashjoin;
+extern bool enable_gathermerge;
extern int constraint_exclusion;
extern double clamp_row_est(double nrows);
@@ -198,5 +199,9 @@ extern Selectivity clause_selectivity(PlannerInfo *root,
int varRelid,
JoinType jointype,
SpecialJoinInfo *sjinfo);
+extern void cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+ RelOptInfo *rel, ParamPathInfo *param_info,
+ Cost input_startup_cost, Cost input_total_cost,
+ double *rows);
#endif /* COST_H */
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index d16f879..63d2de9 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -76,6 +76,13 @@ extern UniquePath *create_unique_path(PlannerInfo *root, RelOptInfo *rel,
extern GatherPath *create_gather_path(PlannerInfo *root,
RelOptInfo *rel, Path *subpath, PathTarget *target,
Relids required_outer, double *rows);
+extern GatherMergePath *create_gather_merge_path(PlannerInfo *root,
+ RelOptInfo *rel,
+ Path *subpath,
+ PathTarget *target,
+ List *pathkeys,
+ Relids required_outer,
+ double *rows);
extern SubqueryScanPath *create_subqueryscan_path(PlannerInfo *root,
RelOptInfo *rel, Path *subpath,
List *pathkeys, Relids required_outer);
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
index f06cfa4..5c547e2 100644
--- a/src/test/regress/expected/rangefuncs.out
+++ b/src/test/regress/expected/rangefuncs.out
@@ -2,6 +2,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
name | setting
----------------------+---------
enable_bitmapscan | on
+ enable_gathermerge | on
enable_hashagg | on
enable_hashjoin | on
enable_indexonlyscan | on
@@ -12,7 +13,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
enable_seqscan | on
enable_sort | on
enable_tidscan | on
-(11 rows)
+(12 rows)
CREATE TABLE foo2(fooid int, f2 int);
INSERT INTO foo2 VALUES(1, 11);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 993880d..5633386 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -777,6 +777,9 @@ GV
Gather
GatherPath
GatherState
+GatherMerge
+GatherMergePath
+GatherMergeState
Gene
GenericCosts
GenericExprState
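As a reading aid, here is a tiny standalone sketch (mine, not part of the
patch; the stream struct and the int arrays are invented for illustration)
of the heap-based k-way merge that gather_merge_init() and
gather_merge_getnext() perform over the tuple queues. The real code merges
TupleTableSlots via binaryheap.c, heap_compare_slots() and SortSupport
comparators rather than plain ints:

#include <stdio.h>

#define NSTREAMS 3

typedef struct Stream { const int *vals; int pos; int len; } Stream;

/* heap of stream indexes, ordered by each stream's current value */
static int heap[NSTREAMS];
static int heap_size;
static Stream streams[NSTREAMS];

static int cur(int s) { return streams[s].vals[streams[s].pos]; }

static void sift_down(int i)
{
    for (;;)
    {
        int smallest = i, l = 2 * i + 1, r = 2 * i + 2;

        if (l < heap_size && cur(heap[l]) < cur(heap[smallest])) smallest = l;
        if (r < heap_size && cur(heap[r]) < cur(heap[smallest])) smallest = r;
        if (smallest == i) return;
        int tmp = heap[i]; heap[i] = heap[smallest]; heap[smallest] = tmp;
        i = smallest;
    }
}

int main(void)
{
    /* three pre-sorted inputs, standing in for per-worker sorted streams */
    static const int a[] = {1, 4, 7}, b[] = {2, 5, 8}, c[] = {3, 6, 9};

    streams[0] = (Stream) {a, 0, 3};
    streams[1] = (Stream) {b, 0, 3};
    streams[2] = (Stream) {c, 0, 3};

    /* build the heap over one current tuple per participant, as in
     * gather_merge_init() */
    for (int i = 0; i < NSTREAMS; i++) heap[i] = i;
    heap_size = NSTREAMS;
    for (int i = heap_size / 2 - 1; i >= 0; i--) sift_down(i);

    /* as in gather_merge_getnext(): emit the top, advance that stream,
     * re-heapify; drop a stream once it is exhausted */
    while (heap_size > 0)
    {
        int s = heap[0];

        printf("%d ", cur(s));
        if (++streams[s].pos == streams[s].len)
            heap[0] = heap[--heap_size];
        sift_down(0);
    }
    putchar('\n');  /* prints 1 2 3 4 5 6 7 8 9 */
    return 0;
}

The shape matches the executor code: the heap is built once over one tuple
per participant, then each output tuple costs one replace-top-and-sift,
which is where the log2(N) per-tuple term in cost_gather_merge() comes from.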
On Thu, Jan 12, 2017 at 8:50 AM, Robert Haas <robertmhaas@gmail.com> wrote:
On Sun, Dec 4, 2016 at 7:36 PM, Haribabu Kommi <kommi.haribabu@gmail.com> wrote:
On Thu, Nov 24, 2016 at 11:12 PM, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:
PFA latest patch with a fix as well as a few cosmetic changes.
Moved to next CF with "needs review" status.
I spent quite a bit of time on this patch over the last couple of
days. I was hoping to commit it, but I think it's not quite ready for
that yet and I hit a few other issues along the way. Meanwhile,
here's an updated version with the following changes:
* Adjusted cost_gather_merge because we don't need to worry about less
than 1 worker.
* Don't charge double maintenance cost of the heap per 34ca0905. This
was pointed out previously and Rushabh said it was fixed, but it wasn't
fixed in v5.
* cost_gather_merge claimed to charge a slightly higher IPC cost
because we have to block, but didn't. Fix it so it does.
* Move several hunks to more appropriate places in the file, near
related code or in a more logical position relative to surrounding
code.
* Fixed copyright dates for the new files. One said 2015, one said 2016.
* Removed unnecessary code from create_gather_merge_plan that tried to
handle an empty list of pathkeys (shouldn't happen).
* Make create_gather_merge_plan more consistent with
create_merge_append_plan. Remove make_gather_merge for the same
reason.
* Changed generate_gather_paths to generate gather merge paths. In
the previous coding, only the upper planner nodes ever tried to
generate gather merge nodes, but that seems unnecessarily limiting,
since it could be useful to generate a gathered path with pathkeys at
any point in the tree where we'd generate a gathered path with no
pathkeys.
* Rewrote generate_ordered_paths() logic to consider only the one
potentially-useful path not now covered by the new code in
generate_gather_paths().
* Reverted changes in generate_distinct_paths(). I think we should
add something here but the existing logic definitely isn't right
considering the change to generate_gather_paths().
* Assorted cosmetic cleanup in nodeGatherMerge.c.
* Documented the new GUC enable_gathermerge.
* Improved comments. Dropped one that seemed unnecessary.
* Fixed parts of the patch to be more pgindent-clean.
Thanks Robert for hacking into this.
Testing this against the TPC-H queries at 10GB with
max_parallel_workers_per_gather = 4, seq_page_cost = 0.1,
random_page_cost = 0.1, work_mem = 64MB initially produced somewhat
demoralizing results. Only Q17, Q4, and Q8 picked Gather Merge, and
of those only Q17 got faster. Investigating this led to me realizing
that join costing for parallel joins is all messed up: see
/messages/by-id/CA+TgmoYt2pyk2CTyvYCtFySXN=jsorGh8_MJTTLoWU5qkJOkYQ@mail.gmail.com
With that patch applied, in my testing, Gather Merge got picked for
Q3, Q4, Q5, Q6, Q7, Q8, Q10, and Q17, but a lot of those queries get a
little slower instead of a little faster. Here are the timings --
these are with EXPLAIN ANALYZE, so take them with a grain of salt --
first number is without Gather Merge, second is with Gather Merge:
Q3 16943.938 ms -> 18645.957 ms
Q4 3155.350 ms -> 4179.431 ms
Q5 13611.484 ms -> 13831.946 ms
Q6 9264.942 ms -> 8734.899 ms
Q7 9759.026 ms -> 10007.307 ms
Q8 2473.899 ms -> 2459.225 ms
Q10 13814.950 ms -> 12255.618 ms
Q17 49552.298 ms -> 46633.632 ms
This is strange, I will re-run the test again and post the results soon.
I haven't really had time to dig into these results yet, so I'm not
sure how "real" these numbers are and how much is run-to-run jitter,
EXPLAIN ANALYZE distortion, or whatever. I think this overall concept
is good, because there should be cases where it's substantially
cheaper to preserve the order while gathering tuples from workers than
to re-sort afterwards. But this particular set of results is a bit
lackluster.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
--
Rushabh Lathia
On Fri, Jan 13, 2017 at 10:52 AM, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:
This is strange, I will re-run the test again and post the results soon.
Here are the benchmark numbers I got with the latest (v6) patch:
- max_worker_processes = DEFAULT (8)
- max_parallel_workers_per_gather = 4
- Cold cache environment is ensured: before every query execution the server
is stopped and OS caches are dropped.
- The reported execution times (in ms) are the median of 3 executions.
- power2 machine with 512GB of RAM
- With default postgres.conf
Timing with v6 patch on REL9_6_STABLE branch
(last commit: 8a70d8ae7501141d283e56b31e10c66697c986d5).
Query 3: 49888.300 -> 45914.426
Query 4: 8531.939 -> 7790.498
Query 5: 40668.920 -> 38403.658
Query 9: 90922.825 -> 50277.646
Query 10: 45776.445 -> 39189.086
Query 12: 21644.593 -> 21180.402
Query 15: 63889.061 -> 62027.351
Query 17: 142208.463 -> 118704.424
Query 18: 244051.155 -> 186498.456
Query 20: 212046.605 -> 159360.520
Timing with v6 patch on master branch:
(last commit: 0777f7a2e8e0a51f0f60cfe164d538bb459bf9f2)
Query 3: 45261.722 -> 43499.739
Query 4: 7444.630 -> 6363.999
Query 5: 37146.458 -> 37081.952
Query 9: 88874.243 -> 50232.088
Query 10: 43583.133 -> 38118.195
Query 12: 19918.149 -> 20414.114
Query 15: 62554.860 -> 61039.048
Query 17: 131369.235 -> 111587.287
Query 18: 246162.686 -> 195434.292
Query 20: 201221.952 -> 169093.834
Looking at these results, it seems like the patch is good to go ahead.
I did notice that in your TPC-H run, queries 9, 18 and 20 were unable to
pick the gather merge plan, and those are the queries that benefit the most
from gather merge. Another observation: if work_mem is set high, then some
queries end up picking Hash Aggregate even though gather merge performs
better (I manually tested that by forcing gather merge).
I am still looking into this issue.
Thanks,
--
Rushabh Lathia
www.EnterpriseDB.com
On Tue, Jan 17, 2017 at 5:19 PM, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:
Another observation: if work_mem is set high, then some queries end up
picking Hash Aggregate even though gather merge performs better (I manually
tested that by forcing gather merge). I am still looking into this issue.
I am able to reproduce the issue with a smaller case, where gather merge
is not getting picked against hash aggregate.
Consider the following cases:
Testcase setup:
1) ./db/bin/pgbench postgres -i -F 100 -s 20
2) update pgbench_accounts set filler = 'foo' where aid%10 = 0;
Example:
postgres=# show shared_buffers ;
shared_buffers
----------------
1GB
(1 row)
postgres=# show work_mem ;
work_mem
----------
64MB
(1 row)
1) Case 1:
postgres=# explain analyze select aid, sum(abalance) from pgbench_accounts
where filler like '%foo%' group by aid;
QUERY
PLAN
------------------------------------------------------------
----------------------------------------------------------------------
HashAggregate (cost=62081.49..64108.32 rows=202683 width=12) (actual
time=1017.802..1079.324 rows=200000 loops=1)
Group Key: aid
-> Seq Scan on pgbench_accounts (cost=0.00..61068.07 rows=202683
width=8) (actual time=738.439..803.310 rows=200000 loops=1)
Filter: (filler ~~ '%foo%'::text)
Rows Removed by Filter: 1800000
Planning time: 0.189 ms
Execution time: 1094.933 ms
(7 rows)
2) Case 2:
postgres=# set enable_hashagg = off;
SET
postgres=# set enable_gathermerge = off;
SET
postgres=# explain analyze select aid, sum(abalance) from pgbench_accounts
where filler like '%foo%' group by aid;
QUERY
PLAN
------------------------------------------------------------
----------------------------------------------------------------------------
GroupAggregate (cost=78933.43..82480.38 rows=202683 width=12) (actual
time=980.983..1097.461 rows=200000 loops=1)
Group Key: aid
-> Sort (cost=78933.43..79440.14 rows=202683 width=8) (actual
time=980.975..1006.891 rows=200000 loops=1)
Sort Key: aid
Sort Method: quicksort Memory: 17082kB
-> Seq Scan on pgbench_accounts (cost=0.00..61068.07 rows=202683
width=8) (actual time=797.553..867.359 rows=200000 loops=1)
Filter: (filler ~~ '%foo%'::text)
Rows Removed by Filter: 1800000
Planning time: 0.152 ms
Execution time: 1111.742 ms
(10 rows)
3) Case 3:
postgres=# set enable_hashagg = off;
SET
postgres=# set enable_gathermerge = on;
SET
postgres=# explain analyze select aid, sum(abalance) from pgbench_accounts
where filler like '%foo%' group by aid;
QUERY PLAN
------------------------------------------------------------
------------------------------------------------------------
-----------------------------------
Finalize GroupAggregate (cost=47276.23..76684.51 rows=202683 width=12)
(actual time=287.383..542.064 rows=200000 loops=1)
Group Key: aid
-> Gather Merge (cost=47276.23..73644.26 rows=202684 width=0) (actual
time=287.375..441.698 rows=200000 loops=1)
Workers Planned: 4
Workers Launched: 4
-> Partial GroupAggregate (cost=46276.17..47162.91 rows=50671
width=12) (actual time=278.801..305.772 rows=40000 loops=5)
Group Key: aid
-> Sort (cost=46276.17..46402.85 rows=50671 width=8)
(actual time=278.792..285.111 rows=40000 loops=5)
Sort Key: aid
Sort Method: quicksort Memory: 9841kB
-> Parallel Seq Scan on pgbench_accounts
(cost=0.00..42316.52 rows=50671 width=8) (actual time=206.602..223.203
rows=40000 loops=5)
Filter: (filler ~~ '%foo%'::text)
Rows Removed by Filter: 360000
Planning time: 0.251 ms
Execution time: 553.569 ms
(15 rows)
Now, in the above case we can clearly see that Gather Merge performs way
better, but the planner still chooses HashAggregate because the cost of
HashAggregate is lower than that of Gather Merge.
Another observation is that HashAggregate (case 1) performs better than
GroupAggregate (case 2), but the difference doesn't come close to
justifying the cost gap between the two.
-- Cost difference
postgres=# select (82480.38 - 64108.32)/64108.32;
?column?
------------------------
0.28657840355198825987
(1 row)
-- Execution time
postgres=# select (1111.742 - 1094.933) / 1094.933;
?column?
------------------------
0.01535162425463475847
(1 row)
The problem might be in the HashAggregate costing, or something else. I am
still looking into this problem.
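For a side-by-side view, the two ratios above can also be computed in a
single query (a sketch using only the numbers quoted from the plans):

SELECT (82480.38 - 64108.32) / 64108.32 AS cost_gap,  -- ~0.29: planner penalty
       (1111.742 - 1094.933) / 1094.933 AS time_gap;  -- ~0.015: measured slowdown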
--
Rushabh Lathia
www.EnterpriseDB.com
On Tue, Jan 17, 2017 at 5:19 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:
Here are the benchmark numbers which I got with the latest (v6) patch:
- max_worker_processes = DEFAULT (8)
- max_parallel_workers_per_gather = 4
- Cold cache environment is ensured. With every query execution - server is
stopped and also OS caches were dropped.
- The reported values of execution time (in ms) are the median of 3 executions.
- power2 machine with 512GB of RAM
- With default postgres.conf
You haven't mentioned the scale factor used in these tests.
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
On Tue, Jan 17, 2017 at 4:26 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:
Another observation is that HashAggregate (case 1) performs better than
GroupAggregate (case 2), but the difference doesn't come close to
justifying the cost gap between the two.
It may not be the only issue, or even the main issue, but I'm fairly
suspicious of the fact that cost_sort() doesn't distinguish between
the comparison cost of text and int4, for example.
--
Peter Geoghegan
On Tue, Jan 17, 2017 at 6:44 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:
On Tue, Jan 17, 2017 at 5:19 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:
Here are the benchmark numbers which I got with the latest (v6) patch:
- max_worker_processes = DEFAULT (8)
- max_parallel_workers_per_gather = 4
- Cold cache environment is ensured. With every query execution - server is
stopped and also OS caches were dropped.
- The reported values of execution time (in ms) are the median of 3 executions.
- power2 machine with 512GB of RAM
- With default postgres.conf
You haven't mentioned the scale factor used in these tests.
Oops sorry. Those results are for scale factor 10.
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
--
Rushabh Lathia
On Wed, Jan 18, 2017 at 11:31 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:
The patch needs a rebase after the commit 69f4b9c85f168ae006929eec4.
--
Thanks & Regards,
Kuntal Ghosh
EnterpriseDB: http://www.enterprisedb.com
On Mon, Jan 23, 2017 at 6:51 PM, Kuntal Ghosh
<kuntalghosh.2007@gmail.com> wrote:
On Wed, Jan 18, 2017 at 11:31 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:
The patch needs a rebase after the commit 69f4b9c85f168ae006929eec4.
Is an update going to be provided? I have moved this patch to the next CF
with "waiting on author" as its status.
--
Michael
I am sorry for the delay; here is the latest rebased patch.
My colleague Neha Sharma reported one regression with the patch, where the
EXPLAIN output for the Sort node under Gather Merge was always showing the
cost as zero:
explain analyze select '' AS "xxx" from pgbench_accounts where filler like
'%foo%' order by aid;
QUERY
PLAN
------------------------------------------------------------------------------------------------------------------------------------------------
Gather Merge (cost=47169.81..70839.91 rows=197688 width=36) (actual
time=406.297..653.572 rows=200000 loops=1)
Workers Planned: 4
Workers Launched: 4
-> Sort (*cost=0.00..0.00 rows=0 width=0*) (actual
time=368.945..391.124 rows=40000 loops=5)
Sort Key: aid
Sort Method: quicksort Memory: 3423kB
-> Parallel Seq Scan on pgbench_accounts (cost=0.00..42316.60
rows=49422 width=36) (actual time=296.612..338.873 rows=40000 loops=5)
Filter: (filler ~~ '%foo%'::text)
Rows Removed by Filter: 360000
Planning time: 0.184 ms
Execution time: 734.963 ms
This patch also fixes that issue.
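To double-check the fix, re-running the same EXPLAIN ANALYZE with the v7
patch applied should show real cost estimates on the Sort node instead of
cost=0.00..0.00 (a sketch of the check; output not reproduced here):

explain analyze select '' AS "xxx" from pgbench_accounts
where filler like '%foo%' order by aid;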
On Wed, Feb 1, 2017 at 11:27 AM, Michael Paquier <michael.paquier@gmail.com>
wrote:
On Mon, Jan 23, 2017 at 6:51 PM, Kuntal Ghosh
<kuntalghosh.2007@gmail.com> wrote:
On Wed, Jan 18, 2017 at 11:31 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:
The patch needs a rebase after the commit 69f4b9c85f168ae006929eec4.
Is an update going to be provided? I have moved this patch to the next CF
with "waiting on author" as its status.
--
Michael
--
Rushabh Lathia
Attachment: gather-merge-v7.patch
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index fb5d647..6959b51 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -3496,6 +3496,20 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
</listitem>
</varlistentry>
+ <varlistentry id="guc-enable-gathermerge" xreflabel="enable_gathermerge">
+ <term><varname>enable_gathermerge</varname> (<type>boolean</type>)
+ <indexterm>
+ <primary><varname>enable_gathermerge</> configuration parameter</primary>
+ </indexterm>
+ </term>
+ <listitem>
+ <para>
+ Enables or disables the query planner's use of gather
+ merge plan types. The default is <literal>on</>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="guc-enable-hashagg" xreflabel="enable_hashagg">
<term><varname>enable_hashagg</varname> (<type>boolean</type>)
<indexterm>
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index f9fb276..570b26e 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -908,6 +908,9 @@ ExplainNode(PlanState *planstate, List *ancestors,
case T_Gather:
pname = sname = "Gather";
break;
+ case T_GatherMerge:
+ pname = sname = "Gather Merge";
+ break;
case T_IndexScan:
pname = sname = "Index Scan";
break;
@@ -1397,6 +1400,26 @@ ExplainNode(PlanState *planstate, List *ancestors,
ExplainPropertyBool("Single Copy", gather->single_copy, es);
}
break;
+ case T_GatherMerge:
+ {
+ GatherMerge *gm = (GatherMerge *) plan;
+
+ show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ if (plan->qual)
+ show_instrumentation_count("Rows Removed by Filter", 1,
+ planstate, es);
+ ExplainPropertyInteger("Workers Planned",
+ gm->num_workers, es);
+ if (es->analyze)
+ {
+ int nworkers;
+
+ nworkers = ((GatherMergeState *) planstate)->nworkers_launched;
+ ExplainPropertyInteger("Workers Launched",
+ nworkers, es);
+ }
+ }
+ break;
case T_FunctionScan:
if (es->verbose)
{
diff --git a/src/backend/executor/Makefile b/src/backend/executor/Makefile
index 2a2b7eb..c95747e 100644
--- a/src/backend/executor/Makefile
+++ b/src/backend/executor/Makefile
@@ -20,7 +20,7 @@ OBJS = execAmi.o execCurrent.o execGrouping.o execIndexing.o execJunk.o \
nodeBitmapHeapscan.o nodeBitmapIndexscan.o \
nodeCustom.o nodeFunctionscan.o nodeGather.o \
nodeHash.o nodeHashjoin.o nodeIndexscan.o nodeIndexonlyscan.o \
- nodeLimit.o nodeLockRows.o \
+ nodeLimit.o nodeLockRows.o nodeGatherMerge.o \
nodeMaterial.o nodeMergeAppend.o nodeMergejoin.o nodeModifyTable.o \
nodeNestloop.o nodeProjectSet.o nodeRecursiveunion.o nodeResult.o \
nodeSamplescan.o nodeSeqscan.o nodeSetOp.o nodeSort.o nodeUnique.o \
diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index 0dd95c6..f00496b 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -89,6 +89,7 @@
#include "executor/nodeForeignscan.h"
#include "executor/nodeFunctionscan.h"
#include "executor/nodeGather.h"
+#include "executor/nodeGatherMerge.h"
#include "executor/nodeGroup.h"
#include "executor/nodeHash.h"
#include "executor/nodeHashjoin.h"
@@ -320,6 +321,11 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
estate, eflags);
break;
+ case T_GatherMerge:
+ result = (PlanState *) ExecInitGatherMerge((GatherMerge *) node,
+ estate, eflags);
+ break;
+
case T_Hash:
result = (PlanState *) ExecInitHash((Hash *) node,
estate, eflags);
@@ -525,6 +531,10 @@ ExecProcNode(PlanState *node)
result = ExecGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ result = ExecGatherMerge((GatherMergeState *) node);
+ break;
+
case T_HashState:
result = ExecHash((HashState *) node);
break;
@@ -687,6 +697,10 @@ ExecEndNode(PlanState *node)
ExecEndGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ ExecEndGatherMerge((GatherMergeState *) node);
+ break;
+
case T_IndexScanState:
ExecEndIndexScan((IndexScanState *) node);
break;
@@ -820,6 +834,9 @@ ExecShutdownNode(PlanState *node)
case T_GatherState:
ExecShutdownGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ ExecShutdownGatherMerge((GatherMergeState *) node);
+ break;
default:
break;
}
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
new file mode 100644
index 0000000..84c1677
--- /dev/null
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -0,0 +1,687 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.c
+ * Scan a plan in multiple workers, and do order-preserving merge.
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/backend/executor/nodeGatherMerge.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/relscan.h"
+#include "access/xact.h"
+#include "executor/execdebug.h"
+#include "executor/execParallel.h"
+#include "executor/nodeGatherMerge.h"
+#include "executor/nodeSubplan.h"
+#include "executor/tqueue.h"
+#include "lib/binaryheap.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+#include "utils/rel.h"
+
+/*
+ * Tuple array for each worker
+ */
+typedef struct GMReaderTupleBuffer
+{
+ HeapTuple *tuple;
+ int readCounter;
+ int nTuples;
+ bool done;
+} GMReaderTupleBuffer;
+
+/*
+ * When we read tuples from workers, it's a good idea to read several at once
+ * for efficiency when possible: this minimizes context-switching overhead.
+ * But reading too many at a time wastes memory without improving performance.
+ */
+#define MAX_TUPLE_STORE 10
+
+static int32 heap_compare_slots(Datum a, Datum b, void *arg);
+static TupleTableSlot *gather_merge_getnext(GatherMergeState *gm_state);
+static HeapTuple gm_readnext_tuple(GatherMergeState *gm_state, int nreader,
+ bool nowait, bool *done);
+static void gather_merge_init(GatherMergeState *gm_state);
+static void ExecShutdownGatherMergeWorkers(GatherMergeState *node);
+static bool gather_merge_readnext(GatherMergeState *gm_state, int reader,
+ bool nowait);
+static void form_tuple_array(GatherMergeState *gm_state, int reader);
+
+/* ----------------------------------------------------------------
+ * ExecInitGatherMerge
+ * ----------------------------------------------------------------
+ */
+GatherMergeState *
+ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags)
+{
+ GatherMergeState *gm_state;
+ Plan *outerNode;
+ bool hasoid;
+ TupleDesc tupDesc;
+
+ /* Gather merge node doesn't have innerPlan node. */
+ Assert(innerPlan(node) == NULL);
+
+ /*
+ * create state structure
+ */
+ gm_state = makeNode(GatherMergeState);
+ gm_state->ps.plan = (Plan *) node;
+ gm_state->ps.state = estate;
+
+ /*
+ * Miscellaneous initialization
+ *
+ * create expression context for node
+ */
+ ExecAssignExprContext(estate, &gm_state->ps);
+
+ /*
+ * initialize child expressions
+ */
+ gm_state->ps.targetlist = (List *)
+ ExecInitExpr((Expr *) node->plan.targetlist,
+ (PlanState *) gm_state);
+ gm_state->ps.qual = (List *)
+ ExecInitExpr((Expr *) node->plan.qual,
+ (PlanState *) gm_state);
+
+ /*
+ * tuple table initialization
+ */
+ ExecInitResultTupleSlot(estate, &gm_state->ps);
+
+ /*
+ * now initialize outer plan
+ */
+ outerNode = outerPlan(node);
+ outerPlanState(gm_state) = ExecInitNode(outerNode, estate, eflags);
+
+ /*
+ * Initialize result tuple type and projection info.
+ */
+ ExecAssignResultTypeFromTL(&gm_state->ps);
+ ExecAssignProjectionInfo(&gm_state->ps, NULL);
+
+ gm_state->gm_initialized = false;
+
+ /*
+ * initialize sort-key information
+ */
+ if (node->numCols)
+ {
+ int i;
+
+ gm_state->gm_nkeys = node->numCols;
+ gm_state->gm_sortkeys =
+ palloc0(sizeof(SortSupportData) * node->numCols);
+
+ for (i = 0; i < node->numCols; i++)
+ {
+ SortSupport sortKey = gm_state->gm_sortkeys + i;
+
+ sortKey->ssup_cxt = CurrentMemoryContext;
+ sortKey->ssup_collation = node->collations[i];
+ sortKey->ssup_nulls_first = node->nullsFirst[i];
+ sortKey->ssup_attno = node->sortColIdx[i];
+
+ /*
+ * We don't perform abbreviated key conversion here, for the same
+ * reasons that it isn't used in MergeAppend
+ */
+ sortKey->abbreviate = false;
+
+ PrepareSortSupportFromOrderingOp(node->sortOperators[i], sortKey);
+ }
+ }
+
+ /*
+ * store the tuple descriptor into gather merge state, so we can use it
+ * later while initializing the gather merge slots.
+ */
+ if (!ExecContextForcesOids(&gm_state->ps, &hasoid))
+ hasoid = false;
+ tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid);
+ gm_state->tupDesc = tupDesc;
+
+ return gm_state;
+}
+
+/* ----------------------------------------------------------------
+ * ExecGatherMerge(node)
+ *
+ * Scans the relation via multiple workers and returns
+ * the next qualifying tuple.
+ * ----------------------------------------------------------------
+ */
+TupleTableSlot *
+ExecGatherMerge(GatherMergeState *node)
+{
+ TupleTableSlot *slot;
+ ExprContext *econtext;
+ int i;
+
+ /*
+ * As with Gather, we don't launch workers until this node is actually
+ * executed.
+ */
+ if (!node->initialized)
+ {
+ EState *estate = node->ps.state;
+ GatherMerge *gm = (GatherMerge *) node->ps.plan;
+
+ /*
+ * Sometimes we might have to run without parallelism; but if parallel
+ * mode is active then we can try to fire up some workers.
+ */
+ if (gm->num_workers > 0 && IsInParallelMode())
+ {
+ ParallelContext *pcxt;
+
+ /* Initialize data structures for workers. */
+ if (!node->pei)
+ node->pei = ExecInitParallelPlan(node->ps.lefttree,
+ estate,
+ gm->num_workers);
+
+ /* Try to launch workers. */
+ pcxt = node->pei->pcxt;
+ LaunchParallelWorkers(pcxt);
+ node->nworkers_launched = pcxt->nworkers_launched;
+
+ /* Set up tuple queue readers to read the results. */
+ if (pcxt->nworkers_launched > 0)
+ {
+ node->nreaders = 0;
+ node->reader = palloc(pcxt->nworkers_launched *
+ sizeof(TupleQueueReader *));
+
+ Assert(gm->numCols);
+
+ for (i = 0; i < pcxt->nworkers_launched; ++i)
+ {
+ shm_mq_set_handle(node->pei->tqueue[i],
+ pcxt->worker[i].bgwhandle);
+ node->reader[node->nreaders++] =
+ CreateTupleQueueReader(node->pei->tqueue[i],
+ node->tupDesc);
+ }
+ }
+ else
+ {
+ /* No workers? Then never mind. */
+ ExecShutdownGatherMergeWorkers(node);
+ }
+ }
+
+ /* always allow leader to participate */
+ node->need_to_scan_locally = true;
+ node->initialized = true;
+ }
+
+ /*
+ * Reset per-tuple memory context to free any expression evaluation
+ * storage allocated in the previous tuple cycle.
+ */
+ econtext = node->ps.ps_ExprContext;
+ ResetExprContext(econtext);
+
+ /*
+ * Get next tuple, either from one of our workers, or by running the
+ * plan ourselves.
+ */
+ slot = gather_merge_getnext(node);
+ if (TupIsNull(slot))
+ return NULL;
+
+ /*
+ * form the result tuple using ExecProject(), and return it --- unless
+ * the projection produces an empty set, in which case we must loop
+ * back around for another tuple
+ */
+ econtext->ecxt_outertuple = slot;
+ return ExecProject(node->ps.ps_ProjInfo);
+}
+
+/* ----------------------------------------------------------------
+ * ExecEndGatherMerge
+ *
+ * frees any storage allocated through C routines.
+ * ----------------------------------------------------------------
+ */
+void
+ExecEndGatherMerge(GatherMergeState *node)
+{
+ ExecShutdownGatherMerge(node);
+ ExecFreeExprContext(&node->ps);
+ ExecClearTuple(node->ps.ps_ResultTupleSlot);
+ ExecEndNode(outerPlanState(node));
+}
+
+/* ----------------------------------------------------------------
+ * ExecShutdownGatherMerge
+ *
+ * Destroy the setup for parallel workers including parallel context.
+ * Collect all the stats after workers are stopped, else some work
+ * done by workers won't be accounted.
+ * ----------------------------------------------------------------
+ */
+void
+ExecShutdownGatherMerge(GatherMergeState *node)
+{
+ ExecShutdownGatherMergeWorkers(node);
+
+ /* Now destroy the parallel context. */
+ if (node->pei != NULL)
+ {
+ ExecParallelCleanup(node->pei);
+ node->pei = NULL;
+ }
+}
+
+/* ----------------------------------------------------------------
+ * ExecShutdownGatherMergeWorkers
+ *
+ * Destroy the parallel workers. Collect all the stats after
+ * workers are stopped, else some work done by workers won't be
+ * accounted.
+ * ----------------------------------------------------------------
+ */
+static void
+ExecShutdownGatherMergeWorkers(GatherMergeState *node)
+{
+ /* Shut down tuple queue readers before shutting down workers. */
+ if (node->reader != NULL)
+ {
+ int i;
+
+ for (i = 0; i < node->nreaders; ++i)
+ if (node->reader[i])
+ DestroyTupleQueueReader(node->reader[i]);
+
+ pfree(node->reader);
+ node->reader = NULL;
+ }
+
+ /* Now shut down the workers. */
+ if (node->pei != NULL)
+ ExecParallelFinish(node->pei);
+}
+
+/* ----------------------------------------------------------------
+ * ExecReScanGatherMerge
+ *
+ * Re-initialize the workers and rescans a relation via them.
+ * ----------------------------------------------------------------
+ */
+void
+ExecReScanGatherMerge(GatherMergeState *node)
+{
+ /*
+ * Re-initialize the parallel workers to perform rescan of relation. We
+ * want to gracefully shutdown all the workers so that they should be able
+ * to propagate any error or other information to master backend before
+ * dying. Parallel context will be reused for rescan.
+ */
+ ExecShutdownGatherMergeWorkers(node);
+
+ node->initialized = false;
+
+ if (node->pei)
+ ExecParallelReinitialize(node->pei);
+
+ ExecReScan(node->ps.lefttree);
+}
+
+/*
+ * Initialize the Gather merge tuple read.
+ *
+ * Pull at least a single tuple from each worker + leader and set up the heap.
+ */
+static void
+gather_merge_init(GatherMergeState *gm_state)
+{
+ int nreaders = gm_state->nreaders;
+ bool initialize = true;
+ int i;
+
+ /*
+ * Allocate gm_slots for the number of workers + one more slot for the leader.
+ * Last slot is always for leader. Leader always calls ExecProcNode() to
+ * read the tuple which will return the TupleTableSlot. Later it will
+ * directly get assigned to gm_slot. So just initialize leader gm_slot
+ * with NULL. For other slots below code will call
+ * ExecInitExtraTupleSlot() which will do the initialization of worker
+ * slots.
+ */
+ gm_state->gm_slots =
+ palloc((gm_state->nreaders + 1) * sizeof(TupleTableSlot *));
+ gm_state->gm_slots[gm_state->nreaders] = NULL;
+
+ /* Initialize the tuple slot and tuple array for each worker */
+ gm_state->gm_tuple_buffers =
+ (GMReaderTupleBuffer *) palloc0(sizeof(GMReaderTupleBuffer) *
+ (gm_state->nreaders + 1));
+ for (i = 0; i < gm_state->nreaders; i++)
+ {
+ /* Allocate the tuple array with MAX_TUPLE_STORE size */
+ gm_state->gm_tuple_buffers[i].tuple =
+ (HeapTuple *) palloc0(sizeof(HeapTuple) * MAX_TUPLE_STORE);
+
+ /* Initialize slot for worker */
+ gm_state->gm_slots[i] = ExecInitExtraTupleSlot(gm_state->ps.state);
+ ExecSetSlotDescriptor(gm_state->gm_slots[i],
+ gm_state->tupDesc);
+ }
+
+ /* Allocate the resources for the merge */
+ gm_state->gm_heap = binaryheap_allocate(gm_state->nreaders + 1,
+ heap_compare_slots,
+ gm_state);
+
+ /*
+ * First, try to read a tuple from each worker (including leader) in
+ * nowait mode, so that we initialize read from each worker as well as
+ * leader. After this, if all active workers are unable to produce a
+ * tuple, then re-read and this time use wait mode. For workers that were
+ * able to produce a tuple in the earlier loop and are still active, just
+ * try to fill the tuple array if more tuples are available.
+ */
+reread:
+ for (i = 0; i < nreaders + 1; i++)
+ {
+ if (!gm_state->gm_tuple_buffers[i].done &&
+ (TupIsNull(gm_state->gm_slots[i]) ||
+ gm_state->gm_slots[i]->tts_isempty))
+ {
+ if (gather_merge_readnext(gm_state, i, initialize))
+ {
+ binaryheap_add_unordered(gm_state->gm_heap,
+ Int32GetDatum(i));
+ }
+ }
+ else
+ form_tuple_array(gm_state, i);
+ }
+ initialize = false;
+
+ for (i = 0; i < nreaders; i++)
+ if (!gm_state->gm_tuple_buffers[i].done &&
+ (TupIsNull(gm_state->gm_slots[i]) ||
+ gm_state->gm_slots[i]->tts_isempty))
+ goto reread;
+
+ binaryheap_build(gm_state->gm_heap);
+ gm_state->gm_initialized = true;
+}
+
+/*
+ * Clear out the tuple table slot for each gather merge input,
+ * and return a cleared slot.
+ */
+static TupleTableSlot *
+gather_merge_clear_slots(GatherMergeState *gm_state)
+{
+ int i;
+
+ for (i = 0; i < gm_state->nreaders; i++)
+ {
+ pfree(gm_state->gm_tuple_buffers[i].tuple);
+ gm_state->gm_slots[i] = ExecClearTuple(gm_state->gm_slots[i]);
+ }
+
+ /* Free tuple array as we don't need it any more */
+ pfree(gm_state->gm_tuple_buffers);
+ /* Free the binaryheap, which was created for sort */
+ binaryheap_free(gm_state->gm_heap);
+
+ /* return any clear slot */
+ return gm_state->gm_slots[0];
+}
+
+/*
+ * Read the next tuple for gather merge.
+ *
+ * Fetch the sorted tuple out of the heap.
+ */
+static TupleTableSlot *
+gather_merge_getnext(GatherMergeState *gm_state)
+{
+ int i;
+
+ /*
+ * First time through: pull the first tuple from each participant, and set
+ * up the heap.
+ */
+ if (gm_state->gm_initialized == false)
+ gather_merge_init(gm_state);
+ else
+ {
+ /*
+ * Otherwise, pull the next tuple from whichever participant we
+ * returned from last time, and reinsert the index into the heap,
+ * because it might now compare differently against the existing
+ * elements of the heap.
+ */
+ i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+
+ if (gather_merge_readnext(gm_state, i, false))
+ binaryheap_replace_first(gm_state->gm_heap, Int32GetDatum(i));
+ else
+ (void) binaryheap_remove_first(gm_state->gm_heap);
+ }
+
+ if (binaryheap_empty(gm_state->gm_heap))
+ {
+ /* All the queues are exhausted, and so is the heap */
+ return gather_merge_clear_slots(gm_state);
+ }
+ else
+ {
+ i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+ return gm_state->gm_slots[i];
+ }
+
+ return gather_merge_clear_slots(gm_state);
+}
+
+/*
+ * Read the tuple for given reader in nowait mode, and form the tuple array.
+ */
+static void
+form_tuple_array(GatherMergeState *gm_state, int reader)
+{
+ GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+ int i;
+
+ /* Last slot is for leader and we don't build tuple array for leader */
+ if (reader == gm_state->nreaders)
+ return;
+
+ /*
+ * We are here because we already read all the tuples from the tuple array, so
+ * initialize the counter to zero.
+ */
+ if (tuple_buffer->nTuples == tuple_buffer->readCounter)
+ tuple_buffer->nTuples = tuple_buffer->readCounter = 0;
+
+ /* Tuple array is already full? */
+ if (tuple_buffer->nTuples == MAX_TUPLE_STORE)
+ return;
+
+ for (i = tuple_buffer->nTuples; i < MAX_TUPLE_STORE; i++)
+ {
+ tuple_buffer->tuple[i] = heap_copytuple(gm_readnext_tuple(gm_state,
+ reader,
+ false,
+ &tuple_buffer->done));
+ if (!HeapTupleIsValid(tuple_buffer->tuple[i]))
+ break;
+ tuple_buffer->nTuples++;
+ }
+}
+
+/*
+ * Store the next tuple for a given reader into the appropriate slot.
+ *
+ * Returns false if the reader is exhausted, and true otherwise.
+ */
+static bool
+gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait)
+{
+ GMReaderTupleBuffer *tuple_buffer;
+ HeapTuple tup = NULL;
+
+ /*
+ * If we're being asked to generate a tuple from the leader, then we
+ * just call ExecProcNode as normal to produce one.
+ */
+ if (gm_state->nreaders == reader)
+ {
+ if (gm_state->need_to_scan_locally)
+ {
+ PlanState *outerPlan = outerPlanState(gm_state);
+ TupleTableSlot *outerTupleSlot;
+
+ outerTupleSlot = ExecProcNode(outerPlan);
+
+ if (!TupIsNull(outerTupleSlot))
+ {
+ gm_state->gm_slots[reader] = outerTupleSlot;
+ return true;
+ }
+ gm_state->gm_tuple_buffers[reader].done = true;
+ gm_state->need_to_scan_locally = false;
+ }
+ return false;
+ }
+
+ /* Otherwise, check the state of the relevant tuple buffer. */
+ tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+
+ if (tuple_buffer->nTuples > tuple_buffer->readCounter)
+ {
+ /* Return any tuple previously read that is still buffered. */
+ tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+ tup = tuple_buffer->tuple[tuple_buffer->readCounter++];
+ }
+ else if (tuple_buffer->done)
+ {
+ /* Reader is known to be exhausted. */
+ DestroyTupleQueueReader(gm_state->reader[reader]);
+ gm_state->reader[reader] = NULL;
+ return false;
+ }
+ else
+ {
+ /* Read and buffer next tuple. */
+ tup = heap_copytuple(gm_readnext_tuple(gm_state,
+ reader,
+ nowait,
+ &tuple_buffer->done));
+
+ /*
+ * Attempt to read more tuples in nowait mode and store them in
+ * the tuple array.
+ */
+ if (HeapTupleIsValid(tup))
+ form_tuple_array(gm_state, reader);
+ else
+ return false;
+ }
+
+ Assert(HeapTupleIsValid(tup));
+
+ /* Build the TupleTableSlot for the given tuple */
+ ExecStoreTuple(tup, /* tuple to store */
+ gm_state->gm_slots[reader], /* slot in which to store the
+ * tuple */
+ InvalidBuffer, /* buffer associated with this tuple */
+ true); /* pfree this pointer if not from heap */
+
+ return true;
+}
+
+/*
+ * Attempt to read a tuple from given reader.
+ */
+static HeapTuple
+gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait,
+ bool *done)
+{
+ TupleQueueReader *reader;
+ HeapTuple tup = NULL;
+ MemoryContext oldContext;
+ MemoryContext tupleContext;
+
+ tupleContext = gm_state->ps.ps_ExprContext->ecxt_per_tuple_memory;
+
+ if (done != NULL)
+ *done = false;
+
+ /* Check for async events, particularly messages from workers. */
+ CHECK_FOR_INTERRUPTS();
+
+ /* Attempt to read a tuple. */
+ reader = gm_state->reader[nreader];
+
+ /* Run TupleQueueReaders in per-tuple context */
+ oldContext = MemoryContextSwitchTo(tupleContext);
+ tup = TupleQueueReaderNext(reader, nowait, done);
+ MemoryContextSwitchTo(oldContext);
+
+ return tup;
+}
+
+/*
+ * We have one slot for each item in the heap array. We use SlotNumber
+ * to store slot indexes. This doesn't actually provide any formal
+ * type-safety, but it makes the code more self-documenting.
+ */
+typedef int32 SlotNumber;
+
+/*
+ * Compare the tuples in the two given slots.
+ */
+static int32
+heap_compare_slots(Datum a, Datum b, void *arg)
+{
+ GatherMergeState *node = (GatherMergeState *) arg;
+ SlotNumber slot1 = DatumGetInt32(a);
+ SlotNumber slot2 = DatumGetInt32(b);
+
+ TupleTableSlot *s1 = node->gm_slots[slot1];
+ TupleTableSlot *s2 = node->gm_slots[slot2];
+ int nkey;
+
+ Assert(!TupIsNull(s1));
+ Assert(!TupIsNull(s2));
+
+ for (nkey = 0; nkey < node->gm_nkeys; nkey++)
+ {
+ SortSupport sortKey = node->gm_sortkeys + nkey;
+ AttrNumber attno = sortKey->ssup_attno;
+ Datum datum1,
+ datum2;
+ bool isNull1,
+ isNull2;
+ int compare;
+
+ datum1 = slot_getattr(s1, attno, &isNull1);
+ datum2 = slot_getattr(s2, attno, &isNull2);
+
+ compare = ApplySortComparator(datum1, isNull1,
+ datum2, isNull2,
+ sortKey);
+ if (compare != 0)
+ return -compare;
+ }
+ return 0;
+}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 30d733e..943f495 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -359,6 +359,31 @@ _copyGather(const Gather *from)
return newnode;
}
+/*
+ * _copyGatherMerge
+ */
+static GatherMerge *
+_copyGatherMerge(const GatherMerge *from)
+{
+ GatherMerge *newnode = makeNode(GatherMerge);
+
+ /*
+ * copy node superclass fields
+ */
+ CopyPlanFields((const Plan *) from, (Plan *) newnode);
+
+ /*
+ * copy remainder of node
+ */
+ COPY_SCALAR_FIELD(num_workers);
+ COPY_SCALAR_FIELD(numCols);
+ COPY_POINTER_FIELD(sortColIdx, from->numCols * sizeof(AttrNumber));
+ COPY_POINTER_FIELD(sortOperators, from->numCols * sizeof(Oid));
+ COPY_POINTER_FIELD(collations, from->numCols * sizeof(Oid));
+ COPY_POINTER_FIELD(nullsFirst, from->numCols * sizeof(bool));
+
+ return newnode;
+}
/*
* CopyScanFields
@@ -4521,6 +4546,9 @@ copyObject(const void *from)
case T_Gather:
retval = _copyGather(from);
break;
+ case T_GatherMerge:
+ retval = _copyGatherMerge(from);
+ break;
case T_SeqScan:
retval = _copySeqScan(from);
break;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 1560ac3..865ab5f 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -457,6 +457,35 @@ _outGather(StringInfo str, const Gather *node)
}
static void
+_outGatherMerge(StringInfo str, const GatherMerge *node)
+{
+ int i;
+
+ WRITE_NODE_TYPE("GATHERMERGE");
+
+ _outPlanInfo(str, (const Plan *) node);
+
+ WRITE_INT_FIELD(num_workers);
+ WRITE_INT_FIELD(numCols);
+
+ appendStringInfoString(str, " :sortColIdx");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %d", node->sortColIdx[i]);
+
+ appendStringInfoString(str, " :sortOperators");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %u", node->sortOperators[i]);
+
+ appendStringInfoString(str, " :collations");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %u", node->collations[i]);
+
+ appendStringInfoString(str, " :nullsFirst");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %s", booltostr(node->nullsFirst[i]));
+}
+
+static void
_outScan(StringInfo str, const Scan *node)
{
WRITE_NODE_TYPE("SCAN");
@@ -1984,6 +2013,17 @@ _outLimitPath(StringInfo str, const LimitPath *node)
}
static void
+_outGatherMergePath(StringInfo str, const GatherMergePath *node)
+{
+ WRITE_NODE_TYPE("GATHERMERGEPATH");
+
+ _outPathInfo(str, (const Path *) node);
+
+ WRITE_NODE_FIELD(subpath);
+ WRITE_INT_FIELD(num_workers);
+}
+
+static void
_outNestPath(StringInfo str, const NestPath *node)
{
WRITE_NODE_TYPE("NESTPATH");
@@ -3409,6 +3449,9 @@ outNode(StringInfo str, const void *obj)
case T_Gather:
_outGather(str, obj);
break;
+ case T_GatherMerge:
+ _outGatherMerge(str, obj);
+ break;
case T_Scan:
_outScan(str, obj);
break;
@@ -3739,6 +3782,9 @@ outNode(StringInfo str, const void *obj)
case T_LimitPath:
_outLimitPath(str, obj);
break;
+ case T_GatherMergePath:
+ _outGatherMergePath(str, obj);
+ break;
case T_NestPath:
_outNestPath(str, obj);
break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index dcfa6ee..8dabde6 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -2095,6 +2095,26 @@ _readGather(void)
}
/*
+ * _readGatherMerge
+ */
+static GatherMerge *
+_readGatherMerge(void)
+{
+ READ_LOCALS(GatherMerge);
+
+ ReadCommonPlan(&local_node->plan);
+
+ READ_INT_FIELD(num_workers);
+ READ_INT_FIELD(numCols);
+ READ_ATTRNUMBER_ARRAY(sortColIdx, local_node->numCols);
+ READ_OID_ARRAY(sortOperators, local_node->numCols);
+ READ_OID_ARRAY(collations, local_node->numCols);
+ READ_BOOL_ARRAY(nullsFirst, local_node->numCols);
+
+ READ_DONE();
+}
+
+/*
* _readHash
*/
static Hash *
@@ -2529,6 +2549,8 @@ parseNodeString(void)
return_value = _readUnique();
else if (MATCH("GATHER", 6))
return_value = _readGather();
+ else if (MATCH("GATHERMERGE", 11))
+ return_value = _readGatherMerge();
else if (MATCH("HASH", 4))
return_value = _readHash();
else if (MATCH("SETOP", 5))
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index 5c18987..824f09e 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -2047,39 +2047,51 @@ set_worktable_pathlist(PlannerInfo *root, RelOptInfo *rel, RangeTblEntry *rte)
/*
* generate_gather_paths
- * Generate parallel access paths for a relation by pushing a Gather on
- * top of a partial path.
+ * Generate parallel access paths for a relation by pushing a Gather or
+ * Gather Merge on top of a partial path.
*
* This must not be called until after we're done creating all partial paths
* for the specified relation. (Otherwise, add_partial_path might delete a
- * path that some GatherPath has a reference to.)
+ * path that some GatherPath or GatherMergePath has a reference to.)
*/
void
generate_gather_paths(PlannerInfo *root, RelOptInfo *rel)
{
Path *cheapest_partial_path;
Path *simple_gather_path;
+ ListCell *lc;
/* If there are no partial paths, there's nothing to do here. */
if (rel->partial_pathlist == NIL)
return;
/*
- * The output of Gather is currently always unsorted, so there's only one
- * partial path of interest: the cheapest one. That will be the one at
- * the front of partial_pathlist because of the way add_partial_path
- * works.
- *
- * Eventually, we should have a Gather Merge operation that can merge
- * multiple tuple streams together while preserving their ordering. We
- * could usefully generate such a path from each partial path that has
- * non-NIL pathkeys.
+ * The output of Gather is always unsorted, so there's only one partial
+ * path of interest: the cheapest one. That will be the one at the front
+ * of partial_pathlist because of the way add_partial_path works.
*/
cheapest_partial_path = linitial(rel->partial_pathlist);
simple_gather_path = (Path *)
create_gather_path(root, rel, cheapest_partial_path, rel->reltarget,
NULL, NULL);
add_path(rel, simple_gather_path);
+
+ /*
+ * For each useful ordering, we can consider an order-preserving Gather
+ * Merge.
+ */
+ foreach (lc, rel->partial_pathlist)
+ {
+ Path *subpath = (Path *) lfirst(lc);
+ GatherMergePath *path;
+
+ if (subpath->pathkeys == NIL)
+ continue;
+
+ path = create_gather_merge_path(root, rel, subpath, rel->reltarget,
+ subpath->pathkeys, NULL, NULL);
+ add_path(rel, &path->path);
+ }
}
/*
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index 458f139..8331fb3 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -126,6 +126,7 @@ bool enable_nestloop = true;
bool enable_material = true;
bool enable_mergejoin = true;
bool enable_hashjoin = true;
+bool enable_gathermerge = true;
typedef struct
{
@@ -373,6 +374,73 @@ cost_gather(GatherPath *path, PlannerInfo *root,
}
/*
+ * cost_gather_merge
+ * Determines and returns the cost of gather merge path.
+ *
+ * GatherMerge merges several pre-sorted input streams, using a heap that at
+ * any given instant holds the next tuple from each stream. If there are N
+ * streams, we need about N*log2(N) tuple comparisons to construct the heap at
+ * startup, and then for each output tuple, about log2(N) comparisons to
+ * replace the top heap entry with the next tuple from the same stream.
+ */
+void
+cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+ RelOptInfo *rel, ParamPathInfo *param_info,
+ Cost input_startup_cost, Cost input_total_cost,
+ double *rows)
+{
+ Cost startup_cost = 0;
+ Cost run_cost = 0;
+ Cost comparison_cost;
+ double N;
+ double logN;
+
+ /* Mark the path with the correct row estimate */
+ if (rows)
+ path->path.rows = *rows;
+ else if (param_info)
+ path->path.rows = param_info->ppi_rows;
+ else
+ path->path.rows = rel->rows;
+
+ if (!enable_gathermerge)
+ startup_cost += disable_cost;
+
+ /*
+ * Add one to the number of workers to account for the leader. This might
+ * be overgenerous since the leader will do less work than other workers
+ * in typical cases, but we'll go with it for now.
+ */
+ Assert(path->num_workers > 0);
+ N = (double) path->num_workers + 1;
+ logN = LOG2(N);
+
+ /* Assumed cost per tuple comparison */
+ comparison_cost = 2.0 * cpu_operator_cost;
+
+ /* Heap creation cost */
+ startup_cost += comparison_cost * N * logN;
+
+ /* Per-tuple heap maintenance cost */
+ run_cost += path->path.rows * comparison_cost * logN;
+
+ /* small cost for heap management, like cost_merge_append */
+ run_cost += cpu_operator_cost * path->path.rows;
+
+ /*
+ * Parallel setup and communication cost. Since Gather Merge, unlike
+ * Gather, requires us to block until a tuple is available from every
+ * worker, we bump the IPC cost up a little bit as compared with Gather.
+ * For lack of a better idea, charge an extra 5%.
+ */
+ startup_cost += parallel_setup_cost;
+ run_cost += parallel_tuple_cost * path->path.rows * 1.05;
+
+ path->path.startup_cost = startup_cost + input_startup_cost;
+ path->path.total_cost = (startup_cost + run_cost + input_total_cost);
+}
+
+/*
* cost_index
* Determines and returns the cost of scanning a relation using an index.
*
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index fae1f67..f3c6391 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -272,6 +272,8 @@ static ModifyTable *make_modifytable(PlannerInfo *root,
List *resultRelations, List *subplans,
List *withCheckOptionLists, List *returningLists,
List *rowMarks, OnConflictExpr *onconflict, int epqParam);
+static GatherMerge *create_gather_merge_plan(PlannerInfo *root,
+ GatherMergePath *best_path);
/*
@@ -469,6 +471,10 @@ create_plan_recurse(PlannerInfo *root, Path *best_path, int flags)
(LimitPath *) best_path,
flags);
break;
+ case T_GatherMerge:
+ plan = (Plan *) create_gather_merge_plan(root,
+ (GatherMergePath *) best_path);
+ break;
default:
elog(ERROR, "unrecognized node type: %d",
(int) best_path->pathtype);
@@ -1439,6 +1445,86 @@ create_gather_plan(PlannerInfo *root, GatherPath *best_path)
}
/*
+ * create_gather_merge_plan
+ *
+ * Create a Gather Merge plan for 'best_path' and (recursively)
+ * plans for its subpaths.
+ */
+static GatherMerge *
+create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
+{
+ GatherMerge *gm_plan;
+ Plan *subplan;
+ List *pathkeys = best_path->path.pathkeys;
+ int numsortkeys;
+ AttrNumber *sortColIdx;
+ Oid *sortOperators;
+ Oid *collations;
+ bool *nullsFirst;
+
+ /* As with Gather, it's best to project away columns in the workers. */
+ subplan = create_plan_recurse(root, best_path->subpath, CP_EXACT_TLIST);
+
+ /* See create_merge_append_plan for why there's no make_xxx function */
+ gm_plan = makeNode(GatherMerge);
+ gm_plan->plan.targetlist = subplan->targetlist;
+ gm_plan->num_workers = best_path->num_workers;
+ copy_generic_path_info(&gm_plan->plan, &best_path->path);
+
+ /* Gather Merge is pointless with no pathkeys; use Gather instead. */
+ Assert(pathkeys != NIL);
+
+ /* Compute sort column info, and adjust GatherMerge tlist as needed */
+ (void) prepare_sort_from_pathkeys(&gm_plan->plan, pathkeys,
+ best_path->path.parent->relids,
+ NULL,
+ true,
+ &gm_plan->numCols,
+ &gm_plan->sortColIdx,
+ &gm_plan->sortOperators,
+ &gm_plan->collations,
+ &gm_plan->nullsFirst);
+
+
+ /* Compute sort column info, and adjust subplan's tlist as needed */
+ subplan = prepare_sort_from_pathkeys(subplan, pathkeys,
+ best_path->subpath->parent->relids,
+ gm_plan->sortColIdx,
+ false,
+ &numsortkeys,
+ &sortColIdx,
+ &sortOperators,
+ &collations,
+ &nullsFirst);
+
+ /* As for MergeAppend, check that we got the same sort key information. */
+ Assert(numsortkeys == gm_plan->numCols);
+ if (memcmp(sortColIdx, gm_plan->sortColIdx,
+ numsortkeys * sizeof(AttrNumber)) != 0)
+ elog(ERROR, "GatherMerge child's targetlist doesn't match GatherMerge");
+ Assert(memcmp(sortOperators, gm_plan->sortOperators,
+ numsortkeys * sizeof(Oid)) == 0);
+ Assert(memcmp(collations, gm_plan->collations,
+ numsortkeys * sizeof(Oid)) == 0);
+ Assert(memcmp(nullsFirst, gm_plan->nullsFirst,
+ numsortkeys * sizeof(bool)) == 0);
+
+ /* Now, insert a Sort node if subplan isn't sufficiently ordered */
+ if (!pathkeys_contained_in(pathkeys, best_path->subpath->pathkeys))
+ subplan = (Plan *) make_sort(subplan, numsortkeys,
+ sortColIdx, sortOperators,
+ collations, nullsFirst);
+
+ /* Now insert the subplan under GatherMerge. */
+ gm_plan->plan.lefttree = subplan;
+
+ /* use parallel mode for parallel plans. */
+ root->glob->parallelModeNeeded = true;
+
+ return gm_plan;
+}
+
+/*
* create_projection_plan
*
* Create a plan tree to do a projection step and (recursively) plans
@@ -2277,7 +2363,6 @@ create_limit_plan(PlannerInfo *root, LimitPath *best_path, int flags)
return plan;
}
-
/*****************************************************************************
*
* BASE-RELATION SCAN METHODS
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 4b5902f..6e408cd 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -3712,8 +3712,7 @@ create_grouping_paths(PlannerInfo *root,
/*
* Now generate a complete GroupAgg Path atop of the cheapest partial
- * path. We need only bother with the cheapest path here, as the
- * output of Gather is never sorted.
+ * path. We can do this using either Gather or Gather Merge.
*/
if (grouped_rel->partial_pathlist)
{
@@ -3760,6 +3759,70 @@ create_grouping_paths(PlannerInfo *root,
parse->groupClause,
(List *) parse->havingQual,
dNumGroups));
+
+ /*
+ * The point of using Gather Merge rather than Gather is that it
+ * can preserve the ordering of the input path, so there's no
+ * reason to try it unless (1) it's possible to produce more than
+ * one output row and (2) we want the output path to be ordered.
+ */
+ if (parse->groupClause != NIL && root->group_pathkeys != NIL)
+ {
+ foreach(lc, grouped_rel->partial_pathlist)
+ {
+ Path *subpath = (Path *) lfirst(lc);
+ Path *gmpath;
+ double total_groups;
+
+ /*
+ * It's useful to consider paths that are already properly
+ * ordered for Gather Merge, because those don't need a
+ * sort. It's also useful to consider the cheapest path,
+ * because sorting it in parallel and then doing Gather
+ * Merge may be better than doing an unordered Gather
+ * followed by a sort. But there's no point in
+ * considering non-cheapest paths that aren't already
+ * sorted correctly.
+ */
+ if (path != subpath &&
+ !pathkeys_contained_in(root->group_pathkeys,
+ subpath->pathkeys))
+ continue;
+
+ total_groups = subpath->rows * subpath->parallel_workers;
+
+ gmpath = (Path *)
+ create_gather_merge_path(root,
+ grouped_rel,
+ subpath,
+ NULL,
+ root->group_pathkeys,
+ NULL,
+ &total_groups);
+
+ if (parse->hasAggs)
+ add_path(grouped_rel, (Path *)
+ create_agg_path(root,
+ grouped_rel,
+ gmpath,
+ target,
+ parse->groupClause ? AGG_SORTED : AGG_PLAIN,
+ AGGSPLIT_FINAL_DESERIAL,
+ parse->groupClause,
+ (List *) parse->havingQual,
+ &agg_final_costs,
+ dNumGroups));
+ else
+ add_path(grouped_rel, (Path *)
+ create_group_path(root,
+ grouped_rel,
+ gmpath,
+ target,
+ parse->groupClause,
+ (List *) parse->havingQual,
+ dNumGroups));
+ }
+ }
}
}
@@ -3857,6 +3920,16 @@ create_grouping_paths(PlannerInfo *root,
/* Now choose the best path(s) */
set_cheapest(grouped_rel);
+ /*
+ * We've been using the partial pathlist for the grouped relation to hold
+ * partially aggregated paths, but that's actually a little bit bogus
+ * because it's unsafe for later planning stages -- like ordered_rel --
+ * to get the idea that they can use these partial paths as if they didn't
+ * need a FinalizeAggregate step. Zap the partial pathlist at this stage
+ * so we don't get confused.
+ */
+ grouped_rel->partial_pathlist = NIL;
+
return grouped_rel;
}
@@ -4326,6 +4399,56 @@ create_ordered_paths(PlannerInfo *root,
}
/*
+ * generate_gather_paths() will have already generated a simple Gather
+ * path for the best parallel path, if any, and the loop above will have
+ * considered sorting it. Similarly, generate_gather_paths() will also
+ * have generated order-preserving Gather Merge plans which can be used
+ * without sorting if they happen to match the sort_pathkeys, and the loop
+ * above will have handled those as well. However, there's one more
+ * possibility: it may make sense to sort the cheapest partial path
+ * according to the required output order and then use Gather Merge.
+ */
+ if (ordered_rel->consider_parallel && root->sort_pathkeys != NIL &&
+ input_rel->partial_pathlist != NIL)
+ {
+ Path *cheapest_partial_path;
+
+ cheapest_partial_path = linitial(input_rel->partial_pathlist);
+
+ /*
+ * If cheapest partial path doesn't need a sort, this is redundant
+ * with what's already been tried.
+ */
+ if (!pathkeys_contained_in(root->sort_pathkeys,
+ cheapest_partial_path->pathkeys))
+ {
+ Path *path;
+ double total_groups;
+
+ path = (Path *) create_sort_path(root,
+ ordered_rel,
+ cheapest_partial_path,
+ root->sort_pathkeys,
+ limit_tuples);
+
+ total_groups = cheapest_partial_path->rows *
+ cheapest_partial_path->parallel_workers;
+ path = (Path *)
+ create_gather_merge_path(root, ordered_rel,
+ path,
+ target, root->sort_pathkeys, NULL,
+ &total_groups);
+
+ /* Add projection step if needed */
+ if (path->pathtarget != target)
+ path = apply_projection_to_path(root, ordered_rel,
+ path, target);
+
+ add_path(ordered_rel, path);
+ }
+ }
+
+ /*
* If there is an FDW that's responsible for all baserels of the query,
* let it consider adding ForeignPaths.
*/
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index be267b9..cc1c66e 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -604,6 +604,7 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
break;
case T_Gather:
+ case T_GatherMerge:
set_upper_references(root, plan, rtoffset);
break;
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 9fc7489..a0c0cd8 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2686,6 +2686,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
case T_Sort:
case T_Unique:
case T_Gather:
+ case T_GatherMerge:
case T_SetOp:
case T_Group:
/* no node-type-specific fields need fixing */
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index f440875..29aaa73 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1630,6 +1630,66 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
}
/*
+ * create_gather_merge_path
+ *
+ * Creates a path corresponding to a gather merge scan, returning
+ * the pathnode.
+ */
+GatherMergePath *
+create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
+ PathTarget *target, List *pathkeys,
+ Relids required_outer, double *rows)
+{
+ GatherMergePath *pathnode = makeNode(GatherMergePath);
+ Cost input_startup_cost = 0;
+ Cost input_total_cost = 0;
+
+ Assert(subpath->parallel_safe);
+ Assert(pathkeys);
+
+ pathnode->path.pathtype = T_GatherMerge;
+ pathnode->path.parent = rel;
+ pathnode->path.param_info = get_baserel_parampathinfo(root, rel,
+ required_outer);
+ pathnode->path.parallel_aware = false;
+
+ pathnode->subpath = subpath;
+ pathnode->num_workers = subpath->parallel_workers;
+ pathnode->path.pathkeys = pathkeys;
+ pathnode->path.pathtarget = target ? target : rel->reltarget;
+ pathnode->path.rows += subpath->rows;
+
+ if (pathkeys_contained_in(pathkeys, subpath->pathkeys))
+ {
+ /* Subpath is adequately ordered, we won't need to sort it */
+ input_startup_cost += subpath->startup_cost;
+ input_total_cost += subpath->total_cost;
+ }
+ else
+ {
+ /* We'll need to insert a Sort node, so include cost for that */
+ Path sort_path; /* dummy for result of cost_sort */
+
+ cost_sort(&sort_path,
+ root,
+ pathkeys,
+ subpath->total_cost,
+ subpath->rows,
+ subpath->pathtarget->width,
+ 0.0,
+ work_mem,
+ -1);
+ input_startup_cost += sort_path.startup_cost;
+ input_total_cost += sort_path.total_cost;
+ }
+
+ cost_gather_merge(pathnode, root, rel, pathnode->path.param_info,
+ input_startup_cost, input_total_cost, rows);
+
+ return pathnode;
+}
+
+/*
* translate_sub_tlist - get subquery column numbers represented by tlist
*
* The given targetlist usually contains only Vars referencing the given relid.
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 5f43b1e..130f747 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -895,6 +895,15 @@ static struct config_bool ConfigureNamesBool[] =
true,
NULL, NULL, NULL
},
+ {
+ {"enable_gathermerge", PGC_USERSET, QUERY_TUNING_METHOD,
+ gettext_noop("Enables the planner's use of gather merge plans."),
+ NULL
+ },
+ &enable_gathermerge,
+ true,
+ NULL, NULL, NULL
+ },
{
{"geqo", PGC_USERSET, QUERY_TUNING_GEQO,
diff --git a/src/include/executor/nodeGatherMerge.h b/src/include/executor/nodeGatherMerge.h
new file mode 100644
index 0000000..3c8b42b
--- /dev/null
+++ b/src/include/executor/nodeGatherMerge.h
@@ -0,0 +1,27 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.h
+ * prototypes for nodeGatherMerge.c
+ *
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/executor/nodeGatherMerge.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef NODEGATHERMERGE_H
+#define NODEGATHERMERGE_H
+
+#include "nodes/execnodes.h"
+
+extern GatherMergeState *ExecInitGatherMerge(GatherMerge * node,
+ EState *estate,
+ int eflags);
+extern TupleTableSlot *ExecGatherMerge(GatherMergeState * node);
+extern void ExecEndGatherMerge(GatherMergeState * node);
+extern void ExecReScanGatherMerge(GatherMergeState * node);
+extern void ExecShutdownGatherMerge(GatherMergeState * node);
+
+#endif /* NODEGATHERMERGE_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index f9bcdd6..f4dfb7a 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -2004,6 +2004,35 @@ typedef struct GatherState
} GatherState;
/* ----------------
+ * GatherMergeState information
+ *
+ * Gather merge nodes launch 1 or more parallel workers, run a
+ * subplan which produces sorted output in each worker, and then
+ * merge the results into a single sorted stream.
+ * ----------------
+ */
+struct GMReaderTuple;
+
+typedef struct GatherMergeState
+{
+ PlanState ps; /* its first field is NodeTag */
+ bool initialized;
+ struct ParallelExecutorInfo *pei;
+ int nreaders;
+ int nworkers_launched;
+ struct TupleQueueReader **reader;
+ TupleDesc tupDesc;
+ TupleTableSlot **gm_slots;
+ struct binaryheap *gm_heap; /* binary heap of slot indices */
+ bool gm_initialized; /* gather merge initialized? */
+ bool need_to_scan_locally;
+ int gm_nkeys;
+ SortSupport gm_sortkeys; /* array of length gm_nkeys */
+ struct GMReaderTupleBuffer *gm_tuple_buffers; /* tuple buffer per
+ * reader */
+} GatherMergeState;
+
+/* ----------------
* HashState information
* ----------------
*/
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index fa4932a..4c2ce74 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -76,6 +76,7 @@ typedef enum NodeTag
T_WindowAgg,
T_Unique,
T_Gather,
+ T_GatherMerge,
T_Hash,
T_SetOp,
T_LockRows,
@@ -125,6 +126,7 @@ typedef enum NodeTag
T_WindowAggState,
T_UniqueState,
T_GatherState,
+ T_GatherMergeState,
T_HashState,
T_SetOpState,
T_LockRowsState,
@@ -246,6 +248,7 @@ typedef enum NodeTag
T_MaterialPath,
T_UniquePath,
T_GatherPath,
+ T_GatherMergePath,
T_ProjectionPath,
T_ProjectSetPath,
T_SortPath,
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index f72f7a8..8dbce7a 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -785,6 +785,22 @@ typedef struct Gather
bool invisible; /* suppress EXPLAIN display (for testing)? */
} Gather;
+/* ------------
+ * gather merge node
+ * ------------
+ */
+typedef struct GatherMerge
+{
+ Plan plan;
+ int num_workers;
+ /* remaining fields are just like the sort-key info in struct Sort */
+ int numCols; /* number of sort-key columns */
+ AttrNumber *sortColIdx; /* their indexes in the target list */
+ Oid *sortOperators; /* OIDs of operators to sort them by */
+ Oid *collations; /* OIDs of collations */
+ bool *nullsFirst; /* NULLS FIRST/LAST directions */
+} GatherMerge;
+
/* ----------------
* hash build node
*
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index 643be54..291318e 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -1203,6 +1203,19 @@ typedef struct GatherPath
} GatherPath;
/*
+ * GatherMergePath runs several copies of a plan in parallel and collects
+ * the results, merging them in sorted order. For Gather Merge, the parallel
+ * leader always executes the plan itself as well.
+ */
+typedef struct GatherMergePath
+{
+ Path path;
+ Path *subpath; /* path for each worker */
+ int num_workers; /* number of workers sought to help */
+} GatherMergePath;
+
+
+/*
* All join-type paths share these fields.
*/
diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h
index 39376ec..7ceb4ca 100644
--- a/src/include/optimizer/cost.h
+++ b/src/include/optimizer/cost.h
@@ -66,6 +66,7 @@ extern bool enable_nestloop;
extern bool enable_material;
extern bool enable_mergejoin;
extern bool enable_hashjoin;
+extern bool enable_gathermerge;
extern int constraint_exclusion;
extern double clamp_row_est(double nrows);
@@ -198,5 +199,9 @@ extern Selectivity clause_selectivity(PlannerInfo *root,
int varRelid,
JoinType jointype,
SpecialJoinInfo *sjinfo);
+extern void cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+ RelOptInfo *rel, ParamPathInfo *param_info,
+ Cost input_startup_cost, Cost input_total_cost,
+ double *rows);
#endif /* COST_H */
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index 7b41317..e0ab894 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -76,6 +76,13 @@ extern UniquePath *create_unique_path(PlannerInfo *root, RelOptInfo *rel,
extern GatherPath *create_gather_path(PlannerInfo *root,
RelOptInfo *rel, Path *subpath, PathTarget *target,
Relids required_outer, double *rows);
+extern GatherMergePath *create_gather_merge_path(PlannerInfo *root,
+ RelOptInfo *rel,
+ Path *subpath,
+ PathTarget *target,
+ List *pathkeys,
+ Relids required_outer,
+ double *rows);
extern SubqueryScanPath *create_subqueryscan_path(PlannerInfo *root,
RelOptInfo *rel, Path *subpath,
List *pathkeys, Relids required_outer);
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
index 56481de..f739b22 100644
--- a/src/test/regress/expected/rangefuncs.out
+++ b/src/test/regress/expected/rangefuncs.out
@@ -2,6 +2,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
name | setting
----------------------+---------
enable_bitmapscan | on
+ enable_gathermerge | on
enable_hashagg | on
enable_hashjoin | on
enable_indexonlyscan | on
@@ -12,7 +13,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
enable_seqscan | on
enable_sort | on
enable_tidscan | on
-(11 rows)
+(12 rows)
CREATE TABLE foo2(fooid int, f2 int);
INSERT INTO foo2 VALUES(1, 11);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 993880d..5633386 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -777,6 +777,9 @@ GV
Gather
+GatherMerge
+GatherMergePath
+GatherMergeState
GatherPath
GatherState
Gene
GenericCosts
GenericExprState
Due to the recent commit below, the patch no longer applies cleanly on the
master branch.
commit d002f16c6ec38f76d1ee97367ba6af3000d441d0
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Mon Jan 30 17:15:42 2017 -0500
    Add a regression test script dedicated to exercising system views.
Please find the latest patch attached.
On Wed, Feb 1, 2017 at 5:55 PM, Rushabh Lathia <rushabh.lathia@gmail.com>
wrote:
I am sorry for the delay; here is the latest rebased patch.
My colleague Neha Sharma reported one regression with the patch, where the
explain output for the Sort node under Gather Merge always showed the cost
as zero:

explain analyze select '' AS "xxx" from pgbench_accounts where filler
like '%foo%' order by aid;
                                                           QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------
 Gather Merge  (cost=47169.81..70839.91 rows=197688 width=36) (actual time=406.297..653.572 rows=200000 loops=1)
   Workers Planned: 4
   Workers Launched: 4
   ->  Sort  (cost=0.00..0.00 rows=0 width=0) (actual time=368.945..391.124 rows=40000 loops=5)
         Sort Key: aid
         Sort Method: quicksort  Memory: 3423kB
         ->  Parallel Seq Scan on pgbench_accounts  (cost=0.00..42316.60 rows=49422 width=36) (actual time=296.612..338.873 rows=40000 loops=5)
               Filter: (filler ~~ '%foo%'::text)
               Rows Removed by Filter: 360000
 Planning time: 0.184 ms
 Execution time: 734.963 ms

This patch also fixes that issue.
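A quick way to re-verify the fix, assuming the same pgbench setup as in the
runs above (a sketch only; the exact plan shape and numbers depend on
statistics and settings), is to look at the estimated costs alone:

explain select '' AS "xxx" from pgbench_accounts
where filler like '%foo%' order by aid;
-- With the fix, the Sort under Gather Merge should show nonzero cost
-- estimates instead of cost=0.00..0.00.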
On Wed, Feb 1, 2017 at 11:27 AM, Michael Paquier <michael.paquier@gmail.com> wrote:
> On Mon, Jan 23, 2017 at 6:51 PM, Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote:
>> On Wed, Jan 18, 2017 at 11:31 AM, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:
>> The patch needs a rebase after the commit 69f4b9c85f168ae006929eec4.
> Is an update going to be provided? I have moved this patch to next CF
> with "waiting on author" as status.
> --
> Michael

--
Rushabh Lathia
Attachments:
gather-merge-v7.patch
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index fb5d647..6959b51 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -3496,6 +3496,20 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
</listitem>
</varlistentry>
+ <varlistentry id="guc-enable-gathermerge" xreflabel="enable_gathermerge">
+ <term><varname>enable_gathermerge</varname> (<type>boolean</type>)
+ <indexterm>
+ <primary><varname>enable_gathermerge</> configuration parameter</primary>
+ </indexterm>
+ </term>
+ <listitem>
+ <para>
+ Enables or disables the query planner's use of gather
+ merge plan types. The default is <literal>on</>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="guc-enable-hashagg" xreflabel="enable_hashagg">
<term><varname>enable_hashagg</varname> (<type>boolean</type>)
<indexterm>
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 0a67be0..6ac9ed8 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -905,6 +905,9 @@ ExplainNode(PlanState *planstate, List *ancestors,
case T_Gather:
pname = sname = "Gather";
break;
+ case T_GatherMerge:
+ pname = sname = "Gather Merge";
+ break;
case T_IndexScan:
pname = sname = "Index Scan";
break;
@@ -1394,6 +1397,26 @@ ExplainNode(PlanState *planstate, List *ancestors,
ExplainPropertyBool("Single Copy", gather->single_copy, es);
}
break;
+ case T_GatherMerge:
+ {
+ GatherMerge *gm = (GatherMerge *) plan;
+
+ show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+ if (plan->qual)
+ show_instrumentation_count("Rows Removed by Filter", 1,
+ planstate, es);
+ ExplainPropertyInteger("Workers Planned",
+ gm->num_workers, es);
+ if (es->analyze)
+ {
+ int nworkers;
+
+ nworkers = ((GatherMergeState *) planstate)->nworkers_launched;
+ ExplainPropertyInteger("Workers Launched",
+ nworkers, es);
+ }
+ }
+ break;
case T_FunctionScan:
if (es->verbose)
{
diff --git a/src/backend/executor/Makefile b/src/backend/executor/Makefile
index 2a2b7eb..c95747e 100644
--- a/src/backend/executor/Makefile
+++ b/src/backend/executor/Makefile
@@ -20,7 +20,7 @@ OBJS = execAmi.o execCurrent.o execGrouping.o execIndexing.o execJunk.o \
nodeBitmapHeapscan.o nodeBitmapIndexscan.o \
nodeCustom.o nodeFunctionscan.o nodeGather.o \
nodeHash.o nodeHashjoin.o nodeIndexscan.o nodeIndexonlyscan.o \
- nodeLimit.o nodeLockRows.o \
+ nodeLimit.o nodeLockRows.o nodeGatherMerge.o \
nodeMaterial.o nodeMergeAppend.o nodeMergejoin.o nodeModifyTable.o \
nodeNestloop.o nodeProjectSet.o nodeRecursiveunion.o nodeResult.o \
nodeSamplescan.o nodeSeqscan.o nodeSetOp.o nodeSort.o nodeUnique.o \
diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index 0dd95c6..f00496b 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -89,6 +89,7 @@
#include "executor/nodeForeignscan.h"
#include "executor/nodeFunctionscan.h"
#include "executor/nodeGather.h"
+#include "executor/nodeGatherMerge.h"
#include "executor/nodeGroup.h"
#include "executor/nodeHash.h"
#include "executor/nodeHashjoin.h"
@@ -320,6 +321,11 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
estate, eflags);
break;
+ case T_GatherMerge:
+ result = (PlanState *) ExecInitGatherMerge((GatherMerge *) node,
+ estate, eflags);
+ break;
+
case T_Hash:
result = (PlanState *) ExecInitHash((Hash *) node,
estate, eflags);
@@ -525,6 +531,10 @@ ExecProcNode(PlanState *node)
result = ExecGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ result = ExecGatherMerge((GatherMergeState *) node);
+ break;
+
case T_HashState:
result = ExecHash((HashState *) node);
break;
@@ -687,6 +697,10 @@ ExecEndNode(PlanState *node)
ExecEndGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ ExecEndGatherMerge((GatherMergeState *) node);
+ break;
+
case T_IndexScanState:
ExecEndIndexScan((IndexScanState *) node);
break;
@@ -820,6 +834,9 @@ ExecShutdownNode(PlanState *node)
case T_GatherState:
ExecShutdownGather((GatherState *) node);
break;
+ case T_GatherMergeState:
+ ExecShutdownGatherMerge((GatherMergeState *) node);
+ break;
default:
break;
}
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
new file mode 100644
index 0000000..84c1677
--- /dev/null
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -0,0 +1,687 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.c
+ * Scan a plan in multiple workers, and do an order-preserving merge.
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/backend/executor/nodeGatherMerge.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/relscan.h"
+#include "access/xact.h"
+#include "executor/execdebug.h"
+#include "executor/execParallel.h"
+#include "executor/nodeGatherMerge.h"
+#include "executor/nodeSubplan.h"
+#include "executor/tqueue.h"
+#include "lib/binaryheap.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+#include "utils/rel.h"
+
+/*
+ * Tuple array for each worker
+ */
+typedef struct GMReaderTupleBuffer
+{
+ HeapTuple *tuple;
+ int readCounter;
+ int nTuples;
+ bool done;
+} GMReaderTupleBuffer;
+
+/*
+ * When we read tuples from workers, it's a good idea to read several at once
+ * for efficiency when possible: this minimizes context-switching overhead.
+ * But reading too many at a time wastes memory without improving performance.
+ */
+#define MAX_TUPLE_STORE 10
+
+static int32 heap_compare_slots(Datum a, Datum b, void *arg);
+static TupleTableSlot *gather_merge_getnext(GatherMergeState *gm_state);
+static HeapTuple gm_readnext_tuple(GatherMergeState *gm_state, int nreader,
+ bool nowait, bool *done);
+static void gather_merge_init(GatherMergeState *gm_state);
+static void ExecShutdownGatherMergeWorkers(GatherMergeState *node);
+static bool gather_merge_readnext(GatherMergeState *gm_state, int reader,
+ bool nowait);
+static void form_tuple_array(GatherMergeState *gm_state, int reader);
+
+/* ----------------------------------------------------------------
+ * ExecInitGatherMerge
+ * ----------------------------------------------------------------
+ */
+GatherMergeState *
+ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags)
+{
+ GatherMergeState *gm_state;
+ Plan *outerNode;
+ bool hasoid;
+ TupleDesc tupDesc;
+
+ /* Gather Merge node doesn't have an innerPlan node. */
+ Assert(innerPlan(node) == NULL);
+
+ /*
+ * create state structure
+ */
+ gm_state = makeNode(GatherMergeState);
+ gm_state->ps.plan = (Plan *) node;
+ gm_state->ps.state = estate;
+
+ /*
+ * Miscellaneous initialization
+ *
+ * create expression context for node
+ */
+ ExecAssignExprContext(estate, &gm_state->ps);
+
+ /*
+ * initialize child expressions
+ */
+ gm_state->ps.targetlist = (List *)
+ ExecInitExpr((Expr *) node->plan.targetlist,
+ (PlanState *) gm_state);
+ gm_state->ps.qual = (List *)
+ ExecInitExpr((Expr *) node->plan.qual,
+ (PlanState *) gm_state);
+
+ /*
+ * tuple table initialization
+ */
+ ExecInitResultTupleSlot(estate, &gm_state->ps);
+
+ /*
+ * now initialize outer plan
+ */
+ outerNode = outerPlan(node);
+ outerPlanState(gm_state) = ExecInitNode(outerNode, estate, eflags);
+
+ /*
+ * Initialize result tuple type and projection info.
+ */
+ ExecAssignResultTypeFromTL(&gm_state->ps);
+ ExecAssignProjectionInfo(&gm_state->ps, NULL);
+
+ gm_state->gm_initialized = false;
+
+ /*
+ * initialize sort-key information
+ */
+ if (node->numCols)
+ {
+ int i;
+
+ gm_state->gm_nkeys = node->numCols;
+ gm_state->gm_sortkeys =
+ palloc0(sizeof(SortSupportData) * node->numCols);
+
+ for (i = 0; i < node->numCols; i++)
+ {
+ SortSupport sortKey = gm_state->gm_sortkeys + i;
+
+ sortKey->ssup_cxt = CurrentMemoryContext;
+ sortKey->ssup_collation = node->collations[i];
+ sortKey->ssup_nulls_first = node->nullsFirst[i];
+ sortKey->ssup_attno = node->sortColIdx[i];
+
+ /*
+ * We don't perform abbreviated key conversion here, for the same
+ * reasons that it isn't used in MergeAppend
+ */
+ sortKey->abbreviate = false;
+
+ PrepareSortSupportFromOrderingOp(node->sortOperators[i], sortKey);
+ }
+ }
+
+ /*
+ * store the tuple descriptor into gather merge state, so we can use it
+ * later while initializing the gather merge slots.
+ */
+ if (!ExecContextForcesOids(&gm_state->ps, &hasoid))
+ hasoid = false;
+ tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid);
+ gm_state->tupDesc = tupDesc;
+
+ return gm_state;
+}
+
+/* ----------------------------------------------------------------
+ * ExecGatherMerge(node)
+ *
+ * Scans the relation via multiple workers and returns
+ * the next qualifying tuple.
+ * ----------------------------------------------------------------
+ */
+TupleTableSlot *
+ExecGatherMerge(GatherMergeState *node)
+{
+ TupleTableSlot *slot;
+ ExprContext *econtext;
+ int i;
+
+ /*
+ * As with Gather, we don't launch workers until this node is actually
+ * executed.
+ */
+ if (!node->initialized)
+ {
+ EState *estate = node->ps.state;
+ GatherMerge *gm = (GatherMerge *) node->ps.plan;
+
+ /*
+ * Sometimes we might have to run without parallelism; but if parallel
+ * mode is active then we can try to fire up some workers.
+ */
+ if (gm->num_workers > 0 && IsInParallelMode())
+ {
+ ParallelContext *pcxt;
+
+ /* Initialize data structures for workers. */
+ if (!node->pei)
+ node->pei = ExecInitParallelPlan(node->ps.lefttree,
+ estate,
+ gm->num_workers);
+
+ /* Try to launch workers. */
+ pcxt = node->pei->pcxt;
+ LaunchParallelWorkers(pcxt);
+ node->nworkers_launched = pcxt->nworkers_launched;
+
+ /* Set up tuple queue readers to read the results. */
+ if (pcxt->nworkers_launched > 0)
+ {
+ node->nreaders = 0;
+ node->reader = palloc(pcxt->nworkers_launched *
+ sizeof(TupleQueueReader *));
+
+ Assert(gm->numCols);
+
+ for (i = 0; i < pcxt->nworkers_launched; ++i)
+ {
+ shm_mq_set_handle(node->pei->tqueue[i],
+ pcxt->worker[i].bgwhandle);
+ node->reader[node->nreaders++] =
+ CreateTupleQueueReader(node->pei->tqueue[i],
+ node->tupDesc);
+ }
+ }
+ else
+ {
+ /* No workers? Then never mind. */
+ ExecShutdownGatherMergeWorkers(node);
+ }
+ }
+
+ /* always allow leader to participate */
+ node->need_to_scan_locally = true;
+ node->initialized = true;
+ }
+
+ /*
+ * Reset per-tuple memory context to free any expression evaluation
+ * storage allocated in the previous tuple cycle.
+ */
+ econtext = node->ps.ps_ExprContext;
+ ResetExprContext(econtext);
+
+ /*
+ * Get next tuple, either from one of our workers, or by running the
+ * plan ourselves.
+ */
+ slot = gather_merge_getnext(node);
+ if (TupIsNull(slot))
+ return NULL;
+
+ /*
+ * Form the result tuple using ExecProject(), and return it.
+ */
+ econtext->ecxt_outertuple = slot;
+ return ExecProject(node->ps.ps_ProjInfo);
+}
+
+/* ----------------------------------------------------------------
+ * ExecEndGatherMerge
+ *
+ * frees any storage allocated through C routines.
+ * ----------------------------------------------------------------
+ */
+void
+ExecEndGatherMerge(GatherMergeState *node)
+{
+ ExecShutdownGatherMerge(node);
+ ExecFreeExprContext(&node->ps);
+ ExecClearTuple(node->ps.ps_ResultTupleSlot);
+ ExecEndNode(outerPlanState(node));
+}
+
+/* ----------------------------------------------------------------
+ * ExecShutdownGatherMerge
+ *
+ * Destroy the setup for parallel workers including parallel context.
+ * Collect all the stats after workers are stopped, else some work
+ * done by workers won't be accounted.
+ * ----------------------------------------------------------------
+ */
+void
+ExecShutdownGatherMerge(GatherMergeState *node)
+{
+ ExecShutdownGatherMergeWorkers(node);
+
+ /* Now destroy the parallel context. */
+ if (node->pei != NULL)
+ {
+ ExecParallelCleanup(node->pei);
+ node->pei = NULL;
+ }
+}
+
+/* ----------------------------------------------------------------
+ * ExecShutdownGatherMergeWorkers
+ *
+ * Destroy the parallel workers. Collect all the stats after
+ * workers are stopped, else some work done by workers won't be
+ * accounted.
+ * ----------------------------------------------------------------
+ */
+static void
+ExecShutdownGatherMergeWorkers(GatherMergeState *node)
+{
+ /* Shut down tuple queue readers before shutting down workers. */
+ if (node->reader != NULL)
+ {
+ int i;
+
+ for (i = 0; i < node->nreaders; ++i)
+ if (node->reader[i])
+ DestroyTupleQueueReader(node->reader[i]);
+
+ pfree(node->reader);
+ node->reader = NULL;
+ }
+
+ /* Now shut down the workers. */
+ if (node->pei != NULL)
+ ExecParallelFinish(node->pei);
+}
+
+/* ----------------------------------------------------------------
+ * ExecReScanGatherMerge
+ *
+ * Re-initializes the workers and rescans the relation via them.
+ * ----------------------------------------------------------------
+ */
+void
+ExecReScanGatherMerge(GatherMergeState *node)
+{
+ /*
+ * Re-initialize the parallel workers to perform a rescan of the relation.
+ * We want to gracefully shut down all the workers so that they can
+ * propagate any error or other information to the master backend before
+ * dying. The parallel context will be reused for the rescan.
+ */
+ ExecShutdownGatherMergeWorkers(node);
+
+ node->initialized = false;
+
+ if (node->pei)
+ ExecParallelReinitialize(node->pei);
+
+ ExecReScan(node->ps.lefttree);
+}
+
+/*
+ * Initialize the Gather Merge tuple reads.
+ *
+ * Pull at least a single tuple from each worker + leader and set up the heap.
+ */
+static void
+gather_merge_init(GatherMergeState *gm_state)
+{
+ int nreaders = gm_state->nreaders;
+ bool initialize = true;
+ int i;
+
+ /*
+ * Allocate gm_slots for the number of workers, plus one more slot for the
+ * leader; the last slot is always the leader's. The leader always calls
+ * ExecProcNode() to read a tuple, and the returned TupleTableSlot gets
+ * assigned directly to its gm_slot, so just initialize the leader's slot
+ * to NULL. For the worker slots, the code below calls
+ * ExecInitExtraTupleSlot() to do the initialization.
+ */
+ gm_state->gm_slots =
+ palloc((gm_state->nreaders + 1) * sizeof(TupleTableSlot *));
+ gm_state->gm_slots[gm_state->nreaders] = NULL;
+
+ /* Initialize the tuple slot and tuple array for each worker */
+ gm_state->gm_tuple_buffers =
+ (GMReaderTupleBuffer *) palloc0(sizeof(GMReaderTupleBuffer) *
+ (gm_state->nreaders + 1));
+ for (i = 0; i < gm_state->nreaders; i++)
+ {
+ /* Allocate the tuple array with MAX_TUPLE_STORE size */
+ gm_state->gm_tuple_buffers[i].tuple =
+ (HeapTuple *) palloc0(sizeof(HeapTuple) * MAX_TUPLE_STORE);
+
+ /* Initialize slot for worker */
+ gm_state->gm_slots[i] = ExecInitExtraTupleSlot(gm_state->ps.state);
+ ExecSetSlotDescriptor(gm_state->gm_slots[i],
+ gm_state->tupDesc);
+ }
+
+ /* Allocate the resources for the merge */
+ gm_state->gm_heap = binaryheap_allocate(gm_state->nreaders + 1,
+ heap_compare_slots,
+ gm_state);
+
+ /*
+ * First, try to read a tuple from each worker (including the leader) in
+ * nowait mode, so that we start the read from every participant. After
+ * this, re-read in wait mode from any active worker that was unable to
+ * produce a tuple. For workers that produced a tuple in the earlier loop
+ * and are still active, just try to fill the tuple array if more tuples
+ * are available.
+ */
+reread:
+ for (i = 0; i < nreaders + 1; i++)
+ {
+ if (!gm_state->gm_tuple_buffers[i].done &&
+ (TupIsNull(gm_state->gm_slots[i]) ||
+ gm_state->gm_slots[i]->tts_isempty))
+ {
+ if (gather_merge_readnext(gm_state, i, initialize))
+ {
+ binaryheap_add_unordered(gm_state->gm_heap,
+ Int32GetDatum(i));
+ }
+ }
+ else
+ form_tuple_array(gm_state, i);
+ }
+ initialize = false;
+
+ for (i = 0; i < nreaders; i++)
+ if (!gm_state->gm_tuple_buffers[i].done &&
+ (TupIsNull(gm_state->gm_slots[i]) ||
+ gm_state->gm_slots[i]->tts_isempty))
+ goto reread;
+
+ binaryheap_build(gm_state->gm_heap);
+ gm_state->gm_initialized = true;
+}
+
+/*
+ * Clear out the tuple table slot for each gather merge input,
+ * and return one cleared slot.
+ */
+static TupleTableSlot *
+gather_merge_clear_slots(GatherMergeState *gm_state)
+{
+ int i;
+
+ for (i = 0; i < gm_state->nreaders; i++)
+ {
+ pfree(gm_state->gm_tuple_buffers[i].tuple);
+ gm_state->gm_slots[i] = ExecClearTuple(gm_state->gm_slots[i]);
+ }
+
+ /* Free tuple array as we don't need it any more */
+ pfree(gm_state->gm_tuple_buffers);
+ /* Free the binaryheap, which was created for sort */
+ binaryheap_free(gm_state->gm_heap);
+
+ /* return any clear slot */
+ return gm_state->gm_slots[0];
+}
+
+/*
+ * Read the next tuple for gather merge.
+ *
+ * Fetch the sorted tuple out of the heap.
+ */
+static TupleTableSlot *
+gather_merge_getnext(GatherMergeState *gm_state)
+{
+ int i;
+
+ /*
+ * First time through: pull the first tuple from each participant, and set
+ * up the heap.
+ */
+ if (gm_state->gm_initialized == false)
+ gather_merge_init(gm_state);
+ else
+ {
+ /*
+ * Otherwise, pull the next tuple from whichever participant we
+ * returned from last time, and reinsert the index into the heap,
+ * because it might now compare differently against the existing
+ * elements of the heap.
+ */
+ i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+
+ if (gather_merge_readnext(gm_state, i, false))
+ binaryheap_replace_first(gm_state->gm_heap, Int32GetDatum(i));
+ else
+ (void) binaryheap_remove_first(gm_state->gm_heap);
+ }
+
+ if (binaryheap_empty(gm_state->gm_heap))
+ {
+ /* All the queues are exhausted, and so is the heap */
+ return gather_merge_clear_slots(gm_state);
+ }
+ else
+ {
+ i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+ return gm_state->gm_slots[i];
+ }
+
+ return gather_merge_clear_slots(gm_state);
+}
+
+/*
+ * Read the tuple for given reader in nowait mode, and form the tuple array.
+ */
+static void
+form_tuple_array(GatherMergeState *gm_state, int reader)
+{
+ GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+ int i;
+
+ /* The last slot is for the leader; we don't build a tuple array for it */
+ if (reader == gm_state->nreaders)
+ return;
+
+ /*
+ * We're here because all the tuples previously read into the tuple array
+ * have been consumed, so reset the counters to zero.
+ */
+ if (tuple_buffer->nTuples == tuple_buffer->readCounter)
+ tuple_buffer->nTuples = tuple_buffer->readCounter = 0;
+
+ /* Tuple array is already full? */
+ if (tuple_buffer->nTuples == MAX_TUPLE_STORE)
+ return;
+
+ for (i = tuple_buffer->nTuples; i < MAX_TUPLE_STORE; i++)
+ {
+ tuple_buffer->tuple[i] = heap_copytuple(gm_readnext_tuple(gm_state,
+ reader,
+ false,
+ &tuple_buffer->done));
+ if (!HeapTupleIsValid(tuple_buffer->tuple[i]))
+ break;
+ tuple_buffer->nTuples++;
+ }
+}
+
+/*
+ * Store the next tuple for a given reader into the appropriate slot.
+ *
+ * Returns false if the reader is exhausted, and true otherwise.
+ */
+static bool
+gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait)
+{
+ GMReaderTupleBuffer *tuple_buffer;
+ HeapTuple tup = NULL;
+
+ /*
+ * If we're being asked to generate a tuple from the leader, then we
+ * just call ExecProcNode as normal to produce one.
+ */
+ if (gm_state->nreaders == reader)
+ {
+ if (gm_state->need_to_scan_locally)
+ {
+ PlanState *outerPlan = outerPlanState(gm_state);
+ TupleTableSlot *outerTupleSlot;
+
+ outerTupleSlot = ExecProcNode(outerPlan);
+
+ if (!TupIsNull(outerTupleSlot))
+ {
+ gm_state->gm_slots[reader] = outerTupleSlot;
+ return true;
+ }
+ gm_state->gm_tuple_buffers[reader].done = true;
+ gm_state->need_to_scan_locally = false;
+ }
+ return false;
+ }
+
+ /* Otherwise, check the state of the relevant tuple buffer. */
+ tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+
+ if (tuple_buffer->nTuples > tuple_buffer->readCounter)
+ {
+ /* Return any tuple previously read that is still buffered. */
+ tup = tuple_buffer->tuple[tuple_buffer->readCounter++];
+ }
+ else if (tuple_buffer->done)
+ {
+ /* Reader is known to be exhausted. */
+ DestroyTupleQueueReader(gm_state->reader[reader]);
+ gm_state->reader[reader] = NULL;
+ return false;
+ }
+ else
+ {
+ /* Read and buffer next tuple. */
+ tup = heap_copytuple(gm_readnext_tuple(gm_state,
+ reader,
+ nowait,
+ &tuple_buffer->done));
+
+ /*
+ * Attempt to read more tuples in nowait mode and store them in
+ * the tuple array.
+ */
+ if (HeapTupleIsValid(tup))
+ form_tuple_array(gm_state, reader);
+ else
+ return false;
+ }
+
+ Assert(HeapTupleIsValid(tup));
+
+ /* Build the TupleTableSlot for the given tuple */
+ ExecStoreTuple(tup, /* tuple to store */
+ gm_state->gm_slots[reader], /* slot in which to store the
+ * tuple */
+ InvalidBuffer, /* buffer associated with this tuple */
+ true); /* pfree this pointer if not from heap */
+
+ return true;
+}
+
+/*
+ * Attempt to read a tuple from given reader.
+ */
+static HeapTuple
+gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait,
+ bool *done)
+{
+ TupleQueueReader *reader;
+ HeapTuple tup = NULL;
+ MemoryContext oldContext;
+ MemoryContext tupleContext;
+
+ tupleContext = gm_state->ps.ps_ExprContext->ecxt_per_tuple_memory;
+
+ if (done != NULL)
+ *done = false;
+
+ /* Check for async events, particularly messages from workers. */
+ CHECK_FOR_INTERRUPTS();
+
+ /* Attempt to read a tuple. */
+ reader = gm_state->reader[nreader];
+
+ /* Run TupleQueueReaders in per-tuple context */
+ oldContext = MemoryContextSwitchTo(tupleContext);
+ tup = TupleQueueReaderNext(reader, nowait, done);
+ MemoryContextSwitchTo(oldContext);
+
+ return tup;
+}
+
+/*
+ * We have one slot for each item in the heap array. We use SlotNumber
+ * to store slot indexes. This doesn't actually provide any formal
+ * type-safety, but it makes the code more self-documenting.
+ */
+typedef int32 SlotNumber;
+
+/*
+ * Compare the tuples in the two given slots.
+ */
+static int32
+heap_compare_slots(Datum a, Datum b, void *arg)
+{
+ GatherMergeState *node = (GatherMergeState *) arg;
+ SlotNumber slot1 = DatumGetInt32(a);
+ SlotNumber slot2 = DatumGetInt32(b);
+
+ TupleTableSlot *s1 = node->gm_slots[slot1];
+ TupleTableSlot *s2 = node->gm_slots[slot2];
+ int nkey;
+
+ Assert(!TupIsNull(s1));
+ Assert(!TupIsNull(s2));
+
+ for (nkey = 0; nkey < node->gm_nkeys; nkey++)
+ {
+ SortSupport sortKey = node->gm_sortkeys + nkey;
+ AttrNumber attno = sortKey->ssup_attno;
+ Datum datum1,
+ datum2;
+ bool isNull1,
+ isNull2;
+ int compare;
+
+ datum1 = slot_getattr(s1, attno, &isNull1);
+ datum2 = slot_getattr(s2, attno, &isNull2);
+
+ compare = ApplySortComparator(datum1, isNull1,
+ datum2, isNull2,
+ sortKey);
+ if (compare != 0)
+ return -compare;
+ }
+ return 0;
+}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 30d733e..943f495 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -359,6 +359,31 @@ _copyGather(const Gather *from)
return newnode;
}
+/*
+ * _copyGatherMerge
+ */
+static GatherMerge *
+_copyGatherMerge(const GatherMerge *from)
+{
+ GatherMerge *newnode = makeNode(GatherMerge);
+
+ /*
+ * copy node superclass fields
+ */
+ CopyPlanFields((const Plan *) from, (Plan *) newnode);
+
+ /*
+ * copy remainder of node
+ */
+ COPY_SCALAR_FIELD(num_workers);
+ COPY_SCALAR_FIELD(numCols);
+ COPY_POINTER_FIELD(sortColIdx, from->numCols * sizeof(AttrNumber));
+ COPY_POINTER_FIELD(sortOperators, from->numCols * sizeof(Oid));
+ COPY_POINTER_FIELD(collations, from->numCols * sizeof(Oid));
+ COPY_POINTER_FIELD(nullsFirst, from->numCols * sizeof(bool));
+
+ return newnode;
+}
/*
* CopyScanFields
@@ -4521,6 +4546,9 @@ copyObject(const void *from)
case T_Gather:
retval = _copyGather(from);
break;
+ case T_GatherMerge:
+ retval = _copyGatherMerge(from);
+ break;
case T_SeqScan:
retval = _copySeqScan(from);
break;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 1560ac3..865ab5f 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -457,6 +457,35 @@ _outGather(StringInfo str, const Gather *node)
}
static void
+_outGatherMerge(StringInfo str, const GatherMerge *node)
+{
+ int i;
+
+ WRITE_NODE_TYPE("GATHERMERGE");
+
+ _outPlanInfo(str, (const Plan *) node);
+
+ WRITE_INT_FIELD(num_workers);
+ WRITE_INT_FIELD(numCols);
+
+ appendStringInfoString(str, " :sortColIdx");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %d", node->sortColIdx[i]);
+
+ appendStringInfoString(str, " :sortOperators");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %u", node->sortOperators[i]);
+
+ appendStringInfoString(str, " :collations");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %u", node->collations[i]);
+
+ appendStringInfoString(str, " :nullsFirst");
+ for (i = 0; i < node->numCols; i++)
+ appendStringInfo(str, " %s", booltostr(node->nullsFirst[i]));
+}
+
+static void
_outScan(StringInfo str, const Scan *node)
{
WRITE_NODE_TYPE("SCAN");
@@ -1984,6 +2013,17 @@ _outLimitPath(StringInfo str, const LimitPath *node)
}
static void
+_outGatherMergePath(StringInfo str, const GatherMergePath *node)
+{
+ WRITE_NODE_TYPE("GATHERMERGEPATH");
+
+ _outPathInfo(str, (const Path *) node);
+
+ WRITE_NODE_FIELD(subpath);
+ WRITE_INT_FIELD(num_workers);
+}
+
+static void
_outNestPath(StringInfo str, const NestPath *node)
{
WRITE_NODE_TYPE("NESTPATH");
@@ -3409,6 +3449,9 @@ outNode(StringInfo str, const void *obj)
case T_Gather:
_outGather(str, obj);
break;
+ case T_GatherMerge:
+ _outGatherMerge(str, obj);
+ break;
case T_Scan:
_outScan(str, obj);
break;
@@ -3739,6 +3782,9 @@ outNode(StringInfo str, const void *obj)
case T_LimitPath:
_outLimitPath(str, obj);
break;
+ case T_GatherMergePath:
+ _outGatherMergePath(str, obj);
+ break;
case T_NestPath:
_outNestPath(str, obj);
break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index dcfa6ee..8dabde6 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -2095,6 +2095,26 @@ _readGather(void)
}
/*
+ * _readGatherMerge
+ */
+static GatherMerge *
+_readGatherMerge(void)
+{
+ READ_LOCALS(GatherMerge);
+
+ ReadCommonPlan(&local_node->plan);
+
+ READ_INT_FIELD(num_workers);
+ READ_INT_FIELD(numCols);
+ READ_ATTRNUMBER_ARRAY(sortColIdx, local_node->numCols);
+ READ_OID_ARRAY(sortOperators, local_node->numCols);
+ READ_OID_ARRAY(collations, local_node->numCols);
+ READ_BOOL_ARRAY(nullsFirst, local_node->numCols);
+
+ READ_DONE();
+}
+
+/*
* _readHash
*/
static Hash *
@@ -2529,6 +2549,8 @@ parseNodeString(void)
return_value = _readUnique();
else if (MATCH("GATHER", 6))
return_value = _readGather();
+ else if (MATCH("GATHERMERGE", 11))
+ return_value = _readGatherMerge();
else if (MATCH("HASH", 4))
return_value = _readHash();
else if (MATCH("SETOP", 5))
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index 5c18987..824f09e 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -2047,39 +2047,51 @@ set_worktable_pathlist(PlannerInfo *root, RelOptInfo *rel, RangeTblEntry *rte)
/*
* generate_gather_paths
- * Generate parallel access paths for a relation by pushing a Gather on
- * top of a partial path.
+ * Generate parallel access paths for a relation by pushing a Gather or
+ * Gather Merge on top of a partial path.
*
* This must not be called until after we're done creating all partial paths
* for the specified relation. (Otherwise, add_partial_path might delete a
- * path that some GatherPath has a reference to.)
+ * path that some GatherPath or GatherMergePath has a reference to.)
*/
void
generate_gather_paths(PlannerInfo *root, RelOptInfo *rel)
{
Path *cheapest_partial_path;
Path *simple_gather_path;
+ ListCell *lc;
/* If there are no partial paths, there's nothing to do here. */
if (rel->partial_pathlist == NIL)
return;
/*
- * The output of Gather is currently always unsorted, so there's only one
- * partial path of interest: the cheapest one. That will be the one at
- * the front of partial_pathlist because of the way add_partial_path
- * works.
- *
- * Eventually, we should have a Gather Merge operation that can merge
- * multiple tuple streams together while preserving their ordering. We
- * could usefully generate such a path from each partial path that has
- * non-NIL pathkeys.
+ * The output of Gather is always unsorted, so there's only one partial
+ * path of interest: the cheapest one. That will be the one at the front
+ * of partial_pathlist because of the way add_partial_path works.
*/
cheapest_partial_path = linitial(rel->partial_pathlist);
simple_gather_path = (Path *)
create_gather_path(root, rel, cheapest_partial_path, rel->reltarget,
NULL, NULL);
add_path(rel, simple_gather_path);
+
+ /*
+ * For each useful ordering, we can consider an order-preserving Gather
+ * Merge.
+ */
+ foreach (lc, rel->partial_pathlist)
+ {
+ Path *subpath = (Path *) lfirst(lc);
+ GatherMergePath *path;
+
+ if (subpath->pathkeys == NIL)
+ continue;
+
+ path = create_gather_merge_path(root, rel, subpath, rel->reltarget,
+ subpath->pathkeys, NULL, NULL);
+ add_path(rel, &path->path);
+ }
}
/*
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index a43daa7..832d0ae 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -126,6 +126,7 @@ bool enable_nestloop = true;
bool enable_material = true;
bool enable_mergejoin = true;
bool enable_hashjoin = true;
+bool enable_gathermerge = true;
typedef struct
{
@@ -373,6 +374,73 @@ cost_gather(GatherPath *path, PlannerInfo *root,
}
/*
+ * cost_gather_merge
+ * Determines and returns the cost of a gather merge path.
+ *
+ * GatherMerge merges several pre-sorted input streams, using a heap that at
+ * any given instant holds the next tuple from each stream. If there are N
+ * streams, we need about N*log2(N) tuple comparisons to construct the heap at
+ * startup, and then for each output tuple, about log2(N) comparisons to
+ * replace the top heap entry with the next tuple from the same stream.
+ */
+void
+cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+ RelOptInfo *rel, ParamPathInfo *param_info,
+ Cost input_startup_cost, Cost input_total_cost,
+ double *rows)
+{
+ Cost startup_cost = 0;
+ Cost run_cost = 0;
+ Cost comparison_cost;
+ double N;
+ double logN;
+
+ /* Mark the path with the correct row estimate */
+ if (rows)
+ path->path.rows = *rows;
+ else if (param_info)
+ path->path.rows = param_info->ppi_rows;
+ else
+ path->path.rows = rel->rows;
+
+ if (!enable_gathermerge)
+ startup_cost += disable_cost;
+
+ /*
+ * Add one to the number of workers to account for the leader. This might
+ * be overgenerous since the leader will do less work than other workers
+ * in typical cases, but we'll go with it for now.
+ */
+ Assert(path->num_workers > 0);
+ N = (double) path->num_workers + 1;
+ logN = LOG2(N);
+
+ /* Assumed cost per tuple comparison */
+ comparison_cost = 2.0 * cpu_operator_cost;
+
+ /* Heap creation cost */
+ startup_cost += comparison_cost * N * logN;
+
+ /* Per-tuple heap maintenance cost */
+ run_cost += path->path.rows * comparison_cost * logN;
+
+ /* small cost for heap management, like cost_merge_append */
+ run_cost += cpu_operator_cost * path->path.rows;
+
+ /*
+ * Parallel setup and communication cost. Since Gather Merge, unlike
+ * Gather, requires us to block until a tuple is available from every
+ * worker, we bump the IPC cost up a little bit as compared with Gather.
+ * For lack of a better idea, charge an extra 5%.
+ */
+ startup_cost += parallel_setup_cost;
+ run_cost += parallel_tuple_cost * path->path.rows * 1.05;
+
+ path->path.startup_cost = startup_cost + input_startup_cost;
+ path->path.total_cost = (startup_cost + run_cost + input_total_cost);
+}
+
+/*
* cost_index
* Determines and returns the cost of scanning a relation using an index.
*
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index fae1f67..f3c6391 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -272,6 +272,8 @@ static ModifyTable *make_modifytable(PlannerInfo *root,
List *resultRelations, List *subplans,
List *withCheckOptionLists, List *returningLists,
List *rowMarks, OnConflictExpr *onconflict, int epqParam);
+static GatherMerge *create_gather_merge_plan(PlannerInfo *root,
+ GatherMergePath *best_path);
/*
@@ -469,6 +471,10 @@ create_plan_recurse(PlannerInfo *root, Path *best_path, int flags)
(LimitPath *) best_path,
flags);
break;
+ case T_GatherMerge:
+ plan = (Plan *) create_gather_merge_plan(root,
+ (GatherMergePath *) best_path);
+ break;
default:
elog(ERROR, "unrecognized node type: %d",
(int) best_path->pathtype);
@@ -1439,6 +1445,86 @@ create_gather_plan(PlannerInfo *root, GatherPath *best_path)
}
/*
+ * create_gather_merge_plan
+ *
+ * Create a Gather Merge plan for 'best_path' and (recursively)
+ * plans for its subpaths.
+ */
+static GatherMerge *
+create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
+{
+ GatherMerge *gm_plan;
+ Plan *subplan;
+ List *pathkeys = best_path->path.pathkeys;
+ int numsortkeys;
+ AttrNumber *sortColIdx;
+ Oid *sortOperators;
+ Oid *collations;
+ bool *nullsFirst;
+
+ /* As with Gather, it's best to project away columns in the workers. */
+ subplan = create_plan_recurse(root, best_path->subpath, CP_EXACT_TLIST);
+
+ /* See create_merge_append_plan for why there's no make_xxx function */
+ gm_plan = makeNode(GatherMerge);
+ gm_plan->plan.targetlist = subplan->targetlist;
+ gm_plan->num_workers = best_path->num_workers;
+ copy_generic_path_info(&gm_plan->plan, &best_path->path);
+
+ /* Gather Merge is pointless with no pathkeys; use Gather instead. */
+ Assert(pathkeys != NIL);
+
+ /* Compute sort column info, and adjust GatherMerge tlist as needed */
+ (void) prepare_sort_from_pathkeys(&gm_plan->plan, pathkeys,
+ best_path->path.parent->relids,
+ NULL,
+ true,
+ &gm_plan->numCols,
+ &gm_plan->sortColIdx,
+ &gm_plan->sortOperators,
+ &gm_plan->collations,
+ &gm_plan->nullsFirst);
+
+
+ /* Compute sort column info, and adjust subplan's tlist as needed */
+ subplan = prepare_sort_from_pathkeys(subplan, pathkeys,
+ best_path->subpath->parent->relids,
+ gm_plan->sortColIdx,
+ false,
+ &numsortkeys,
+ &sortColIdx,
+ &sortOperators,
+ &collations,
+ &nullsFirst);
+
+ /* As for MergeAppend, check that we got the same sort key information. */
+ Assert(numsortkeys == gm_plan->numCols);
+ if (memcmp(sortColIdx, gm_plan->sortColIdx,
+ numsortkeys * sizeof(AttrNumber)) != 0)
+ elog(ERROR, "GatherMerge child's targetlist doesn't match GatherMerge");
+ Assert(memcmp(sortOperators, gm_plan->sortOperators,
+ numsortkeys * sizeof(Oid)) == 0);
+ Assert(memcmp(collations, gm_plan->collations,
+ numsortkeys * sizeof(Oid)) == 0);
+ Assert(memcmp(nullsFirst, gm_plan->nullsFirst,
+ numsortkeys * sizeof(bool)) == 0);
+
+ /* Now, insert a Sort node if subplan isn't sufficiently ordered */
+ if (!pathkeys_contained_in(pathkeys, best_path->subpath->pathkeys))
+ subplan = (Plan *) make_sort(subplan, numsortkeys,
+ sortColIdx, sortOperators,
+ collations, nullsFirst);
+
+ /* Now insert the subplan under GatherMerge. */
+ gm_plan->plan.lefttree = subplan;
+
+ /* use parallel mode for parallel plans. */
+ root->glob->parallelModeNeeded = true;
+
+ return gm_plan;
+}
+
+/*
* create_projection_plan
*
* Create a plan tree to do a projection step and (recursively) plans
@@ -2277,7 +2363,6 @@ create_limit_plan(PlannerInfo *root, LimitPath *best_path, int flags)
return plan;
}
-
/*****************************************************************************
*
* BASE-RELATION SCAN METHODS
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 4b5902f..6e408cd 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -3712,8 +3712,7 @@ create_grouping_paths(PlannerInfo *root,
/*
* Now generate a complete GroupAgg Path atop of the cheapest partial
- * path. We need only bother with the cheapest path here, as the
- * output of Gather is never sorted.
+ * path. We can do this using either Gather or Gather Merge.
*/
if (grouped_rel->partial_pathlist)
{
@@ -3760,6 +3759,70 @@ create_grouping_paths(PlannerInfo *root,
parse->groupClause,
(List *) parse->havingQual,
dNumGroups));
+
+ /*
+ * The point of using Gather Merge rather than Gather is that it
+ * can preserve the ordering of the input path, so there's no
+ * reason to try it unless (1) it's possible to produce more than
+ * one output row and (2) we want the output path to be ordered.
+ */
+ if (parse->groupClause != NIL && root->group_pathkeys != NIL)
+ {
+ foreach(lc, grouped_rel->partial_pathlist)
+ {
+ Path *subpath = (Path *) lfirst(lc);
+ Path *gmpath;
+ double total_groups;
+
+ /*
+ * It's useful to consider paths that are already properly
+ * ordered for Gather Merge, because those don't need a
+ * sort. It's also useful to consider the cheapest path,
+ * because sorting it in parallel and then doing Gather
+ * Merge may be better than doing an unordered Gather
+ * followed by a sort. But there's no point in
+ * considering non-cheapest paths that aren't already
+ * sorted correctly.
+ */
+ if (path != subpath &&
+ !pathkeys_contained_in(root->group_pathkeys,
+ subpath->pathkeys))
+ continue;
+
+ total_groups = subpath->rows * subpath->parallel_workers;
+
+ gmpath = (Path *)
+ create_gather_merge_path(root,
+ grouped_rel,
+ subpath,
+ NULL,
+ root->group_pathkeys,
+ NULL,
+ &total_groups);
+
+ if (parse->hasAggs)
+ add_path(grouped_rel, (Path *)
+ create_agg_path(root,
+ grouped_rel,
+ gmpath,
+ target,
+ parse->groupClause ? AGG_SORTED : AGG_PLAIN,
+ AGGSPLIT_FINAL_DESERIAL,
+ parse->groupClause,
+ (List *) parse->havingQual,
+ &agg_final_costs,
+ dNumGroups));
+ else
+ add_path(grouped_rel, (Path *)
+ create_group_path(root,
+ grouped_rel,
+ gmpath,
+ target,
+ parse->groupClause,
+ (List *) parse->havingQual,
+ dNumGroups));
+ }
+ }
}
}
@@ -3857,6 +3920,16 @@ create_grouping_paths(PlannerInfo *root,
/* Now choose the best path(s) */
set_cheapest(grouped_rel);
+ /*
+ * We've been using the partial pathlist for the grouped relation to hold
+ * partially aggregated paths, but that's actually a little bit bogus
+ * because it's unsafe for later planning stages -- like ordered_rel ---
+ * to get the idea that they can use these partial paths as if they didn't
+ * need a FinalizeAggregate step. Zap the partial pathlist at this stage
+ * so we don't get confused.
+ */
+ grouped_rel->partial_pathlist = NIL;
+
return grouped_rel;
}
@@ -4326,6 +4399,56 @@ create_ordered_paths(PlannerInfo *root,
}
/*
+ * generate_gather_paths() will have already generated a simple Gather
+ * path for the best parallel path, if any, and the loop above will have
+ * considered sorting it. Similarly, generate_gather_paths() will also
+ * have generated order-preserving Gather Merge plans which can be used
+ * without sorting if they happen to match the sort_pathkeys, and the loop
+ * above will have handled those as well. However, there's one more
+ * possibility: it may make sense to sort the cheapest partial path
+ * according to the required output order and then use Gather Merge.
+ */
+ if (ordered_rel->consider_parallel && root->sort_pathkeys != NIL &&
+ input_rel->partial_pathlist != NIL)
+ {
+ Path *cheapest_partial_path;
+
+ cheapest_partial_path = linitial(input_rel->partial_pathlist);
+
+ /*
+ * If cheapest partial path doesn't need a sort, this is redundant
+ * with what's already been tried.
+ */
+ if (!pathkeys_contained_in(root->sort_pathkeys,
+ cheapest_partial_path->pathkeys))
+ {
+ Path *path;
+ double total_groups;
+
+ path = (Path *) create_sort_path(root,
+ ordered_rel,
+ cheapest_partial_path,
+ root->sort_pathkeys,
+ limit_tuples);
+
+ total_groups = cheapest_partial_path->rows *
+ cheapest_partial_path->parallel_workers;
+ path = (Path *)
+ create_gather_merge_path(root, ordered_rel,
+ path,
+ target, root->sort_pathkeys, NULL,
+ &total_groups);
+
+ /* Add projection step if needed */
+ if (path->pathtarget != target)
+ path = apply_projection_to_path(root, ordered_rel,
+ path, target);
+
+ add_path(ordered_rel, path);
+ }
+ }
+
+ /*
* If there is an FDW that's responsible for all baserels of the query,
* let it consider adding ForeignPaths.
*/
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index be267b9..cc1c66e 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -604,6 +604,7 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
break;
case T_Gather:
+ case T_GatherMerge:
set_upper_references(root, plan, rtoffset);
break;
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 9fc7489..a0c0cd8 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2686,6 +2686,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
case T_Sort:
case T_Unique:
case T_Gather:
+ case T_GatherMerge:
case T_SetOp:
case T_Group:
/* no node-type-specific fields need fixing */
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index f440875..29aaa73 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1630,6 +1630,66 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
}
/*
+ * create_gather_merge_path
+ *
+ * Creates a path corresponding to a gather merge scan, returning
+ * the pathnode.
+ */
+GatherMergePath *
+create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
+ PathTarget *target, List *pathkeys,
+ Relids required_outer, double *rows)
+{
+ GatherMergePath *pathnode = makeNode(GatherMergePath);
+ Cost input_startup_cost = 0;
+ Cost input_total_cost = 0;
+
+ Assert(subpath->parallel_safe);
+ Assert(pathkeys);
+
+ pathnode->path.pathtype = T_GatherMerge;
+ pathnode->path.parent = rel;
+ pathnode->path.param_info = get_baserel_parampathinfo(root, rel,
+ required_outer);
+ pathnode->path.parallel_aware = false;
+
+ pathnode->subpath = subpath;
+ pathnode->num_workers = subpath->parallel_workers;
+ pathnode->path.pathkeys = pathkeys;
+ pathnode->path.pathtarget = target ? target : rel->reltarget;
+ pathnode->path.rows += subpath->rows;
+
+ if (pathkeys_contained_in(pathkeys, subpath->pathkeys))
+ {
+ /* Subpath is adequately ordered, we won't need to sort it */
+ input_startup_cost += subpath->startup_cost;
+ input_total_cost += subpath->total_cost;
+ }
+ else
+ {
+ /* We'll need to insert a Sort node, so include cost for that */
+ Path sort_path; /* dummy for result of cost_sort */
+
+ cost_sort(&sort_path,
+ root,
+ pathkeys,
+ subpath->total_cost,
+ subpath->rows,
+ subpath->pathtarget->width,
+ 0.0,
+ work_mem,
+ -1);
+ input_startup_cost += sort_path.startup_cost;
+ input_total_cost += sort_path.total_cost;
+ }
+
+ cost_gather_merge(pathnode, root, rel, pathnode->path.param_info,
+ input_startup_cost, input_total_cost, rows);
+
+ return pathnode;
+}
+
+/*
* translate_sub_tlist - get subquery column numbers represented by tlist
*
* The given targetlist usually contains only Vars referencing the given relid.
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 74ca4e7..0a110d8 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -895,6 +895,15 @@ static struct config_bool ConfigureNamesBool[] =
true,
NULL, NULL, NULL
},
+ {
+ {"enable_gathermerge", PGC_USERSET, QUERY_TUNING_METHOD,
+ gettext_noop("Enables the planner's use of gather merge plans."),
+ NULL
+ },
+ &enable_gathermerge,
+ true,
+ NULL, NULL, NULL
+ },
{
{"geqo", PGC_USERSET, QUERY_TUNING_GEQO,
diff --git a/src/include/executor/nodeGatherMerge.h b/src/include/executor/nodeGatherMerge.h
new file mode 100644
index 0000000..3c8b42b
--- /dev/null
+++ b/src/include/executor/nodeGatherMerge.h
@@ -0,0 +1,27 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.h
+ * prototypes for nodeGatherMerge.c
+ *
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/executor/nodeGatherMerge.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef NODEGATHERMERGE_H
+#define NODEGATHERMERGE_H
+
+#include "nodes/execnodes.h"
+
+extern GatherMergeState *ExecInitGatherMerge(GatherMerge * node,
+ EState *estate,
+ int eflags);
+extern TupleTableSlot *ExecGatherMerge(GatherMergeState * node);
+extern void ExecEndGatherMerge(GatherMergeState * node);
+extern void ExecReScanGatherMerge(GatherMergeState * node);
+extern void ExecShutdownGatherMerge(GatherMergeState * node);
+
+#endif /* NODEGATHERMERGE_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index f9bcdd6..f4dfb7a 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -2004,6 +2004,35 @@ typedef struct GatherState
} GatherState;
/* ----------------
+ * GatherMergeState information
+ *
+ * Gather merge nodes launch 1 or more parallel workers, run a
+ * subplan which produces sorted output in each worker, and then
+ * merge the results into a single sorted stream.
+ * ----------------
+ */
+struct GMReaderTuple;
+
+typedef struct GatherMergeState
+{
+ PlanState ps; /* its first field is NodeTag */
+ bool initialized;
+ struct ParallelExecutorInfo *pei;
+ int nreaders;
+ int nworkers_launched;
+ struct TupleQueueReader **reader;
+ TupleDesc tupDesc;
+ TupleTableSlot **gm_slots;
+ struct binaryheap *gm_heap; /* binary heap of slot indices */
+ bool gm_initialized; /* gather merge initialized? */
+ bool need_to_scan_locally;
+ int gm_nkeys;
+ SortSupport gm_sortkeys; /* array of length gm_nkeys */
+ struct GMReaderTupleBuffer *gm_tuple_buffers; /* tuple buffer per
+ * reader */
+} GatherMergeState;
+
+/* ----------------
* HashState information
* ----------------
*/
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 95dd8ba..3530e41 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -76,6 +76,7 @@ typedef enum NodeTag
T_WindowAgg,
T_Unique,
T_Gather,
+ T_GatherMerge,
T_Hash,
T_SetOp,
T_LockRows,
@@ -125,6 +126,7 @@ typedef enum NodeTag
T_WindowAggState,
T_UniqueState,
T_GatherState,
+ T_GatherMergeState,
T_HashState,
T_SetOpState,
T_LockRowsState,
@@ -246,6 +248,7 @@ typedef enum NodeTag
T_MaterialPath,
T_UniquePath,
T_GatherPath,
+ T_GatherMergePath,
T_ProjectionPath,
T_ProjectSetPath,
T_SortPath,
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index f72f7a8..8dbce7a 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -785,6 +785,22 @@ typedef struct Gather
bool invisible; /* suppress EXPLAIN display (for testing)? */
} Gather;
+/* ------------
+ * gather merge node
+ * ------------
+ */
+typedef struct GatherMerge
+{
+ Plan plan;
+ int num_workers;
+ /* remaining fields are just like the sort-key info in struct Sort */
+ int numCols; /* number of sort-key columns */
+ AttrNumber *sortColIdx; /* their indexes in the target list */
+ Oid *sortOperators; /* OIDs of operators to sort them by */
+ Oid *collations; /* OIDs of collations */
+ bool *nullsFirst; /* NULLS FIRST/LAST directions */
+} GatherMerge;
+
/* ----------------
* hash build node
*
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index 643be54..291318e 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -1203,6 +1203,19 @@ typedef struct GatherPath
} GatherPath;
/*
+ * GatherMergePath runs several copies of a plan in parallel and
+ * collects the results. For Gather Merge, the parallel leader always
+ * executes the plan.
+ */
+typedef struct GatherMergePath
+{
+ Path path;
+ Path *subpath; /* path for each worker */
+ int num_workers; /* number of workers sought to help */
+} GatherMergePath;
+
+
+/*
* All join-type paths share these fields.
*/
diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h
index 0e68264..0856926 100644
--- a/src/include/optimizer/cost.h
+++ b/src/include/optimizer/cost.h
@@ -66,6 +66,7 @@ extern bool enable_nestloop;
extern bool enable_material;
extern bool enable_mergejoin;
extern bool enable_hashjoin;
+extern bool enable_gathermerge;
extern int constraint_exclusion;
extern double clamp_row_est(double nrows);
@@ -200,5 +201,9 @@ extern Selectivity clause_selectivity(PlannerInfo *root,
int varRelid,
JoinType jointype,
SpecialJoinInfo *sjinfo);
+extern void cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+ RelOptInfo *rel, ParamPathInfo *param_info,
+ Cost input_startup_cost, Cost input_total_cost,
+ double *rows);
#endif /* COST_H */
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index 7b41317..e0ab894 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -76,6 +76,13 @@ extern UniquePath *create_unique_path(PlannerInfo *root, RelOptInfo *rel,
extern GatherPath *create_gather_path(PlannerInfo *root,
RelOptInfo *rel, Path *subpath, PathTarget *target,
Relids required_outer, double *rows);
+extern GatherMergePath *create_gather_merge_path(PlannerInfo *root,
+ RelOptInfo *rel,
+ Path *subpath,
+ PathTarget *target,
+ List *pathkeys,
+ Relids required_outer,
+ double *rows);
extern SubqueryScanPath *create_subqueryscan_path(PlannerInfo *root,
RelOptInfo *rel, Path *subpath,
List *pathkeys, Relids required_outer);
diff --git a/src/test/regress/expected/sysviews.out b/src/test/regress/expected/sysviews.out
index d48abd7..568b783 100644
--- a/src/test/regress/expected/sysviews.out
+++ b/src/test/regress/expected/sysviews.out
@@ -73,6 +73,7 @@ select name, setting from pg_settings where name like 'enable%';
name | setting
----------------------+---------
enable_bitmapscan | on
+ enable_gathermerge | on
enable_hashagg | on
enable_hashjoin | on
enable_indexonlyscan | on
@@ -83,7 +84,7 @@ select name, setting from pg_settings where name like 'enable%';
enable_seqscan | on
enable_sort | on
enable_tidscan | on
-(11 rows)
+(12 rows)
-- Test that the pg_timezone_names and pg_timezone_abbrevs views are
-- more-or-less working. We can't test their contents in any great detail
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index c4235ae..7251e2c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -777,6 +777,9 @@ GV
Gather
+GatherMerge
+GatherMergePath
+GatherMergeState
GatherPath
GatherState
Gene
GenericCosts
GenericExprState
Hi,
I have done some testing with the latest patch:
1) ./pgbench postgres -i -F 100 -s 20
2) update pgbench_accounts set filler = 'foo' where aid%10 = 0;
3) vacuum analyze pgbench_accounts;
4) set max_parallel_workers_per_gather = 4;
5) set max_parallel_workers = 4;
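With this setup, a quick sanity check that the planner actually picks
Gather Merge (a sketch, assuming the pgbench tables loaded in step 1 and
the settings above):

explain analyze
select aid from pgbench_accounts
where filler like '%foo%'
order by aid;
-- Expect Gather Merge -> Sort -> Parallel Seq Scan, with
-- "Workers Launched" reflecting the settings above.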
Machine Configuration:
RAM: 16GB
VCPU: 8
Disk: 640 GB

Test case script with out-file attached.

LCOV Report:
File                                     | Line cov. w/o tests | Line cov. w/ tests | Func cov. w/o tests | Func cov. w/ tests
-----------------------------------------+---------------------+--------------------+---------------------+-------------------
src/backend/executor/nodeGatherMerge.c   | 0.0%                | 92.3%              | 0.0%                | 92.3%
src/backend/commands/explain.c           | 65.5%               | 68.4%              | 81.7%               | 85.0%
src/backend/executor/execProcnode.c      | 92.50%              | 95.1%              | 100%                | 100.0%
src/backend/nodes/copyfuncs.c            | 77.2%               | 77.6%              | 73.0%               | 73.4%
src/backend/nodes/outfuncs.c             | 32.5%               | 35.9%              | 31.9%               | 36.2%
src/backend/nodes/readfuncs.c            | 62.7%               | 68.2%              | 53.3%               | 61.7%
src/backend/optimizer/path/allpaths.c    | 93.0%               | 93.4%              | 100%                | 100%
src/backend/optimizer/path/costsize.c    | 96.7%               | 96.8%              | 100%                | 100%
src/backend/optimizer/plan/createplan.c  | 89.9%               | 91.2%              | 95.0%               | 96.0%
src/backend/optimizer/plan/planner.c     | 95.1%               | 95.2%              | 97.3%               | 97.3%
src/backend/optimizer/plan/setrefs.c     | 94.7%               | 94.7%              | 97.1%               | 97.1%
src/backend/optimizer/plan/subselect.c   | 94.1%               | 94.1%              | 100%                | 100%
src/backend/optimizer/util/pathnode.c    | 95.6%               | 96.1%              | 100%                | 100%
src/backend/utils/misc/guc.c             | 67.4%               | 67.4%              | 91.9%               | 91.9%
On Wed, Feb 1, 2017 at 7:02 PM, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:
Due to the recent commit below, the patch no longer applies cleanly on the
master branch:
commit d002f16c6ec38f76d1ee97367ba6af3000d441d0
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Mon Jan 30 17:15:42 2017 -0500
    Add a regression test script dedicated to exercising system views.
Please find attached the latest patch.
On Wed, Feb 1, 2017 at 5:55 PM, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:
I am sorry for the delay; here is the latest re-based patch.
My colleague Neha Sharma reported one regression with the patch, where the
explain output for the Sort node under Gather Merge always showed the cost
as zero:
explain analyze select '' AS "xxx" from pgbench_accounts where filler
like '%foo%' order by aid;
                                                           QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------
 Gather Merge (cost=47169.81..70839.91 rows=197688 width=36) (actual time=406.297..653.572 rows=200000 loops=1)
   Workers Planned: 4
   Workers Launched: 4
   ->  Sort (*cost=0.00..0.00 rows=0 width=0*) (actual time=368.945..391.124 rows=40000 loops=5)
         Sort Key: aid
         Sort Method: quicksort Memory: 3423kB
         ->  Parallel Seq Scan on pgbench_accounts (cost=0.00..42316.60 rows=49422 width=36) (actual time=296.612..338.873 rows=40000 loops=5)
               Filter: (filler ~~ '%foo%'::text)
               Rows Removed by Filter: 360000
 Planning time: 0.184 ms
 Execution time: 734.963 ms
This patch also fixes that issue.
On Wed, Feb 1, 2017 at 11:27 AM, Michael Paquier <michael.paquier@gmail.com> wrote:
On Mon, Jan 23, 2017 at 6:51 PM, Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote:
On Wed, Jan 18, 2017 at 11:31 AM, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:
The patch needs a rebase after the commit 69f4b9c85f168ae006929eec4.
Is an update going to be provided? I have moved this patch to next CF
with "waiting on author" as status.
--
Michael
--
Rushabh Lathia
--
Rushabh Lathia
--
Regards,
Neha Sharma
Thanks Neha for the test LCOV report.
I ran the TPCH benchmark on scale factor 10 with the latest patch, against
master as of 1st Feb (f1169ab501ce90e035a7c6489013a1d4c250ac92), with:
- max_worker_processes = DEFAULT (8)
- max_parallel_workers_per_gather = 4
- Cold cache environment is ensured. With every query execution, the server
is stopped and the OS caches are dropped.
- power2 machine with 512GB of RAM
Here are the results. I did three runs and took the median; for each query
the first timing (in ms) is without the patch and the second is with Gather
Merge.
Query 3: 45035.425 - 43935.497
Query 4: 7098.259 - 6651.498
Query 5: 37114.338 - 37605.579
Query 9: 87544.144 - 44617.138
Query 10: 43810.497 - 37133.404
Query 12: 20309.993 - 19639.213
Query 15: 61837.415 - 60240.762
Query 17: 134121.961 - 116943.542
Query 18: 248157.735 - 193463.311
Query 20: 203448.405 - 166733.112
Also attaching the output of those TPCH runs.
--
Rushabh Lathia
Attachments:
without_gm.tar.gz (application/x-gzip)