Gather Merge

Started by Rushabh Lathia over 9 years ago · 69 messages
#1 Rushabh Lathia
rushabh.lathia@gmail.com
5 attachment(s)

Hi hackers,

Attached is the patch to implement Gather Merge. The Gather Merge node
assumes that the results from each worker are ordered with respect to each
other, and then does a final merge pass over them. This is so that we get the
top-level query ordering we want. The final plan for such a query would look
something like this:

Gather Merge
   ->  Sort
         ->  Parallel Seq Scan on foo
               Filter: something

With this we now have a new parallel node which always returns sorted
results, so that any query having a pathkey can build a gather merge path.
Currently, if a query has a pathkey and we want to make it parallel-aware,
the plan would be something like this:

Sort
   ->  Gather
         ->  Parallel Seq Scan on foo
               Filter: something

With Gather Merge it is now also possible to have a plan like:

Finalize GroupAggregate
   ->  Gather Merge
         ->  Partial GroupAggregate
               ->  Sort

With Gather Merge, the sort can be pushed below the Gather Merge node, which
is valuable because it gives very good performance benefits. Consider the
following test results:

1) ./db/bin/pgbench postgres -i -F 100 -s 20
2) update pgbench_accounts set filler = 'foo' where aid%10 = 0;
3) vacuum analyze pgbench_accounts;
4) set max_parallel_workers_per_gather = 4;

Without patch:

postgres=# explain analyze select aid, sum(abalance) from pgbench_accounts where filler like '%foo%' group by aid;
                                                                QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=81696.51..85242.09 rows=202605 width=12) (actual time=1037.212..1162.086 rows=200000 loops=1)
   Group Key: aid
   ->  Sort  (cost=81696.51..82203.02 rows=202605 width=8) (actual time=1037.203..1072.446 rows=200000 loops=1)
         Sort Key: aid
         Sort Method: external sort  Disk: 3520kB
         ->  Seq Scan on pgbench_accounts  (cost=0.00..61066.59 rows=202605 width=8) (actual time=801.398..868.390 rows=200000 loops=1)
               Filter: (filler ~~ '%foo%'::text)
               Rows Removed by Filter: 1800000
 Planning time: 0.133 ms
 Execution time: 1171.656 ms
(10 rows)

With Patch:

postgres=# explain analyze select aid, sum(abalance) from pgbench_accounts where filler like '%foo%' group by aid;
                                                                        QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------
 Finalize GroupAggregate  (cost=47274.13..56644.58 rows=202605 width=12) (actual time=315.457..561.825 rows=200000 loops=1)
   Group Key: aid
   ->  Gather Merge  (cost=47274.13..54365.27 rows=50651 width=0) (actual time=315.451..451.886 rows=200000 loops=1)
         Workers Planned: 4
         Workers Launched: 4
         ->  Partial GroupAggregate  (cost=46274.09..47160.49 rows=50651 width=12) (actual time=306.830..333.908 rows=40000 loops=5)
               Group Key: aid
               ->  Sort  (cost=46274.09..46400.72 rows=50651 width=8) (actual time=306.822..310.800 rows=40000 loops=5)
                     Sort Key: aid
                     Sort Method: quicksort  Memory: 2543kB
                     ->  Parallel Seq Scan on pgbench_accounts  (cost=0.00..42316.15 rows=50651 width=8) (actual time=237.552..255.968 rows=40000 loops=5)
                           Filter: (filler ~~ '%foo%'::text)
                           Rows Removed by Filter: 360000
 Planning time: 0.200 ms
 Execution time: 572.221 ms
(15 rows)

I ran the TPCH benchmark queries with the patch and found that 7 out of 22
queries end up picking the Gather Merge path.

The benchmark numbers below were taken under the following configuration:

- Scale factor = 10
- max_worker_processes = DEFAULT (8)
- max_parallel_workers_per_gather = 4
- A cold cache environment is ensured: for every query execution the server
  is stopped and the OS caches are dropped.
- The reported execution times (in ms) are the median of 3 executions.
- power2 machine with 512GB of RAM
- PFA the CPU info of the benchmark machine (benchmark_machine_info.txt)

Query 4: With GM 7901.480 -> Without GM 9064.776
Query 5: With GM 53452.126 -> Without GM 55059.511
Query 9: With GM 52613.132 -> Without GM 98206.793
Query 15: With GM 68051.058 -> Without GM 68918.378
Query 17: With GM 129236.075 -> Without GM 160451.094
Query 20: With GM 259144.232 -> Without GM 306256.322
Query 21: With GM 153483.497 -> Without GM 168169.916

From these results we can see that queries 9, 17 and 20 are the ones which
show a good performance benefit with Gather Merge.

PFA tpch_results.tar.gz for the explain analyze output of the TPCH queries
(with and without the patch).

I also ran the TPCH benchmark queries with different numbers of workers and
found that query 18 also starts picking Gather Merge with more than 6
workers. PFA TPCH_GatherMerge.pdf for the detailed benchmark results.

Implementation details:

New Gather Merge node:

The patch introduces a new node type for Gather Merge. The Gather Merge
implementation is mostly similar to what Gather does. The major difference is
that the Gather node has two TupleTableSlots (one for the leader and one for
the tuples read from the workers), whereas Gather Merge has a TupleTableSlot
per worker, plus one for the leader. This is because Gather Merge needs to
fill every slot, then build a heap of the tuples and return the lowest one;
a condensed sketch of that merge step is shown below.
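
Condensed from gather_merge_getnext() in the attached patch (the heap holds
one slot index per participant):

    if (!gm_state->gm_initialized)
    {
        /* Fill one slot per worker (plus the leader) and build the heap. */
        gather_merge_init(gm_state);
    }
    else
    {
        /* Refill the slot whose tuple we returned last time and re-heapify. */
        i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));

        if (gather_merge_readnext(gm_state, i, true))
            binaryheap_replace_first(gm_state->gm_heap, Int32GetDatum(i));
        else
            (void) binaryheap_remove_first(gm_state->gm_heap);
    }

    /* The heap top now identifies the slot holding the lowest tuple. */
    i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
    return gm_state->gm_slots[i];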

The patch generates the gather merge path from:

a) create_ordered_paths(), for each path in the partial_pathlist. If the
path's pathkeys contain the sort pathkeys, create the gather merge path
directly; otherwise create a sort first and then the gather merge path on top
of it.

Example:

explain analyze
select * from pgbench_accounts where filler like '%foo%' order by aid;

b) create_distinct_paths(): when a sort-based implementation of DISTINCT is
possible.

Example:

explain analyze
select distinct * from pgbench_accounts where filler like '%foo%' order by aid;

c) create_grouping_paths(): while generating a complete GroupAgg path, loop
over the partial path list and, if a partial path's pathkeys contain the
group_pathkeys, generate a gather merge path.

Example:

explain analyze
select aid, sum(abalance) from pgbench_accounts where filler like '%foo%' group by aid;

In all the above-mentioned cases, the patch gives almost a 2x performance
gain. PFA pgbench_query.out for the explain analyze output of these queries.

Gather Merge reads a tuple from each queue and then picks the lowest one, so
each time it has to read a tuple from a queue in wait mode. During testing I
found that some queries spent a noticeable amount of time reading tuples from
the queues. So in the patch I introduced a tuple array: once we have read a
tuple in wait mode, we try to read more tuples in nowait mode and store them
in the tuple array. Once one tuple has come through the queue, there is a
good chance that more tuples are already waiting in it, so we just read them,
if any, and store them in the tuple array. With this I found good performance
benefits on some of the complex TPC-H queries.
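
The batching itself is done by fill_tuple_array(); the essential loop,
condensed from the attached patch, is:

    for (i = gm_tuple->nTuples; i < MAX_TUPLE_STORE; i++)
    {
        /* Read in nowait mode; stop as soon as the queue has nothing ready. */
        gm_tuple->tuple[i] = gm_readnext_tuple(gm_state, reader, false,
                                               &gm_tuple->done);
        if (!HeapTupleIsValid(gm_tuple->tuple[i]))
            break;
        gm_tuple->nTuples++;
    }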

Costing:

Gather Merge merges several pre-sorted input streams using a heap.
Accordingly, the costing for Gather Merge is essentially the combination of
cost_gather and cost_merge_append; the formula used in cost_gather_merge() is
sketched below.
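
With N = the number of workers (clamped to at least 2, to avoid log(0)) and
rows = the path's row estimate, cost_gather_merge() in the attached patch
charges roughly:

    comparison_cost = 2.0 * cpu_operator_cost;

    startup_cost += comparison_cost * N * log2(N);          /* heap creation */
    run_cost     += rows * comparison_cost * 2.0 * log2(N); /* heap maintenance */
    run_cost     += cpu_operator_cost * rows;               /* heap management */
    startup_cost += parallel_setup_cost;                    /* parallel setup */
    run_cost     += parallel_tuple_cost * rows;             /* tuple communication */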

Open Issue:

- Commit af33039317ddc4a0e38a02e2255c2bf453115fd2 fixed a leak in tqueue.c by
calling gather_readnext() in a per-tuple context. For Gather Merge that is
not directly possible, as we store tuples in the tuple array and we want a
tuple to be freed only after it has passed through the merge algorithm. One
idea is to also call gm_readnext_tuple() under the per-tuple context (which
would fix the leak in tqueue.c) and then store a copy of the tuple in the
tuple array; see the sketch after this list.
- Need to find a way to add a simple test for this as part of the regression
tests.
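
A rough sketch of that idea (hypothetical, not part of the attached patch):
perform the queue read under the per-tuple context and copy the tuple before
stashing it in the array:

    /* Hypothetical sketch only, based on the idea above. */
    oldcontext = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);
    tup = gm_readnext_tuple(gm_state, reader, false, &gm_tuple->done);
    MemoryContextSwitchTo(oldcontext);

    /* Keep a copy that survives the per-tuple reset until the merge consumes it. */
    if (HeapTupleIsValid(tup))
        gm_tuple->tuple[i] = heap_copytuple(tup);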

Thanks to my colleague Robert Haas for his help with the design and some
preliminary review of the patch.

Please let me know your thoughts, and thanks for reading.

Regards,
Rushabh Lathia
www.EnterpriseDB.com

Attachments:

TPCH_GatherMerge.pdf (application/pdf)
tpch_results.tar.gz (application/x-gzip)
gather_merge_v1.patch (application/x-download):
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 1247433..cb0299a 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -881,6 +881,9 @@ ExplainNode(PlanState *planstate, List *ancestors,
 		case T_Gather:
 			pname = sname = "Gather";
 			break;
+		case T_GatherMerge:
+			pname = sname = "Gather Merge";
+			break;
 		case T_IndexScan:
 			pname = sname = "Index Scan";
 			break;
@@ -1370,6 +1373,26 @@ ExplainNode(PlanState *planstate, List *ancestors,
 					ExplainPropertyBool("Single Copy", gather->single_copy, es);
 			}
 			break;
+		case T_GatherMerge:
+			{
+				GatherMerge *gm = (GatherMerge *) plan;
+
+				show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+				if (plan->qual)
+					show_instrumentation_count("Rows Removed by Filter", 1,
+											   planstate, es);
+				ExplainPropertyInteger("Workers Planned",
+									   gm->num_workers, es);
+				if (es->analyze)
+				{
+					int			nworkers;
+
+					nworkers = ((GatherMergeState *) planstate)->nworkers_launched;
+					ExplainPropertyInteger("Workers Launched",
+										   nworkers, es);
+				}
+			}
+			break;
 		case T_FunctionScan:
 			if (es->verbose)
 			{
diff --git a/src/backend/executor/Makefile b/src/backend/executor/Makefile
index 51edd4c..7e2f4e2 100644
--- a/src/backend/executor/Makefile
+++ b/src/backend/executor/Makefile
@@ -19,7 +19,7 @@ OBJS = execAmi.o execCurrent.o execGrouping.o execIndexing.o execJunk.o \
        nodeBitmapAnd.o nodeBitmapOr.o \
        nodeBitmapHeapscan.o nodeBitmapIndexscan.o nodeCustom.o nodeGather.o \
        nodeHash.o nodeHashjoin.o nodeIndexscan.o nodeIndexonlyscan.o \
-       nodeLimit.o nodeLockRows.o \
+       nodeLimit.o nodeLockRows.o nodeGatherMerge.o \
        nodeMaterial.o nodeMergeAppend.o nodeMergejoin.o nodeModifyTable.o \
        nodeNestloop.o nodeFunctionscan.o nodeRecursiveunion.o nodeResult.o \
        nodeSamplescan.o nodeSeqscan.o nodeSetOp.o nodeSort.o nodeUnique.o \
diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index 554244f..45b36af 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -101,6 +101,7 @@
 #include "executor/nodeModifyTable.h"
 #include "executor/nodeNestloop.h"
 #include "executor/nodeGather.h"
+#include "executor/nodeGatherMerge.h"
 #include "executor/nodeRecursiveunion.h"
 #include "executor/nodeResult.h"
 #include "executor/nodeSamplescan.h"
@@ -314,6 +315,11 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
 												  estate, eflags);
 			break;
 
+		case T_GatherMerge:
+			result = (PlanState *) ExecInitGatherMerge((GatherMerge *) node,
+													   estate, eflags);
+			break;
+
 		case T_Hash:
 			result = (PlanState *) ExecInitHash((Hash *) node,
 												estate, eflags);
@@ -515,6 +521,10 @@ ExecProcNode(PlanState *node)
 			result = ExecGather((GatherState *) node);
 			break;
 
+		case T_GatherMergeState:
+			result = ExecGatherMerge((GatherMergeState *) node);
+			break;
+
 		case T_HashState:
 			result = ExecHash((HashState *) node);
 			break;
@@ -673,6 +683,10 @@ ExecEndNode(PlanState *node)
 			ExecEndGather((GatherState *) node);
 			break;
 
+		case T_GatherMergeState:
+			ExecEndGatherMerge((GatherMergeState *) node);
+			break;
+
 		case T_IndexScanState:
 			ExecEndIndexScan((IndexScanState *) node);
 			break;
@@ -806,6 +820,9 @@ ExecShutdownNode(PlanState *node)
 		case T_GatherState:
 			ExecShutdownGather((GatherState *) node);
 			break;
+		case T_GatherMergeState:
+			ExecShutdownGatherMerge((GatherMergeState *) node);
+			break;
 		default:
 			break;
 	}
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
new file mode 100644
index 0000000..fd884a8
--- /dev/null
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -0,0 +1,693 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.c
+ *	  routines to handle GatherMerge nodes.
+ *
+ * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/backend/executor/nodeGatherMerge.c
+ *
+ *-------------------------------------------------------------------------
+ */
+/* INTERFACE ROUTINES
+ *		ExecInitGatherMerge		- initialize the GatherMerge node
+ *		ExecGatherMerge			- retrieve the next tuple from the node
+ *		ExecEndGatherMerge		- shut down the GatherMerge node
+ *		ExecReScanGatherMerge	- rescan the GatherMerge node
+ */
+
+#include "postgres.h"
+
+#include "access/relscan.h"
+#include "access/xact.h"
+#include "executor/execdebug.h"
+#include "executor/execParallel.h"
+#include "executor/nodeGatherMerge.h"
+#include "executor/nodeSubplan.h"
+#include "executor/tqueue.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+#include "utils/rel.h"
+#include "lib/binaryheap.h"
+
+/*
+ * Tuple array for each worker
+ */
+typedef struct GMReaderTuple
+{
+	HeapTuple  *tuple;
+	int			readCounter;
+	int			nTuples;
+	bool		done;
+}	GMReaderTuple;
+
+/* Tuple array size */
+#define MAX_TUPLE_STORE 10
+
+static int32 heap_compare_slots(Datum a, Datum b, void *arg);
+static TupleTableSlot *gather_merge_getnext(GatherMergeState * gm_state);
+static HeapTuple gm_readnext_tuple(GatherMergeState * gm_state, int nreader, bool force, bool *done);
+static void gather_merge_init(GatherMergeState * gm_state);
+static void ExecShutdownGatherMergeWorkers(GatherMergeState * node);
+static bool gather_merge_readnext(GatherMergeState * gm_state, int reader, bool force);
+static void fill_tuple_array(GatherMergeState * gm_state, int reader);
+
+/* ----------------------------------------------------------------
+ *		ExecInitGatherMerge
+ * ----------------------------------------------------------------
+ */
+GatherMergeState *
+ExecInitGatherMerge(GatherMerge * node, EState *estate, int eflags)
+{
+	GatherMergeState *gm_state;
+	Plan	   *outerNode;
+	bool		hasoid;
+	TupleDesc	tupDesc;
+
+	/* Gather merge node doesn't have innerPlan node. */
+	Assert(innerPlan(node) == NULL);
+
+	/*
+	 * create state structure
+	 */
+	gm_state = makeNode(GatherMergeState);
+	gm_state->ps.plan = (Plan *) node;
+	gm_state->ps.state = estate;
+
+	/*
+	 * Miscellaneous initialization
+	 *
+	 * create expression context for node
+	 */
+	ExecAssignExprContext(estate, &gm_state->ps);
+
+	/*
+	 * initialize child expressions
+	 */
+	gm_state->ps.targetlist = (List *)
+		ExecInitExpr((Expr *) node->plan.targetlist,
+					 (PlanState *) gm_state);
+	gm_state->ps.qual = (List *)
+		ExecInitExpr((Expr *) node->plan.qual,
+					 (PlanState *) gm_state);
+
+	/*
+	 * tuple table initialization
+	 */
+	gm_state->funnel_slot = ExecInitExtraTupleSlot(estate);
+	ExecInitResultTupleSlot(estate, &gm_state->ps);
+
+	/*
+	 * now initialize outer plan
+	 */
+	outerNode = outerPlan(node);
+	outerPlanState(gm_state) = ExecInitNode(outerNode, estate, eflags);
+
+	gm_state->ps.ps_TupFromTlist = false;
+
+	/*
+	 * Initialize result tuple type and projection info.
+	 */
+	ExecAssignResultTypeFromTL(&gm_state->ps);
+	ExecAssignProjectionInfo(&gm_state->ps, NULL);
+
+	gm_state->gm_initialized = false;
+
+	/*
+	 * initialize sort-key information
+	 */
+	if (node->numCols)
+	{
+		int			i;
+
+		gm_state->gm_nkeys = node->numCols;
+		gm_state->gm_sortkeys = palloc0(sizeof(SortSupportData) * node->numCols);
+		for (i = 0; i < node->numCols; i++)
+		{
+			SortSupport sortKey = gm_state->gm_sortkeys + i;
+
+			sortKey->ssup_cxt = CurrentMemoryContext;
+			sortKey->ssup_collation = node->collations[i];
+			sortKey->ssup_nulls_first = node->nullsFirst[i];
+			sortKey->ssup_attno = node->sortColIdx[i];
+
+			/*
+			 * We don't perform abbreviated key conversion here, for the same
+			 * reasons that it isn't used in MergeAppend
+			 */
+			sortKey->abbreviate = false;
+
+			PrepareSortSupportFromOrderingOp(node->sortOperators[i], sortKey);
+		}
+	}
+
+	/*
+	 * Initialize funnel slot to same tuple descriptor as outer plan.
+	 */
+	if (!ExecContextForcesOids(&gm_state->ps, &hasoid))
+		hasoid = false;
+	tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid);
+	ExecSetSlotDescriptor(gm_state->funnel_slot, tupDesc);
+
+	return gm_state;
+}
+
+/* ----------------------------------------------------------------
+ *		ExecGatherMerge(node)
+ *
+ *		Scans the relation via multiple workers and returns
+ *		the next qualifying tuple.
+ * ----------------------------------------------------------------
+ */
+TupleTableSlot *
+ExecGatherMerge(GatherMergeState * node)
+{
+	TupleTableSlot *fslot = node->funnel_slot;
+	int			i;
+	TupleTableSlot *slot;
+	TupleTableSlot *resultSlot;
+	ExprDoneCond isDone;
+	ExprContext *econtext;
+
+	/*
+	 * Initialize the parallel context and workers on first execution. We do
+	 * this on first execution rather than during node initialization, as it
+	 * needs to allocate a large dynamic shared memory segment, so it is
+	 * better to do it only if it is really needed.
+	 */
+	if (!node->initialized)
+	{
+		EState	   *estate = node->ps.state;
+		GatherMerge *gm = (GatherMerge *) node->ps.plan;
+
+		/*
+		 * Sometimes we might have to run without parallelism; but if parallel
+		 * mode is active then we can try to fire up some workers.
+		 */
+		if (gm->num_workers > 0 && IsInParallelMode())
+		{
+			ParallelContext *pcxt;
+			bool		got_any_worker = false;
+
+			/* Initialize the workers required to execute Gather node. */
+			if (!node->pei)
+				node->pei = ExecInitParallelPlan(node->ps.lefttree,
+												 estate,
+												 gm->num_workers);
+
+			/*
+			 * Register backend workers. We might not get as many as we
+			 * requested, or indeed any at all.
+			 */
+			pcxt = node->pei->pcxt;
+			LaunchParallelWorkers(pcxt);
+			node->nworkers_launched = pcxt->nworkers_launched;
+
+			/* Set up tuple queue readers to read the results. */
+			if (pcxt->nworkers_launched > 0)
+			{
+				node->nreaders = 0;
+				node->reader =
+					palloc(pcxt->nworkers_launched * sizeof(TupleQueueReader *));
+
+				Assert(gm->numCols);
+
+				for (i = 0; i < pcxt->nworkers_launched; ++i)
+				{
+					if (pcxt->worker[i].bgwhandle == NULL)
+						continue;
+
+					shm_mq_set_handle(node->pei->tqueue[i],
+									  pcxt->worker[i].bgwhandle);
+					node->reader[node->nreaders] =
+						CreateTupleQueueReader(node->pei->tqueue[i],
+											   fslot->tts_tupleDescriptor);
+					node->nreaders++;
+					got_any_worker = true;
+				}
+			}
+
+			/* No workers?	Then never mind. */
+			if (!got_any_worker ||
+				node->nreaders < 2)
+			{
+				ExecShutdownGatherMergeWorkers(node);
+				node->nreaders = 0;
+			}
+		}
+
+		/* always allow the leader to participate in gather merge */
+		node->need_to_scan_locally = true;
+		node->initialized = true;
+	}
+
+	/*
+	 * Check to see if we're still projecting out tuples from a previous scan
+	 * tuple (because there is a function-returning-set in the projection
+	 * expressions).  If so, try to project another one.
+	 */
+	if (node->ps.ps_TupFromTlist)
+	{
+		resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+		if (isDone == ExprMultipleResult)
+			return resultSlot;
+		/* Done with that source tuple... */
+		node->ps.ps_TupFromTlist = false;
+	}
+
+	/*
+	 * Reset per-tuple memory context to free any expression evaluation
+	 * storage allocated in the previous tuple cycle.  Note we can't do this
+	 * until we're done projecting.
+	 */
+	econtext = node->ps.ps_ExprContext;
+	ResetExprContext(econtext);
+
+	/* Get and return the next tuple, projecting if necessary. */
+	for (;;)
+	{
+		/*
+		 * Get next tuple, either from one of our workers, or by running the
+		 * plan ourselves.
+		 */
+		slot = gather_merge_getnext(node);
+		if (TupIsNull(slot))
+			return NULL;
+
+		/*
+		 * form the result tuple using ExecProject(), and return it --- unless
+		 * the projection produces an empty set, in which case we must loop
+		 * back around for another tuple
+		 */
+		econtext->ecxt_outertuple = slot;
+		resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+
+		if (isDone != ExprEndResult)
+		{
+			node->ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
+			return resultSlot;
+		}
+	}
+
+	return slot;
+}
+
+/* ----------------------------------------------------------------
+ *		ExecEndGatherMerge
+ *
+ *		frees any storage allocated through C routines.
+ * ----------------------------------------------------------------
+ */
+void
+ExecEndGatherMerge(GatherMergeState * node)
+{
+	ExecShutdownGatherMerge(node);
+	ExecFreeExprContext(&node->ps);
+	ExecClearTuple(node->ps.ps_ResultTupleSlot);
+	ExecEndNode(outerPlanState(node));
+}
+
+/* ----------------------------------------------------------------
+ *		ExecShutdownGatherMerge
+ *
+ *		Destroy the setup for parallel workers including parallel context.
+ *		Collect all the stats after workers are stopped, else some work
+ *		done by workers won't be accounted.
+ * ----------------------------------------------------------------
+ */
+void
+ExecShutdownGatherMerge(GatherMergeState * node)
+{
+	ExecShutdownGatherMergeWorkers(node);
+
+	/* Now destroy the parallel context. */
+	if (node->pei != NULL)
+	{
+		ExecParallelCleanup(node->pei);
+		node->pei = NULL;
+	}
+}
+
+/* ----------------------------------------------------------------
+ *		ExecReScanGatherMerge
+ *
+ *		Re-initialize the workers and rescans a relation via them.
+ * ----------------------------------------------------------------
+ */
+void
+ExecReScanGatherMerge(GatherMergeState * node)
+{
+	/*
+	 * Re-initialize the parallel workers to perform rescan of relation. We
+	 * want to gracefully shutdown all the workers so that they should be able
+	 * to propagate any error or other information to master backend before
+	 * dying.  Parallel context will be reused for rescan.
+	 */
+	ExecShutdownGatherMergeWorkers(node);
+
+	node->initialized = false;
+
+	if (node->pei)
+		ExecParallelReinitialize(node->pei);
+
+	ExecReScan(node->ps.lefttree);
+}
+
+/* ----------------------------------------------------------------
+ *		ExecShutdownGatherMergeWorkers
+ *
+ *		Destroy the parallel workers.  Collect all the stats after
+ *		workers are stopped, else some work done by workers won't be
+ *		accounted.
+ * ----------------------------------------------------------------
+ */
+static void
+ExecShutdownGatherMergeWorkers(GatherMergeState * node)
+{
+	/* Shut down tuple queue readers before shutting down workers. */
+	if (node->reader != NULL)
+	{
+		int			i;
+
+		for (i = 0; i < node->nreaders; ++i)
+			if (node->reader[i])
+				DestroyTupleQueueReader(node->reader[i]);
+
+		pfree(node->reader);
+		node->reader = NULL;
+	}
+
+	/* Now shut down the workers. */
+	if (node->pei != NULL)
+		ExecParallelFinish(node->pei);
+}
+
+/*
+ * Initialize the Gather merge tuple read.
+ *
+ * Pull at least one tuple from each worker + leader and set up the heap.
+ */
+static void
+gather_merge_init(GatherMergeState * gm_state)
+{
+	TupleTableSlot *fslot = gm_state->funnel_slot;
+	int			nreaders = gm_state->nreaders;
+	bool		initialize = true;
+	int			i;
+
+	/*
+	 * Allocate gm_slots for the number of workers plus one more slot for the
+	 * leader.  The last slot is always for the leader.  The leader always
+	 * calls ExecProcNode() to read a tuple, which returns a TupleTableSlot
+	 * that is then assigned directly to the leader's gm_slot, so we just
+	 * initialize the leader's gm_slot to NULL.  For the other slots the code
+	 * below calls ExecInitExtraTupleSlot(), which initializes the worker
+	 * slots.
+	 */
+	gm_state->gm_slots =
+		palloc((gm_state->nreaders + 1) * sizeof(TupleTableSlot *));
+	gm_state->gm_slots[gm_state->nreaders] = NULL;
+
+	/* Initialize the tuple slot and tuple array for each worker */
+	gm_state->gm_tuple = (GMReaderTuple *) palloc0(sizeof(GMReaderTuple) * (gm_state->nreaders));
+	for (i = 0; i < gm_state->nreaders; i++)
+	{
+		/* Allocate the tuple array with MAX_TUPLE_STORE size */
+		gm_state->gm_tuple[i].tuple = (HeapTuple *) palloc0(sizeof(HeapTuple) * MAX_TUPLE_STORE);
+
+		/* Initialize slot for worker */
+		gm_state->gm_slots[i] = ExecInitExtraTupleSlot(gm_state->ps.state);
+		ExecSetSlotDescriptor(gm_state->gm_slots[i],
+							  fslot->tts_tupleDescriptor);
+	}
+
+	/* Allocate the resources for the sort */
+	gm_state->gm_heap = binaryheap_allocate(gm_state->nreaders + 1, heap_compare_slots, gm_state);
+
+	/*
+	 * First try to read a tuple from each worker (including the leader) in
+	 * nowait mode, so that we initialize the read from each participant.
+	 * After that, if any participant was unable to produce a tuple, re-read
+	 * from it, this time in wait mode.  For a participant that already
+	 * produced a tuple in the earlier loop, just fill its tuple array if
+	 * more tuples are available.
+	 */
+reread:
+	for (i = 0; i < nreaders + 1; i++)
+	{
+		if (TupIsNull(gm_state->gm_slots[i]) ||
+			gm_state->gm_slots[i]->tts_isempty)
+		{
+			if (gather_merge_readnext(gm_state, i, initialize ? false : true))
+			{
+				binaryheap_add_unordered(gm_state->gm_heap,
+										 Int32GetDatum(i));
+			}
+		}
+		else
+			fill_tuple_array(gm_state, i);
+	}
+	initialize = false;
+
+	for (i = 0; i < nreaders; i++)
+		if (TupIsNull(gm_state->gm_slots[i]) || gm_state->gm_slots[i]->tts_isempty)
+			goto reread;
+
+	binaryheap_build(gm_state->gm_heap);
+	gm_state->gm_initialized = true;
+}
+
+/*
+ * Read the next tuple for gather merge.
+ *
+ * Fetch the next tuple in sort order out of the heap.
+ */
+static TupleTableSlot *
+gather_merge_getnext(GatherMergeState * gm_state)
+{
+	TupleTableSlot *fslot = gm_state->funnel_slot;
+	int			i;
+
+	/*
+	 * First time through: pull the first tuple from each participant, and set
+	 * up the heap.
+	 */
+	if (gm_state->gm_initialized == false)
+		gather_merge_init(gm_state);
+	else
+	{
+		/*
+		 * Otherwise, pull the next tuple from whichever participant we
+		 * returned from last time, and reinsert the index into the heap,
+		 * because it might now compare differently against the existing
+		 * elements of the heap.
+		 */
+		i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+
+		if (gather_merge_readnext(gm_state, i, true))
+			binaryheap_replace_first(gm_state->gm_heap, Int32GetDatum(i));
+		else
+			(void) binaryheap_remove_first(gm_state->gm_heap);
+	}
+
+	if (binaryheap_empty(gm_state->gm_heap))
+	{
+		/* All the queues are exhausted, and so is the heap */
+		return ExecClearTuple(fslot);
+	}
+	else
+	{
+		i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+		return gm_state->gm_slots[i];
+	}
+
+	return ExecClearTuple(fslot);
+}
+
+/*
+ * Read tuples for the given reader in nowait mode, and fill the tuple array.
+ */
+static void
+fill_tuple_array(GatherMergeState * gm_state, int reader)
+{
+	GMReaderTuple *gm_tuple = &gm_state->gm_tuple[reader];
+	int			i;
+
+	/* Last slot is for leader and we don't build tuple array for leader */
+	if (reader == gm_state->nreaders)
+		return;
+
+	/*
+	 * If all the tuples from the tuple array have already been read, reset
+	 * the counters to zero.
+	 */
+	if (gm_tuple->nTuples == gm_tuple->readCounter)
+		gm_tuple->nTuples = gm_tuple->readCounter = 0;
+
+	/* Tuple array is already full? */
+	if (gm_tuple->nTuples == MAX_TUPLE_STORE)
+		return;
+
+	for (i = gm_tuple->nTuples; i < MAX_TUPLE_STORE; i++)
+	{
+		gm_tuple->tuple[i] = gm_readnext_tuple(gm_state,
+											   reader,
+											   false,
+											   &gm_tuple->done);
+		if (!HeapTupleIsValid(gm_tuple->tuple[i]))
+			break;
+		gm_tuple->nTuples++;
+	}
+}
+
+/*
+ * Attempt to read a tuple for the given reader and store it into the
+ * reader's tuple slot.
+ *
+ * If the worker's tuple array contains any tuples, just return the next one
+ * from the tuple array.  Otherwise read a tuple from the queue and also
+ * attempt to fill the tuple array.
+ *
+ * When force is true, the tuple is read in wait mode.  For gather merge we
+ * need to refill the slot from which we returned the previous tuple, so that
+ * requires reading the tuple in wait mode.  During the initialization phase
+ * we first try to read tuples in nowait mode, as we want to initialize all
+ * the readers.  Refer to gather_merge_init() for more details.
+ *
+ * Returns true if a tuple was found for the reader, otherwise returns
+ * false.
+ */
+static bool
+gather_merge_readnext(GatherMergeState * gm_state, int reader, bool force)
+{
+	HeapTuple	tup = NULL;
+
+	/* Are we here for the leader? */
+	if (gm_state->nreaders == reader)
+	{
+		if (gm_state->need_to_scan_locally)
+		{
+			PlanState  *outerPlan = outerPlanState(gm_state);
+			TupleTableSlot *outerTupleSlot;
+
+			outerTupleSlot = ExecProcNode(outerPlan);
+
+			if (!TupIsNull(outerTupleSlot))
+			{
+				gm_state->gm_slots[reader] = outerTupleSlot;
+				return true;
+			}
+			gm_state->need_to_scan_locally = false;
+		}
+		return false;
+	}
+	/* Does the tuple array have any available tuples? */
+	else if (gm_state->gm_tuple[reader].nTuples >
+			 gm_state->gm_tuple[reader].readCounter)
+	{
+		GMReaderTuple *gm_tuple = &gm_state->gm_tuple[reader];
+
+		tup = gm_tuple->tuple[gm_tuple->readCounter++];
+	}
+	/* reader exhausted? */
+	else if (gm_state->gm_tuple[reader].done)
+	{
+		DestroyTupleQueueReader(gm_state->reader[reader]);
+		gm_state->reader[reader] = NULL;
+		return false;
+	}
+	else
+	{
+		tup = gm_readnext_tuple(gm_state, reader, force, NULL);
+
+		/*
+		 * Try to read more tuples in nowait mode and store them in the
+		 * tuple array.
+		 */
+		if (HeapTupleIsValid(tup))
+			fill_tuple_array(gm_state, reader);
+		else
+			return false;
+	}
+
+	Assert(HeapTupleIsValid(tup));
+
+	/* Build the TupleTableSlot for the given tuple */
+	ExecStoreTuple(tup,			/* tuple to store */
+				   gm_state->gm_slots[reader],	/* slot in which to store the
+												 * tuple */
+				   InvalidBuffer,		/* buffer associated with this tuple */
+				   true);		/* pfree this pointer if not from heap */
+
+	return true;
+}
+
+/*
+ * Attempt to read a tuple from given reader.
+ */
+static HeapTuple
+gm_readnext_tuple(GatherMergeState * gm_state, int nreader, bool force, bool *done)
+{
+	TupleQueueReader *reader;
+	HeapTuple	tup = NULL;
+
+	if (done != NULL)
+		*done = false;
+
+	/* Check for async events, particularly messages from workers. */
+	CHECK_FOR_INTERRUPTS();
+
+	/* Attempt to read a tuple. */
+	reader = gm_state->reader[nreader];
+	tup = TupleQueueReaderNext(reader, force ? false : true, done);
+
+	return tup;
+}
+
+/*
+ * We have one slot for each item in the heap array.  We use SlotNumber
+ * to store slot indexes.  This doesn't actually provide any formal
+ * type-safety, but it makes the code more self-documenting.
+ */
+typedef int32 SlotNumber;
+
+/*
+ * Compare the tuples in the two given slots.
+ */
+static int32
+heap_compare_slots(Datum a, Datum b, void *arg)
+{
+	GatherMergeState *node = (GatherMergeState *) arg;
+	SlotNumber	slot1 = DatumGetInt32(a);
+	SlotNumber	slot2 = DatumGetInt32(b);
+
+	TupleTableSlot *s1 = node->gm_slots[slot1];
+	TupleTableSlot *s2 = node->gm_slots[slot2];
+	int			nkey;
+
+	Assert(!TupIsNull(s1));
+	Assert(!TupIsNull(s2));
+
+	for (nkey = 0; nkey < node->gm_nkeys; nkey++)
+	{
+		SortSupport sortKey = node->gm_sortkeys + nkey;
+		AttrNumber	attno = sortKey->ssup_attno;
+		Datum		datum1,
+					datum2;
+		bool		isNull1,
+					isNull2;
+		int			compare;
+
+		datum1 = slot_getattr(s1, attno, &isNull1);
+		datum2 = slot_getattr(s2, attno, &isNull2);
+
+		compare = ApplySortComparator(datum1, isNull1,
+									  datum2, isNull2,
+									  sortKey);
+		if (compare != 0)
+			return -compare;
+	}
+	return 0;
+}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 71714bc..8b92c1a 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -341,6 +341,31 @@ _copyGather(const Gather *from)
 	return newnode;
 }
 
+/*
+ * _copyGatherMerge
+ */
+static GatherMerge *
+_copyGatherMerge(const GatherMerge *from)
+{
+	GatherMerge	   *newnode = makeNode(GatherMerge);
+
+	/*
+	 * copy node superclass fields
+	 */
+	CopyPlanFields((const Plan *) from, (Plan *) newnode);
+
+	/*
+	 * copy remainder of node
+	 */
+	COPY_SCALAR_FIELD(num_workers);
+	COPY_SCALAR_FIELD(numCols);
+	COPY_POINTER_FIELD(sortColIdx, from->numCols * sizeof(AttrNumber));
+	COPY_POINTER_FIELD(sortOperators, from->numCols * sizeof(Oid));
+	COPY_POINTER_FIELD(collations, from->numCols * sizeof(Oid));
+	COPY_POINTER_FIELD(nullsFirst, from->numCols * sizeof(bool));
+
+	return newnode;
+}
 
 /*
  * CopyScanFields
@@ -4343,6 +4368,9 @@ copyObject(const void *from)
 		case T_Gather:
 			retval = _copyGather(from);
 			break;
+		case T_GatherMerge:
+			retval = _copyGatherMerge(from);
+			break;
 		case T_SeqScan:
 			retval = _copySeqScan(from);
 			break;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index ae86954..5dea0f7 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -447,6 +447,35 @@ _outGather(StringInfo str, const Gather *node)
 }
 
 static void
+_outGatherMerge(StringInfo str, const GatherMerge *node)
+{
+	int		i;
+
+	WRITE_NODE_TYPE("GATHERMERGE");
+
+	_outPlanInfo(str, (const Plan *) node);
+
+	WRITE_INT_FIELD(num_workers);
+	WRITE_INT_FIELD(numCols);
+
+	appendStringInfoString(str, " :sortColIdx");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %d", node->sortColIdx[i]);
+
+	appendStringInfoString(str, " :sortOperators");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %u", node->sortOperators[i]);
+
+	appendStringInfoString(str, " :collations");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %u", node->collations[i]);
+
+	appendStringInfoString(str, " :nullsFirst");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %s", booltostr(node->nullsFirst[i]));
+}
+
+static void
 _outScan(StringInfo str, const Scan *node)
 {
 	WRITE_NODE_TYPE("SCAN");
@@ -1964,6 +1993,18 @@ _outLimitPath(StringInfo str, const LimitPath *node)
 }
 
 static void
+_outGatherMergePath(StringInfo str, const GatherMergePath *node)
+{
+	WRITE_NODE_TYPE("GATHERMERGEPATH");
+
+	_outPathInfo(str, (const Path *) node);
+
+	WRITE_NODE_FIELD(subpath);
+	WRITE_INT_FIELD(num_workers);
+	WRITE_BOOL_FIELD(single_copy);
+}
+
+static void
 _outNestPath(StringInfo str, const NestPath *node)
 {
 	WRITE_NODE_TYPE("NESTPATH");
@@ -3322,6 +3363,9 @@ outNode(StringInfo str, const void *obj)
 			case T_Gather:
 				_outGather(str, obj);
 				break;
+			case T_GatherMerge:
+				_outGatherMerge(str, obj);
+				break;
 			case T_Scan:
 				_outScan(str, obj);
 				break;
@@ -3649,6 +3693,9 @@ outNode(StringInfo str, const void *obj)
 			case T_LimitPath:
 				_outLimitPath(str, obj);
 				break;
+			case T_GatherMergePath:
+				_outGatherMergePath(str, obj);
+				break;
 			case T_NestPath:
 				_outNestPath(str, obj);
 				break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 917e6c8..77a452e 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -2075,6 +2075,26 @@ _readGather(void)
 }
 
 /*
+ * _readGatherMerge
+ */
+static GatherMerge *
+_readGatherMerge(void)
+{
+	READ_LOCALS(GatherMerge);
+
+	ReadCommonPlan(&local_node->plan);
+
+	READ_INT_FIELD(num_workers);
+	READ_INT_FIELD(numCols);
+	READ_ATTRNUMBER_ARRAY(sortColIdx, local_node->numCols);
+	READ_OID_ARRAY(sortOperators, local_node->numCols);
+	READ_OID_ARRAY(collations, local_node->numCols);
+	READ_BOOL_ARRAY(nullsFirst, local_node->numCols);
+
+	READ_DONE();
+}
+
+/*
  * _readHash
  */
 static Hash *
@@ -2477,6 +2497,8 @@ parseNodeString(void)
 		return_value = _readUnique();
 	else if (MATCH("GATHER", 6))
 		return_value = _readGather();
+	else if (MATCH("GATHERMERGE", 11))
+		return_value = _readGatherMerge();
 	else if (MATCH("HASH", 4))
 		return_value = _readHash();
 	else if (MATCH("SETOP", 5))
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index 2a49639..5dbb83e 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -126,6 +126,7 @@ bool		enable_nestloop = true;
 bool		enable_material = true;
 bool		enable_mergejoin = true;
 bool		enable_hashjoin = true;
+bool		enable_gathermerge = true;
 
 typedef struct
 {
@@ -391,6 +392,70 @@ cost_gather(GatherPath *path, PlannerInfo *root,
 }
 
 /*
+ * cost_gather_merge
+ *	  Determines and returns the cost of gather merge path.
+ *
+ * GatherMerge merges several pre-sorted input streams, using a heap that at
+ * any given instant holds the next tuple from each stream. If there are N
+ * streams, we need about N*log2(N) tuple comparisons to construct the heap at
+ * startup, and then for each output tuple, about log2(N) comparisons to delete
+ * the top heap entry and another log2(N) comparisons to insert its successor
+ * from the same stream.
+ *
+ * The heap is never spilled to disk, since we assume N is not very large.  So
+ * this is much simpler than cost_sort.
+ */
+void
+cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+				  RelOptInfo *rel, ParamPathInfo *param_info,
+				  Cost input_startup_cost, Cost input_total_cost)
+{
+	Cost		startup_cost = 0;
+	Cost		run_cost = 0;
+	Cost		comparison_cost;
+	double		N;
+	double		logN;
+
+	/* Mark the path with the correct row estimate */
+	if (param_info)
+		path->path.rows = param_info->ppi_rows;
+	else
+		path->path.rows = path->subpath->rows;
+
+	if (!enable_gathermerge)
+		startup_cost += disable_cost;
+
+	/*
+	 * Avoid log(0)...
+	 */
+	N = (path->num_workers < 2) ? 2.0 : (double) path->num_workers;
+	logN = LOG2(N);
+
+	/* Assumed cost per tuple comparison */
+	comparison_cost = 2.0 * cpu_operator_cost;
+
+	/* Heap creation cost */
+	startup_cost += comparison_cost * N * logN;
+
+	/* Per-tuple heap maintenance cost */
+	run_cost += path->path.rows * comparison_cost * 2.0 * logN;
+
+	/* small cost for heap management, like cost_merge_append */
+	run_cost += cpu_operator_cost * path->path.rows;
+
+	/*
+	 * Parallel setup and communication cost.  Gather Merge requires tuples
+	 * to be read from each worker in wait mode, so we account for some
+	 * extra cost for that.
+	 */
+	startup_cost += parallel_setup_cost;
+	run_cost += parallel_tuple_cost * path->path.rows;
+
+	path->path.startup_cost = startup_cost + input_startup_cost;
+	path->path.total_cost = (startup_cost + run_cost + input_total_cost);
+}
+
+/*
  * cost_index
  *	  Determines and returns the cost of scanning a relation using an index.
  *
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 47158f6..96bed2e 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -270,6 +270,11 @@ static ModifyTable *make_modifytable(PlannerInfo *root,
 				 List *resultRelations, List *subplans,
 				 List *withCheckOptionLists, List *returningLists,
 				 List *rowMarks, OnConflictExpr *onconflict, int epqParam);
+static GatherMerge *create_gather_merge_plan(PlannerInfo *root,
+											 GatherMergePath *best_path);
+static GatherMerge *make_gather_merge(List *qptlist, List *qpqual,
+									  int nworkers, bool single_copy,
+									  Plan *subplan);
 
 
 /*
@@ -463,6 +468,10 @@ create_plan_recurse(PlannerInfo *root, Path *best_path, int flags)
 											  (LimitPath *) best_path,
 											  flags);
 			break;
+		case T_GatherMerge:
+			plan = (Plan *) create_gather_merge_plan(root,
+												(GatherMergePath *) best_path);
+			break;
 		default:
 			elog(ERROR, "unrecognized node type: %d",
 				 (int) best_path->pathtype);
@@ -2246,6 +2255,90 @@ create_limit_plan(PlannerInfo *root, LimitPath *best_path, int flags)
 	return plan;
 }
 
+/*
+ * create_gather_merge_plan
+ *
+ *	  Create a Gather merge plan for 'best_path' and (recursively)
+ *	  plans for its subpaths.
+ */
+static GatherMerge *
+create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
+{
+	GatherMerge *gm_plan;
+	Plan	   *subplan;
+	List	   *pathkeys = best_path->path.pathkeys;
+	int			numsortkeys;
+	AttrNumber *sortColIdx;
+	Oid		   *sortOperators;
+	Oid		   *collations;
+	bool	   *nullsFirst;
+
+	subplan = create_plan_recurse(root, best_path->subpath, CP_EXACT_TLIST);
+
+	gm_plan = make_gather_merge(subplan->targetlist,
+								NIL,
+								best_path->num_workers,
+								best_path->single_copy,
+								subplan);
+
+	copy_generic_path_info(&gm_plan->plan, &best_path->path);
+
+	if (pathkeys)
+	{
+		/* Compute sort column info, and adjust GatherMerge tlist as needed */
+		(void) prepare_sort_from_pathkeys(&gm_plan->plan, pathkeys,
+										  best_path->path.parent->relids,
+										  NULL,
+										  true,
+										  &gm_plan->numCols,
+										  &gm_plan->sortColIdx,
+										  &gm_plan->sortOperators,
+										  &gm_plan->collations,
+										  &gm_plan->nullsFirst);
+
+
+		/* Compute sort column info, and adjust subplan's tlist as needed */
+		subplan = prepare_sort_from_pathkeys(subplan, pathkeys,
+											 best_path->subpath->parent->relids,
+											 gm_plan->sortColIdx,
+											 false,
+											 &numsortkeys,
+											 &sortColIdx,
+											 &sortOperators,
+											 &collations,
+											 &nullsFirst);
+
+		/*
+		 * Check that we got the same sort key information.  We just Assert
+		 * that the sortops match, since those depend only on the pathkeys;
+		 * but it seems like a good idea to check the sort column numbers
+		 * explicitly, to ensure the tlists really do match up.
+		 */
+		Assert(numsortkeys == gm_plan->numCols);
+		if (memcmp(sortColIdx, gm_plan->sortColIdx,
+				   numsortkeys * sizeof(AttrNumber)) != 0)
+			elog(ERROR, "GatherMerge child's targetlist doesn't match GatherMerge");
+		Assert(memcmp(sortOperators, gm_plan->sortOperators,
+					  numsortkeys * sizeof(Oid)) == 0);
+		Assert(memcmp(collations, gm_plan->collations,
+					  numsortkeys * sizeof(Oid)) == 0);
+		Assert(memcmp(nullsFirst, gm_plan->nullsFirst,
+					  numsortkeys * sizeof(bool)) == 0);
+
+		/* Now, insert a Sort node if subplan isn't sufficiently ordered */
+		if (!pathkeys_contained_in(pathkeys, best_path->subpath->pathkeys))
+			subplan = (Plan *) make_sort(subplan, numsortkeys,
+										 sortColIdx, sortOperators,
+										 collations, nullsFirst);
+
+		gm_plan->plan.lefttree = subplan;
+	}
+
+	/* use parallel mode for parallel plans. */
+	root->glob->parallelModeNeeded = true;
+
+	return gm_plan;
+}
 
 /*****************************************************************************
  *
@@ -5902,6 +5995,26 @@ make_gather(List *qptlist,
 	return node;
 }
 
+static GatherMerge *
+make_gather_merge(List *qptlist,
+				  List *qpqual,
+				  int nworkers,
+				  bool single_copy,
+				  Plan *subplan)
+{
+	GatherMerge	*node = makeNode(GatherMerge);
+	Plan		*plan = &node->plan;
+
+	/* cost should be inserted by caller */
+	plan->targetlist = qptlist;
+	plan->qual = qpqual;
+	plan->lefttree = subplan;
+	plan->righttree = NULL;
+	node->num_workers = nworkers;
+
+	return node;
+}
+
 /*
  * distinctList is a list of SortGroupClauses, identifying the targetlist
  * items that should be considered by the SetOp filter.  The input path must
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index f657ffc..7339f03 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -3719,14 +3719,59 @@ create_grouping_paths(PlannerInfo *root,
 
 		/*
 		 * Now generate a complete GroupAgg Path atop of the cheapest partial
-		 * path. We need only bother with the cheapest path here, as the
-		 * output of Gather is never sorted.
+		 * path. We generate a Gather path based on the cheapest partial path,
+		 * and a GatherMerge path for each partial path that is properly sorted.
 		 */
 		if (grouped_rel->partial_pathlist)
 		{
 			Path	   *path = (Path *) linitial(grouped_rel->partial_pathlist);
 			double		total_groups = path->rows * path->parallel_workers;
 
+			/*
+			 * GatherMerge is always sorted, so if there is GROUP BY clause,
+			 * try to generate GatherMerge path for each partial path.
+			 */
+			if (parse->groupClause)
+			{
+				foreach(lc, grouped_rel->partial_pathlist)
+				{
+					Path	   *gmpath = (Path *) lfirst(lc);
+
+					if (!pathkeys_contained_in(root->group_pathkeys, gmpath->pathkeys))
+						continue;
+
+					/* create gather merge path */
+					gmpath = (Path *) create_gather_merge_path(root,
+															   grouped_rel,
+															   gmpath,
+															   NULL,
+															   root->group_pathkeys,
+															   NULL);
+
+					if (parse->hasAggs)
+						add_path(grouped_rel, (Path *)
+								 create_agg_path(root,
+												 grouped_rel,
+												 gmpath,
+												 target,
+												 parse->groupClause ? AGG_SORTED : AGG_PLAIN,
+												 AGGSPLIT_FINAL_DESERIAL,
+												 parse->groupClause,
+												 (List *) parse->havingQual,
+												 &agg_final_costs,
+												 dNumGroups));
+					else
+						add_path(grouped_rel, (Path *)
+								create_group_path(root,
+												  grouped_rel,
+												  gmpath,
+												  target,
+												  parse->groupClause,
+												  (List *) parse->havingQual,
+												  dNumGroups));
+				}
+			}
+
 			path = (Path *) create_gather_path(root,
 											   grouped_rel,
 											   path,
@@ -3864,6 +3909,12 @@ create_grouping_paths(PlannerInfo *root,
 	/* Now choose the best path(s) */
 	set_cheapest(grouped_rel);
 
+	/*
+	 * Partial pathlist generated for grouped relation are no further useful,
+	 * so just reset it with null.
+	 */
+	grouped_rel->partial_pathlist = NIL;
+
 	return grouped_rel;
 }
 
@@ -4160,6 +4211,36 @@ create_distinct_paths(PlannerInfo *root,
 			}
 		}
 
+		/*
+		 * Generate GatherMerge path for each partial path.
+		 */
+		foreach(lc, input_rel->partial_pathlist)
+		{
+			Path	   *path = (Path *) lfirst(lc);
+
+			if (!pathkeys_contained_in(needed_pathkeys, path->pathkeys))
+			{
+				path = (Path *) create_sort_path(root, distinct_rel,
+												 path,
+												 needed_pathkeys,
+												 -1.0);
+			}
+
+			/* create gather merge path */
+			path = (Path *) create_gather_merge_path(root,
+													 distinct_rel,
+													 path,
+													 NULL,
+													 needed_pathkeys,
+													 NULL);
+			add_path(distinct_rel, (Path *)
+					 create_upper_unique_path(root,
+											  distinct_rel,
+											  path,
+											  list_length(root->distinct_pathkeys),
+											  numDistinctRows));
+		}
+
 		/* For explicit-sort case, always use the more rigorous clause */
 		if (list_length(root->distinct_pathkeys) <
 			list_length(root->sort_pathkeys))
@@ -4304,6 +4385,39 @@ create_ordered_paths(PlannerInfo *root,
 	ordered_rel->useridiscurrent = input_rel->useridiscurrent;
 	ordered_rel->fdwroutine = input_rel->fdwroutine;
 
+	foreach(lc, input_rel->partial_pathlist)
+	{
+		Path	   *path = (Path *) lfirst(lc);
+		bool		is_sorted;
+
+		is_sorted = pathkeys_contained_in(root->sort_pathkeys,
+										  path->pathkeys);
+		if (!is_sorted)
+		{
+			/* An explicit sort here can take advantage of LIMIT */
+			path = (Path *) create_sort_path(root,
+											 ordered_rel,
+											 path,
+											 root->sort_pathkeys,
+											 limit_tuples);
+		}
+
+		/* create gather merge path */
+		path = (Path *) create_gather_merge_path(root,
+												 ordered_rel,
+												 path,
+												 target,
+												 root->sort_pathkeys,
+												 NULL);
+
+		/* Add projection step if needed */
+		if (path->pathtarget != target)
+			path = apply_projection_to_path(root, ordered_rel,
+											path, target);
+
+		add_path(ordered_rel, path);
+	}
+
 	foreach(lc, input_rel->pathlist)
 	{
 		Path	   *path = (Path *) lfirst(lc);
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index d10a983..d14db7d 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -605,6 +605,7 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 			break;
 
 		case T_Gather:
+		case T_GatherMerge:
 			set_upper_references(root, plan, rtoffset);
 			break;
 
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 263ba45..760f519 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2682,6 +2682,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		case T_Sort:
 		case T_Unique:
 		case T_Gather:
+		case T_GatherMerge:
 		case T_SetOp:
 		case T_Group:
 			break;
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index abb7507..f83cd77 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1630,6 +1630,66 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
 }
 
 /*
+ * create_gather_merge_path
+ *
+ *	  Creates a path corresponding to a gather merge scan, returning
+ *	  the pathnode.
+ */
+GatherMergePath *
+create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
+						 PathTarget *target, List *pathkeys,
+						 Relids required_outer)
+{
+	GatherMergePath *pathnode = makeNode(GatherMergePath);
+	Cost			 input_startup_cost = 0;
+	Cost			 input_total_cost = 0;
+
+	Assert(subpath->parallel_safe);
+	Assert(pathkeys);
+
+	pathnode->path.pathtype = T_GatherMerge;
+	pathnode->path.parent = rel;
+	pathnode->path.param_info = get_baserel_parampathinfo(root, rel,
+														  required_outer);
+	pathnode->path.parallel_aware = false;
+
+	pathnode->subpath = subpath;
+	pathnode->num_workers = subpath->parallel_workers;
+	pathnode->path.pathkeys = pathkeys;
+	pathnode->path.pathtarget = target ? target : rel->reltarget;
+	pathnode->path.rows += subpath->rows;
+
+	if (pathkeys_contained_in(pathkeys, subpath->pathkeys))
+	{
+		/* Subpath is adequately ordered, we won't need to sort it */
+		input_startup_cost += subpath->startup_cost;
+		input_total_cost += subpath->total_cost;
+	}
+	else
+	{
+		/* We'll need to insert a Sort node, so include cost for that */
+		Path		sort_path;		/* dummy for result of cost_sort */
+
+		cost_sort(&sort_path,
+				  root,
+				  pathkeys,
+				  subpath->total_cost,
+				  subpath->rows,
+				  subpath->pathtarget->width,
+				  0.0,
+				  work_mem,
+				  0 /* FIXME: pathnode->limit_tuples*/);
+		input_startup_cost += sort_path.startup_cost;
+		input_total_cost += sort_path.total_cost;
+	}
+
+	cost_gather_merge(pathnode, root, rel, pathnode->path.param_info,
+					  input_startup_cost, input_total_cost);
+
+	return pathnode;
+}
+
+/*
  * translate_sub_tlist - get subquery column numbers represented by tlist
  *
  * The given targetlist usually contains only Vars referencing the given relid.
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 622279b..502f17d 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -894,6 +894,15 @@ static struct config_bool ConfigureNamesBool[] =
 		true,
 		NULL, NULL, NULL
 	},
+	{
+		{"enable_gathermerge", PGC_USERSET, QUERY_TUNING_METHOD,
+			gettext_noop("Enables the planner's use of gather merge plans."),
+			NULL
+		},
+		&enable_gathermerge,
+		true,
+		NULL, NULL, NULL
+	},
 
 	{
 		{"geqo", PGC_USERSET, QUERY_TUNING_GEQO,
diff --git a/src/include/executor/nodeGatherMerge.h b/src/include/executor/nodeGatherMerge.h
new file mode 100644
index 0000000..58dcebf
--- /dev/null
+++ b/src/include/executor/nodeGatherMerge.h
@@ -0,0 +1,27 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.h
+ *		prototypes for nodeGatherMerge.c
+ *
+ *
+ * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/executor/nodeGatherMerge.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef NODEGATHERMERGE_H
+#define NODEGATHERMERGE_H
+
+#include "nodes/execnodes.h"
+
+extern GatherMergeState *ExecInitGatherMerge(GatherMerge * node,
+					EState *estate,
+					int eflags);
+extern TupleTableSlot *ExecGatherMerge(GatherMergeState * node);
+extern void ExecEndGatherMerge(GatherMergeState * node);
+extern void ExecReScanGatherMerge(GatherMergeState * node);
+extern void ExecShutdownGatherMerge(GatherMergeState * node);
+
+#endif   /* NODEGATHERMERGE_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 4fa3661..54d929f 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -1963,6 +1963,33 @@ typedef struct GatherState
 } GatherState;
 
 /* ----------------
+ * GatherMergeState information
+ *
+ *		Gather merge nodes launch 1 or more parallel workers, run a
+ *		subplan in those workers, collect the results and merge them in sorted order.
+ * ----------------
+ */
+struct GMReaderTuple;
+
+typedef struct GatherMergeState
+{
+	PlanState	ps;				/* its first field is NodeTag */
+	bool		initialized;
+	struct ParallelExecutorInfo *pei;
+	int			nreaders;
+	int			nworkers_launched;
+	struct TupleQueueReader **reader;
+	TupleTableSlot *funnel_slot;
+	TupleTableSlot **gm_slots;
+	struct binaryheap *gm_heap; /* binary heap of slot indices */
+	bool		gm_initialized; /* gather merge initialized? */
+	bool		need_to_scan_locally;
+	int			gm_nkeys;
+	SortSupport gm_sortkeys;	/* array of length gm_nkeys */
+	struct GMReaderTuple *gm_tuple;	/* array of length nreaders + leader */
+} GatherMergeState;
+
+/* ----------------
  *	 HashState information
  * ----------------
  */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 88297bb..edfb917 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -75,6 +75,7 @@ typedef enum NodeTag
 	T_WindowAgg,
 	T_Unique,
 	T_Gather,
+	T_GatherMerge,
 	T_Hash,
 	T_SetOp,
 	T_LockRows,
@@ -123,6 +124,7 @@ typedef enum NodeTag
 	T_WindowAggState,
 	T_UniqueState,
 	T_GatherState,
+	T_GatherMergeState,
 	T_HashState,
 	T_SetOpState,
 	T_LockRowsState,
@@ -244,6 +246,7 @@ typedef enum NodeTag
 	T_MaterialPath,
 	T_UniquePath,
 	T_GatherPath,
+	T_GatherMergePath,
 	T_ProjectionPath,
 	T_SortPath,
 	T_GroupPath,
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index e2fbc7d..ec319bf 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -765,6 +765,22 @@ typedef struct Gather
 	bool		invisible;		/* suppress EXPLAIN display (for testing)? */
 } Gather;
 
+/* ------------
+ *		gather merge node
+ * ------------
+ */
+typedef struct GatherMerge
+{
+	Plan		plan;
+	int			num_workers;
+	/* remaining fields are just like the sort-key info in struct Sort */
+	int			numCols;		/* number of sort-key columns */
+	AttrNumber *sortColIdx;		/* their indexes in the target list */
+	Oid		   *sortOperators;	/* OIDs of operators to sort them by */
+	Oid		   *collations;		/* OIDs of collations */
+	bool	   *nullsFirst;		/* NULLS FIRST/LAST directions */
+} GatherMerge;
+
 /* ----------------
  *		hash build node
  *
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index 3a1255a..dfaca79 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -1194,6 +1194,19 @@ typedef struct GatherPath
 } GatherPath;
 
 /*
+ * GatherMergePath runs several copies of a plan in parallel and
+ * collects the results. FIXME: comments
+ */
+typedef struct GatherMergePath
+{
+	Path		path;
+	Path	   *subpath;		/* path for each worker */
+	int			num_workers;	/* number of workers sought to help */
+	bool		single_copy;	/* path must not be executed >1x */
+} GatherMergePath;
+
+
+/*
  * All join-type paths share these fields.
  */
 
diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h
index 2a4df2f..cd48cc4 100644
--- a/src/include/optimizer/cost.h
+++ b/src/include/optimizer/cost.h
@@ -66,6 +66,7 @@ extern bool enable_nestloop;
 extern bool enable_material;
 extern bool enable_mergejoin;
 extern bool enable_hashjoin;
+extern bool enable_gathermerge;
 extern int	constraint_exclusion;
 
 extern double clamp_row_est(double nrows);
@@ -198,5 +199,8 @@ extern Selectivity clause_selectivity(PlannerInfo *root,
 				   int varRelid,
 				   JoinType jointype,
 				   SpecialJoinInfo *sjinfo);
+extern void cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+							  RelOptInfo *rel, ParamPathInfo *param_info,
+							  Cost input_startup_cost, Cost input_total_cost);
 
 #endif   /* COST_H */
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index 71d9154..3dbe9fc 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -267,5 +267,10 @@ extern ParamPathInfo *get_joinrel_parampathinfo(PlannerInfo *root,
 						  List **restrict_clauses);
 extern ParamPathInfo *get_appendrel_parampathinfo(RelOptInfo *appendrel,
 							Relids required_outer);
+extern GatherMergePath *create_gather_merge_path(PlannerInfo *root,
+												 RelOptInfo *rel, Path *subpath,
+												 PathTarget *target,
+												 List *pathkeys,
+												 Relids required_outer);
 
 #endif   /* PATHNODE_H */
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
index f06cfa4..5c547e2 100644
--- a/src/test/regress/expected/rangefuncs.out
+++ b/src/test/regress/expected/rangefuncs.out
@@ -2,6 +2,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
          name         | setting 
 ----------------------+---------
  enable_bitmapscan    | on
+ enable_gathermerge   | on
  enable_hashagg       | on
  enable_hashjoin      | on
  enable_indexonlyscan | on
@@ -12,7 +13,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
  enable_seqscan       | on
  enable_sort          | on
  enable_tidscan       | on
-(11 rows)
+(12 rows)
 
 CREATE TABLE foo2(fooid int, f2 int);
 INSERT INTO foo2 VALUES(1, 11);
benchmark_machine_info.txt (text/plain; charset=US-ASCII)
pgbench_query.out (application/octet-stream)
#2Amit Kapila
amit.kapila16@gmail.com
In reply to: Rushabh Lathia (#1)
Re: Gather Merge

On Wed, Oct 5, 2016 at 11:35 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

Hi hackers,

Attached is the patch to implement Gather Merge.

Couple of review comments:

1.
ExecGatherMerge()
{
..
+ /* No workers? Then never mind. */
+ if (!got_any_worker || node->nreaders < 2)
+ {
+     ExecShutdownGatherMergeWorkers(node);
+     node->nreaders = 0;
+ }

Are you planning to restrict the use of gather merge based on the number
of workers? If there is a valid reason, then I think the comments should
be updated accordingly.

2.
+ExecGatherMerge(GatherMergeState * node){
..
+ if (!node->initialized)
+ {
+     EState      *estate = node->ps.state;
+     GatherMerge *gm = (GatherMerge *) node->ps.plan;
+
+     /*
+      * Sometimes we might have to run without parallelism; but if parallel
+      * mode is active then we can try to fire up some workers.
+      */
+     if (gm->num_workers > 0 && IsInParallelMode())
+     {
+         ParallelContext *pcxt;
+         bool        got_any_worker = false;
+
+         /* Initialize the workers required to execute Gather node. */
+         if (!node->pei)
+             node->pei = ExecInitParallelPlan(node->ps.lefttree,
+                                              estate,
+                                              gm->num_workers);
..
}

There is a lot of common code between ExecGatherMerge and ExecGather.
Do you think it makes sense to have a common function to avoid the
duplication?

I see there are small discrepancies in both the codes like I don't see
the use of single_copy flag, as it is present in gather node.

3.
+gather_merge_readnext(GatherMergeState * gm_state, int reader, bool force)
{
..
+ tup = gm_readnext_tuple(gm_state, reader, force, NULL);
+
+ /*
+  * try to read more tuple into nowait mode and store it into the tuple
+  * array.
+  */
+ if (HeapTupleIsValid(tup))
+     fill_tuple_array(gm_state, reader);

How is the above read tuple stored in the array? In any case, the above
interface seems slightly awkward to me. Basically, I think what you are
trying to do here is: after reading the first tuple in wait mode, you fill
the array by reading more tuples. So, can't we push the reading of this
first tuple into that function and name it form_tuple_array()?
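
Something like this rough sketch is what I have in mind (purely
illustrative, reusing the structures already present in your patch, not
compiled or tested):

static void
form_tuple_array(GatherMergeState *gm_state, int reader, bool nowait)
{
    GMReaderTuple *gm_tuple = &gm_state->gm_tuple[reader];
    int         i;

    /* The leader's tuples never go through the tuple array. */
    if (reader == gm_state->nreaders)
        return;

    /* Reset the counters once the previous batch has been consumed. */
    if (gm_tuple->nTuples == gm_tuple->readCounter)
        gm_tuple->nTuples = gm_tuple->readCounter = 0;

    for (i = gm_tuple->nTuples; i < MAX_TUPLE_STORE; i++)
    {
        /* Only the first read may block (and only when nowait is false). */
        gm_tuple->tuple[i] = gm_readnext_tuple(gm_state, reader,
                                               i == 0 ? !nowait : false,
                                               &gm_tuple->done);
        if (!HeapTupleIsValid(gm_tuple->tuple[i]))
            break;
        gm_tuple->nTuples++;
    }
}

The caller would then pass nowait depending on whether it can afford to
block at that point.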

4.
+create_gather_merge_path(..)
{
..
+  0 /* FIXME: pathnode->limit_tuples*/);

What exactly do you want to fix in the above code?

5.
 +/* Tuple array size */
+#define MAX_TUPLE_STORE 10

Have you tried with other values of MAX_TUPLE_STORE? If yes, then
what are the results? I think it is better to add a comment why array
size is best for performance.

6.
+/* INTERFACE ROUTINES
+ * ExecInitGatherMerge - initialize the MergeAppend node
+ * ExecGatherMerge - retrieve the next tuple from the node
+ * ExecEndGatherMerge - shut down the MergeAppend node
+ * ExecReScanGatherMerge - rescan the MergeAppend node

typo. /MergeAppend/GatherMerge

7.
+static TupleTableSlot *gather_merge_getnext(GatherMergeState * gm_state);
+static HeapTuple gm_readnext_tuple(GatherMergeState * gm_state, int nreader, bool force, bool *done);

Formatting near GatherMergeState doesn't seem to be appropriate. I
think you need to add GatherMergeState in typedefs.list and then run
pgindent again.

8.
+ /*
+ * Initialize funnel slot to same tuple descriptor as outer plan.
+ */
+ if (!ExecContextForcesOids(&gm_state->ps, &hasoid))

I think in the above comment you mean "Initialize GatherMerge slot".

9.
+ /* Does tuple array have any avaiable tuples? */
/avaiable/available

Open Issue:

- Commit af33039317ddc4a0e38a02e2255c2bf453115fd2 fixed the leak in
tqueue.c by calling gather_readnext() in a per-tuple context. For gather
merge that is not possible, as we store the tuples in the tuple array and
we want a tuple to be freed only once it has passed through the merge sort
algorithm. One idea is that we can also call gm_readnext_tuple() under a
per-tuple context (which will fix the leak in tqueue.c) and then store a
copy of the tuple in the tuple array.

Won't the extra copy per tuple impact performance? Was the fix in the
mentioned commit for record or composite types? If so, does GatherMerge
support such types, and if it does, does it provide any benefit over
Gather?

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#3Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#2)
Re: Gather Merge

On Mon, Oct 17, 2016 at 4:56 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:

+ node->nreaders < 2)

...

I see there are small discrepancies in both the codes like I don't see
the use of single_copy flag, as it is present in gather node.

single_copy doesn't make sense for GatherMerge, because the whole
point is to merge a bunch of individually-sorted output streams into a
single stream. If you have only one stream of tuples, you don't need
to merge anything: you could have just used Gather for that.

It does, however, make sense to merge one worker's output with the
leader's output.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#4Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Amit Kapila (#2)
1 attachment(s)
Re: Gather Merge

Thanks Amit for reviewing this patch.

On Mon, Oct 17, 2016 at 2:26 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Wed, Oct 5, 2016 at 11:35 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

Hi hackers,

Attached is the patch to implement Gather Merge.

Couple of review comments:

1.
ExecGatherMerge()
{
..
+ /* No workers? Then never mind. */
+ if (!got_any_worker || node->nreaders < 2)
+ {
+     ExecShutdownGatherMergeWorkers(node);
+     node->nreaders = 0;
+ }

Are you planning to restrict the use of gather merge based on the number
of workers? If there is a valid reason, then I think the comments should
be updated accordingly.

Thanks for catching this. This is left over from an earlier design of the
patch. With the current design we don't have any limitation on the number
of workers. I did performance testing with max_parallel_workers_per_gather
set to 1 and didn't notice any performance degradation, so I removed this
limitation in the attached patch.

2.

+ExecGatherMerge(GatherMergeState * node){
..
+ if (!node->initialized)
+ {
+     EState      *estate = node->ps.state;
+     GatherMerge *gm = (GatherMerge *) node->ps.plan;
+
+     /*
+      * Sometimes we might have to run without parallelism; but if parallel
+      * mode is active then we can try to fire up some workers.
+      */
+     if (gm->num_workers > 0 && IsInParallelMode())
+     {
+         ParallelContext *pcxt;
+         bool        got_any_worker = false;
+
+         /* Initialize the workers required to execute Gather node. */
+         if (!node->pei)
+             node->pei = ExecInitParallelPlan(node->ps.lefttree,
+                                              estate,
+                                              gm->num_workers);
..
}

There is a lot of common code between ExecGatherMerge and ExecGather.
Do you think it makes sense to have a common function to avoid the
duplication?

I see there are small discrepancies in both the codes like I don't see
the use of single_copy flag, as it is present in gather node.

Yes, I also thought about centralizing some of the code shared by
ExecGather and ExecGatherMerge, but it isn't really settled yet, and I
thought it might still change, particularly for Gather Merge. And as Robert
explained, single_copy doesn't make sense for Gather Merge. I will still
look into this to see if something can be made common.
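
For reference, the piece that looks most easily shareable to me is the
worker-startup sequence; something along these lines (just a rough,
untested sketch - the helper name and signature are made up):

static int
launch_gather_workers(PlanState *ps, ParallelExecutorInfo **pei,
                      int nworkers, TupleDesc tupDesc,
                      TupleQueueReader ***readers)
{
    ParallelContext *pcxt;
    int         nreaders = 0;
    int         i;

    /* Set up the parallel plan state once, then launch the workers. */
    if (*pei == NULL)
        *pei = ExecInitParallelPlan(ps->lefttree, ps->state, nworkers);

    pcxt = (*pei)->pcxt;
    LaunchParallelWorkers(pcxt);

    if (pcxt->nworkers_launched > 0)
    {
        *readers = palloc(pcxt->nworkers_launched *
                          sizeof(TupleQueueReader *));

        for (i = 0; i < pcxt->nworkers_launched; ++i)
        {
            if (pcxt->worker[i].bgwhandle == NULL)
                continue;

            /* Attach a tuple queue reader to each launched worker. */
            shm_mq_set_handle((*pei)->tqueue[i], pcxt->worker[i].bgwhandle);
            (*readers)[nreaders++] =
                CreateTupleQueueReader((*pei)->tqueue[i], tupDesc);
        }
    }

    return nreaders;
}

Both ExecGather and ExecGatherMerge could then just record the returned
reader array and count, and fall back to local execution when the count is
zero.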

3.
+gather_merge_readnext(GatherMergeState * gm_state, int reader, bool force)
{
..
+ tup = gm_readnext_tuple(gm_state, reader, force, NULL);
+
+ /*
+  * try to read more tuple into nowait mode and store it into the tuple
+  * array.
+  */
+ if (HeapTupleIsValid(tup))
+     fill_tuple_array(gm_state, reader);

How is the above read tuple stored in the array? In any case, the above
interface seems slightly awkward to me. Basically, I think what you are
trying to do here is: after reading the first tuple in wait mode, you fill
the array by reading more tuples. So, can't we push the reading of this
first tuple into that function and name it form_tuple_array()?

Yes, you are right. First it tries to read a tuple in wait mode, and once
it finds a tuple it tries to fill the tuple array (which basically reads
tuples in nowait mode). The reason I keep them separate is that while
initializing the gather merge, if we are unable to read a tuple from every
worker, then on the re-read we again try to fill the tuple array for any
worker that has already produced at least one tuple (see
gather_merge_init() for more details). Also, I thought that filling the
tuple array (which reads tuples in nowait mode) and reading a tuple (in
wait mode) are two separate tasks, and the code looks clearer with them in
separate functions. If you have any suggestion for the function name
(fill_tuple_array), I am open to changing it.

4.
+create_gather_merge_path(..)
{
..
+  0 /* FIXME: pathnode->limit_tuples*/);

What exactly do you want to fix in the above code?

Fixed.

5.
+/* Tuple array size */
+#define MAX_TUPLE_STORE 10

Have you tried with other values of MAX_TUPLE_STORE? If yes, then
what are the results? I think it is better to add a comment why array
size is best for performance.

Actually I was thinking about that, but I didn't want to add it because it
is just a performance number from my machine. Anyway, I added the
comments.

6.
+/* INTERFACE ROUTINES
+ * ExecInitGatherMerge - initialize the MergeAppend node
+ * ExecGatherMerge - retrieve the next tuple from the node
+ * ExecEndGatherMerge - shut down the MergeAppend node
+ * ExecReScanGatherMerge - rescan the MergeAppend node

typo. /MergeAppend/GatherMerge

Fixed.

7.
+static TupleTableSlot *gather_merge_getnext(GatherMergeState * gm_state);
+static HeapTuple gm_readnext_tuple(GatherMergeState * gm_state, int nreader, bool force, bool *done);

Formatting near GatherMergeState doesn't seem to be appropriate. I
think you need to add GatherMergeState in typedefs.list and then run
pgindent again.

Fixed.

8.
+ /*
+ * Initialize funnel slot to same tuple descriptor as outer plan.
+ */
+ if (!ExecContextForcesOids(&gm_state->ps, &hasoid))

I think in the above comment you mean "Initialize GatherMerge slot".

No, it has to be the funnel slot only - it's just a placeholder. For
Gather Merge we initialize one slot per worker, and that is done in
gather_merge_init(). I will look into this point to see if I can get rid of
the funnel slot completely.

9.

+ /* Does tuple array have any avaiable tuples? */
/avaiable/available

Fixed.

Open Issue:

- Commit af33039317ddc4a0e38a02e2255c2bf453115fd2 fixed the leak in
tqueue.c by calling gather_readnext() in a per-tuple context. For gather
merge that is not possible, as we store the tuples in the tuple array and
we want a tuple to be freed only once it has passed through the merge sort
algorithm. One idea is that we can also call gm_readnext_tuple() under a
per-tuple context (which will fix the leak in tqueue.c) and then store a
copy of the tuple in the tuple array.

Won't the extra copy per tuple impact performance? Was the fix in the
mentioned commit for record or composite types? If so, does GatherMerge
support such types, and if it does, does it provide any benefit over
Gather?

I don't think it was specifically for record or composite types - but I
might be wrong. As per my understanding, the commit fixes a leak in
tqueue.c; the fix was to establish the convention of calling
TupleQueueReaderNext() in a shorter-lived memory context, so that tqueue.c
doesn't leak memory.

My idea for fixing this is to call TupleQueueReaderNext() in a per-tuple
context, then copy the tuple and store the copy in the tuple array; later,
the next call of ExecStoreTuple() will free the earlier tuple. I will work
on that.
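
Roughly, inside the loop that fills the tuple array, I am thinking of
something like the sketch below (the use of the expression context's
per-tuple memory is only an assumption at this point; some other
short-lived context may turn out to be more appropriate):

    MemoryContext oldContext;
    HeapTuple   tup;

    /* Read the tuple in a short-lived context so tqueue.c cannot leak. */
    oldContext = MemoryContextSwitchTo(
        gm_state->ps.ps_ExprContext->ecxt_per_tuple_memory);
    tup = gm_readnext_tuple(gm_state, reader, false, &gm_tuple->done);
    MemoryContextSwitchTo(oldContext);

    if (HeapTupleIsValid(tup))
    {
        /* Copy into longer-lived storage so it survives the context reset. */
        gm_tuple->tuple[i] = heap_copytuple(tup);
        gm_tuple->nTuples++;
    }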


--
Rushabh Lathia

Attachments:

gather_merge_v2.patch (application/x-download)
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 1247433..cb0299a 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -881,6 +881,9 @@ ExplainNode(PlanState *planstate, List *ancestors,
 		case T_Gather:
 			pname = sname = "Gather";
 			break;
+		case T_GatherMerge:
+			pname = sname = "Gather Merge";
+			break;
 		case T_IndexScan:
 			pname = sname = "Index Scan";
 			break;
@@ -1370,6 +1373,26 @@ ExplainNode(PlanState *planstate, List *ancestors,
 					ExplainPropertyBool("Single Copy", gather->single_copy, es);
 			}
 			break;
+		case T_GatherMerge:
+			{
+				GatherMerge *gm = (GatherMerge *) plan;
+
+				show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+				if (plan->qual)
+					show_instrumentation_count("Rows Removed by Filter", 1,
+											   planstate, es);
+				ExplainPropertyInteger("Workers Planned",
+									   gm->num_workers, es);
+				if (es->analyze)
+				{
+					int			nworkers;
+
+					nworkers = ((GatherMergeState *) planstate)->nworkers_launched;
+					ExplainPropertyInteger("Workers Launched",
+										   nworkers, es);
+				}
+			}
+			break;
 		case T_FunctionScan:
 			if (es->verbose)
 			{
diff --git a/src/backend/executor/Makefile b/src/backend/executor/Makefile
index 51edd4c..7e2f4e2 100644
--- a/src/backend/executor/Makefile
+++ b/src/backend/executor/Makefile
@@ -19,7 +19,7 @@ OBJS = execAmi.o execCurrent.o execGrouping.o execIndexing.o execJunk.o \
        nodeBitmapAnd.o nodeBitmapOr.o \
        nodeBitmapHeapscan.o nodeBitmapIndexscan.o nodeCustom.o nodeGather.o \
        nodeHash.o nodeHashjoin.o nodeIndexscan.o nodeIndexonlyscan.o \
-       nodeLimit.o nodeLockRows.o \
+       nodeLimit.o nodeLockRows.o nodeGatherMerge.o \
        nodeMaterial.o nodeMergeAppend.o nodeMergejoin.o nodeModifyTable.o \
        nodeNestloop.o nodeFunctionscan.o nodeRecursiveunion.o nodeResult.o \
        nodeSamplescan.o nodeSeqscan.o nodeSetOp.o nodeSort.o nodeUnique.o \
diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index 554244f..45b36af 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -101,6 +101,7 @@
 #include "executor/nodeModifyTable.h"
 #include "executor/nodeNestloop.h"
 #include "executor/nodeGather.h"
+#include "executor/nodeGatherMerge.h"
 #include "executor/nodeRecursiveunion.h"
 #include "executor/nodeResult.h"
 #include "executor/nodeSamplescan.h"
@@ -314,6 +315,11 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
 												  estate, eflags);
 			break;
 
+		case T_GatherMerge:
+			result = (PlanState *) ExecInitGatherMerge((GatherMerge *) node,
+													   estate, eflags);
+			break;
+
 		case T_Hash:
 			result = (PlanState *) ExecInitHash((Hash *) node,
 												estate, eflags);
@@ -515,6 +521,10 @@ ExecProcNode(PlanState *node)
 			result = ExecGather((GatherState *) node);
 			break;
 
+		case T_GatherMergeState:
+			result = ExecGatherMerge((GatherMergeState *) node);
+			break;
+
 		case T_HashState:
 			result = ExecHash((HashState *) node);
 			break;
@@ -673,6 +683,10 @@ ExecEndNode(PlanState *node)
 			ExecEndGather((GatherState *) node);
 			break;
 
+		case T_GatherMergeState:
+			ExecEndGatherMerge((GatherMergeState *) node);
+			break;
+
 		case T_IndexScanState:
 			ExecEndIndexScan((IndexScanState *) node);
 			break;
@@ -806,6 +820,9 @@ ExecShutdownNode(PlanState *node)
 		case T_GatherState:
 			ExecShutdownGather((GatherState *) node);
 			break;
+		case T_GatherMergeState:
+			ExecShutdownGatherMerge((GatherMergeState *) node);
+			break;
 		default:
 			break;
 	}
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
new file mode 100644
index 0000000..cbd1dd2
--- /dev/null
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -0,0 +1,693 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.c
+ *	  routines to handle GatherMerge nodes.
+ *
+ * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/backend/executor/nodeGatherMerge.c
+ *
+ *-------------------------------------------------------------------------
+ */
+/* INTERFACE ROUTINES
+ *		ExecInitGatherMerge		- initialize the GatherMerge node
+ *		ExecGatherMerge			- retrieve the next tuple from the node
+ *		ExecEndGatherMerge		- shut down the GatherMerge node
+ *		ExecReScanGatherMerge	- rescan the GatherMerge node
+ */
+
+#include "postgres.h"
+
+#include "access/relscan.h"
+#include "access/xact.h"
+#include "executor/execdebug.h"
+#include "executor/execParallel.h"
+#include "executor/nodeGatherMerge.h"
+#include "executor/nodeSubplan.h"
+#include "executor/tqueue.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+#include "utils/rel.h"
+#include "lib/binaryheap.h"
+
+/*
+ * Tuple array for each worker
+ */
+typedef struct GMReaderTuple
+{
+	HeapTuple  *tuple;
+	int			readCounter;
+	int			nTuples;
+	bool		done;
+}	GMReaderTuple;
+
+/*
+ * Tuple array size.  Performance testing showed that the benefit of an array
+ * size larger than 10 is not worth the additional memory consumed by the
+ * tuple array.
+ */
+#define MAX_TUPLE_STORE 10
+
+static int32 heap_compare_slots(Datum a, Datum b, void *arg);
+static TupleTableSlot *gather_merge_getnext(GatherMergeState *gm_state);
+static HeapTuple gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool force, bool *done);
+static void gather_merge_init(GatherMergeState *gm_state);
+static void ExecShutdownGatherMergeWorkers(GatherMergeState *node);
+static bool gather_merge_readnext(GatherMergeState *gm_state, int reader, bool force);
+static void fill_tuple_array(GatherMergeState *gm_state, int reader);
+
+/* ----------------------------------------------------------------
+ *		ExecInitGatherMerge
+ * ----------------------------------------------------------------
+ */
+GatherMergeState *
+ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags)
+{
+	GatherMergeState *gm_state;
+	Plan	   *outerNode;
+	bool		hasoid;
+	TupleDesc	tupDesc;
+
+	/* Gather merge node doesn't have innerPlan node. */
+	Assert(innerPlan(node) == NULL);
+
+	/*
+	 * create state structure
+	 */
+	gm_state = makeNode(GatherMergeState);
+	gm_state->ps.plan = (Plan *) node;
+	gm_state->ps.state = estate;
+
+	/*
+	 * Miscellaneous initialization
+	 *
+	 * create expression context for node
+	 */
+	ExecAssignExprContext(estate, &gm_state->ps);
+
+	/*
+	 * initialize child expressions
+	 */
+	gm_state->ps.targetlist = (List *)
+		ExecInitExpr((Expr *) node->plan.targetlist,
+					 (PlanState *) gm_state);
+	gm_state->ps.qual = (List *)
+		ExecInitExpr((Expr *) node->plan.qual,
+					 (PlanState *) gm_state);
+
+	/*
+	 * tuple table initialization
+	 */
+	gm_state->funnel_slot = ExecInitExtraTupleSlot(estate);
+	ExecInitResultTupleSlot(estate, &gm_state->ps);
+
+	/*
+	 * now initialize outer plan
+	 */
+	outerNode = outerPlan(node);
+	outerPlanState(gm_state) = ExecInitNode(outerNode, estate, eflags);
+
+	gm_state->ps.ps_TupFromTlist = false;
+
+	/*
+	 * Initialize result tuple type and projection info.
+	 */
+	ExecAssignResultTypeFromTL(&gm_state->ps);
+	ExecAssignProjectionInfo(&gm_state->ps, NULL);
+
+	gm_state->gm_initialized = false;
+
+	/*
+	 * initialize sort-key information
+	 */
+	if (node->numCols)
+	{
+		int			i;
+
+		gm_state->gm_nkeys = node->numCols;
+		gm_state->gm_sortkeys = palloc0(sizeof(SortSupportData) * node->numCols);
+		for (i = 0; i < node->numCols; i++)
+		{
+			SortSupport sortKey = gm_state->gm_sortkeys + i;
+
+			sortKey->ssup_cxt = CurrentMemoryContext;
+			sortKey->ssup_collation = node->collations[i];
+			sortKey->ssup_nulls_first = node->nullsFirst[i];
+			sortKey->ssup_attno = node->sortColIdx[i];
+
+			/*
+			 * We don't perform abbreviated key conversion here, for the same
+			 * reasons that it isn't used in MergeAppend
+			 */
+			sortKey->abbreviate = false;
+
+			PrepareSortSupportFromOrderingOp(node->sortOperators[i], sortKey);
+		}
+	}
+
+	/*
+	 * Initialize funnel slot to same tuple descriptor as outer plan.
+	 */
+	if (!ExecContextForcesOids(&gm_state->ps, &hasoid))
+		hasoid = false;
+	tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid);
+	ExecSetSlotDescriptor(gm_state->funnel_slot, tupDesc);
+
+	return gm_state;
+}
+
+/* ----------------------------------------------------------------
+ *		ExecGatherMerge(node)
+ *
+ *		Scans the relation via multiple workers and returns
+ *		the next qualifying tuple.
+ * ----------------------------------------------------------------
+ */
+TupleTableSlot *
+ExecGatherMerge(GatherMergeState *node)
+{
+	TupleTableSlot *fslot = node->funnel_slot;
+	int			i;
+	TupleTableSlot *slot;
+	TupleTableSlot *resultSlot;
+	ExprDoneCond isDone;
+	ExprContext *econtext;
+
+	/*
+	 * Initialize the parallel context and workers on first execution. We do
+	 * this on first execution rather than during node initialization, as it
+	 * needs to allocate a large dynamic segment, so it is better to do it
+	 * only if it is really needed.
+	 */
+	if (!node->initialized)
+	{
+		EState	   *estate = node->ps.state;
+		GatherMerge *gm = (GatherMerge *) node->ps.plan;
+
+		/*
+		 * Sometimes we might have to run without parallelism; but if parallel
+		 * mode is active then we can try to fire up some workers.
+		 */
+		if (gm->num_workers > 0 && IsInParallelMode())
+		{
+			ParallelContext *pcxt;
+			bool		got_any_worker = false;
+
+			/* Initialize the workers required to execute Gather node. */
+			if (!node->pei)
+				node->pei = ExecInitParallelPlan(node->ps.lefttree,
+												 estate,
+												 gm->num_workers);
+
+			/*
+			 * Register backend workers. We might not get as many as we
+			 * requested, or indeed any at all.
+			 */
+			pcxt = node->pei->pcxt;
+			LaunchParallelWorkers(pcxt);
+			node->nworkers_launched = pcxt->nworkers_launched;
+
+			/* Set up tuple queue readers to read the results. */
+			if (pcxt->nworkers_launched > 0)
+			{
+				node->nreaders = 0;
+				node->reader =
+					palloc(pcxt->nworkers_launched * sizeof(TupleQueueReader *));
+
+				Assert(gm->numCols);
+
+				for (i = 0; i < pcxt->nworkers_launched; ++i)
+				{
+					if (pcxt->worker[i].bgwhandle == NULL)
+						continue;
+
+					shm_mq_set_handle(node->pei->tqueue[i],
+									  pcxt->worker[i].bgwhandle);
+					node->reader[node->nreaders] =
+						CreateTupleQueueReader(node->pei->tqueue[i],
+											   fslot->tts_tupleDescriptor);
+					node->nreaders++;
+					got_any_worker = true;
+				}
+			}
+
+			/* No workers?	Then never mind. */
+			if (!got_any_worker)
+				ExecShutdownGatherMergeWorkers(node);
+		}
+
+		/* always allow leader to participate in the gather merge */
+		node->need_to_scan_locally = true;
+		node->initialized = true;
+	}
+
+	/*
+	 * Check to see if we're still projecting out tuples from a previous scan
+	 * tuple (because there is a function-returning-set in the projection
+	 * expressions).  If so, try to project another one.
+	 */
+	if (node->ps.ps_TupFromTlist)
+	{
+		resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+		if (isDone == ExprMultipleResult)
+			return resultSlot;
+		/* Done with that source tuple... */
+		node->ps.ps_TupFromTlist = false;
+	}
+
+	/*
+	 * Reset per-tuple memory context to free any expression evaluation
+	 * storage allocated in the previous tuple cycle.  Note we can't do this
+	 * until we're done projecting.
+	 */
+	econtext = node->ps.ps_ExprContext;
+	ResetExprContext(econtext);
+
+	/* Get and return the next tuple, projecting if necessary. */
+	for (;;)
+	{
+		/*
+		 * Get next tuple, either from one of our workers, or by running the
+		 * plan ourselves.
+		 */
+		slot = gather_merge_getnext(node);
+		if (TupIsNull(slot))
+			return NULL;
+
+		/*
+		 * form the result tuple using ExecProject(), and return it --- unless
+		 * the projection produces an empty set, in which case we must loop
+		 * back around for another tuple
+		 */
+		econtext->ecxt_outertuple = slot;
+		resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+
+		if (isDone != ExprEndResult)
+		{
+			node->ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
+			return resultSlot;
+		}
+	}
+
+	return slot;
+}
+
+/* ----------------------------------------------------------------
+ *		ExecEndGatherMerge
+ *
+ *		frees any storage allocated through C routines.
+ * ----------------------------------------------------------------
+ */
+void
+ExecEndGatherMerge(GatherMergeState *node)
+{
+	ExecShutdownGatherMerge(node);
+	ExecFreeExprContext(&node->ps);
+	ExecClearTuple(node->ps.ps_ResultTupleSlot);
+	ExecEndNode(outerPlanState(node));
+}
+
+/* ----------------------------------------------------------------
+ *		ExecShutdownGatherMerge
+ *
+ *		Destroy the setup for parallel workers including parallel context.
+ *		Collect all the stats after workers are stopped, else some work
+ *		done by workers won't be accounted.
+ * ----------------------------------------------------------------
+ */
+void
+ExecShutdownGatherMerge(GatherMergeState *node)
+{
+	ExecShutdownGatherMergeWorkers(node);
+
+	/* Now destroy the parallel context. */
+	if (node->pei != NULL)
+	{
+		ExecParallelCleanup(node->pei);
+		node->pei = NULL;
+	}
+}
+
+/* ----------------------------------------------------------------
+ *		ExecReScanGatherMerge
+ *
+ *		Re-initialize the workers and rescans a relation via them.
+ * ----------------------------------------------------------------
+ */
+void
+ExecReScanGatherMerge(GatherMergeState *node)
+{
+	/*
+	 * Re-initialize the parallel workers to perform rescan of relation. We
+	 * want to gracefully shutdown all the workers so that they should be able
+	 * to propagate any error or other information to master backend before
+	 * dying.  Parallel context will be reused for rescan.
+	 */
+	ExecShutdownGatherMergeWorkers(node);
+
+	node->initialized = false;
+
+	if (node->pei)
+		ExecParallelReinitialize(node->pei);
+
+	ExecReScan(node->ps.lefttree);
+}
+
+/* ----------------------------------------------------------------
+ *		ExecShutdownGatherMergeWorkers
+ *
+ *		Destroy the parallel workers.  Collect all the stats after
+ *		workers are stopped, else some work done by workers won't be
+ *		accounted.
+ * ----------------------------------------------------------------
+ */
+static void
+ExecShutdownGatherMergeWorkers(GatherMergeState *node)
+{
+	/* Shut down tuple queue readers before shutting down workers. */
+	if (node->reader != NULL)
+	{
+		int			i;
+
+		for (i = 0; i < node->nreaders; ++i)
+			if (node->reader[i])
+				DestroyTupleQueueReader(node->reader[i]);
+
+		pfree(node->reader);
+		node->reader = NULL;
+	}
+
+	/* Now shut down the workers. */
+	if (node->pei != NULL)
+		ExecParallelFinish(node->pei);
+}
+
+/*
+ * Initialize the Gather merge tuple read.
+ *
+ * Pull at least a single tuple from each worker + leader and set up the heap.
+ */
+static void
+gather_merge_init(GatherMergeState *gm_state)
+{
+	TupleTableSlot *fslot = gm_state->funnel_slot;
+	int			nreaders = gm_state->nreaders;
+	bool		initialize = true;
+	int			i;
+
+	/*
+	 * Allocate gm_slots, one slot for each worker plus one for the leader.
+	 * The last slot is always for the leader.  The leader always calls
+	 * ExecProcNode() to read a tuple, which returns a TupleTableSlot that is
+	 * assigned directly to the leader's gm_slot, so just initialize that slot
+	 * with NULL.  For the other slots, the code below calls
+	 * ExecInitExtraTupleSlot() to initialize the worker slots.
+	 */
+	gm_state->gm_slots =
+		palloc((gm_state->nreaders + 1) * sizeof(TupleTableSlot *));
+	gm_state->gm_slots[gm_state->nreaders] = NULL;
+
+	/* Initialize the tuple slot and tuple array for each worker */
+	gm_state->gm_tuple = (GMReaderTuple *) palloc0(sizeof(GMReaderTuple) * (gm_state->nreaders));
+	for (i = 0; i < gm_state->nreaders; i++)
+	{
+		/* Allocate the tuple array with MAX_TUPLE_STORE size */
+		gm_state->gm_tuple[i].tuple = (HeapTuple *) palloc0(sizeof(HeapTuple) * MAX_TUPLE_STORE);
+
+		/* Initialize slot for worker */
+		gm_state->gm_slots[i] = ExecInitExtraTupleSlot(gm_state->ps.state);
+		ExecSetSlotDescriptor(gm_state->gm_slots[i],
+							  fslot->tts_tupleDescriptor);
+	}
+
+	/* Allocate the resources for the sort */
+	gm_state->gm_heap = binaryheap_allocate(gm_state->nreaders + 1, heap_compare_slots, gm_state);
+
+	/*
+	 * First try to read a tuple from each worker (including the leader) in
+	 * nowait mode, so that we initialize reading from each worker as well as
+	 * the leader.  After that, if any worker was unable to produce a tuple,
+	 * re-read, this time in wait mode.  For a worker that already produced a
+	 * tuple in the earlier loop, just fill its tuple array if more tuples are
+	 * available.
+	 */
+reread:
+	for (i = 0; i < nreaders + 1; i++)
+	{
+		if (TupIsNull(gm_state->gm_slots[i]) ||
+			gm_state->gm_slots[i]->tts_isempty)
+		{
+			if (gather_merge_readnext(gm_state, i, initialize ? false : true))
+			{
+				binaryheap_add_unordered(gm_state->gm_heap,
+										 Int32GetDatum(i));
+			}
+		}
+		else
+			fill_tuple_array(gm_state, i);
+	}
+	initialize = false;
+
+	for (i = 0; i < nreaders; i++)
+		if (TupIsNull(gm_state->gm_slots[i]) || gm_state->gm_slots[i]->tts_isempty)
+			goto reread;
+
+	binaryheap_build(gm_state->gm_heap);
+	gm_state->gm_initialized = true;
+}
+
+/*
+ * Read the next tuple for gather merge.
+ *
+ * The function fetches the next tuple in sort order from the heap.
+ */
+static TupleTableSlot *
+gather_merge_getnext(GatherMergeState *gm_state)
+{
+	TupleTableSlot *fslot = gm_state->funnel_slot;
+	int			i;
+
+	/*
+	 * First time through: pull the first tuple from each participant, and set
+	 * up the heap.
+	 */
+	if (gm_state->gm_initialized == false)
+		gather_merge_init(gm_state);
+	else
+	{
+		/*
+		 * Otherwise, pull the next tuple from whichever participant we
+		 * returned from last time, and reinsert the index into the heap,
+		 * because it might now compare differently against the existing
+		 * elements of the heap.
+		 */
+		i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+
+		if (gather_merge_readnext(gm_state, i, true))
+			binaryheap_replace_first(gm_state->gm_heap, Int32GetDatum(i));
+		else
+			(void) binaryheap_remove_first(gm_state->gm_heap);
+	}
+
+	if (binaryheap_empty(gm_state->gm_heap))
+	{
+		/* All the queues are exhausted, and so is the heap */
+		return ExecClearTuple(fslot);
+	}
+	else
+	{
+		i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+		return gm_state->gm_slots[i];
+	}
+
+	return ExecClearTuple(fslot);
+}
+
+/*
+ * Read tuples for the given reader in nowait mode and fill the tuple array.
+ */
+static void
+fill_tuple_array(GatherMergeState *gm_state, int reader)
+{
+	GMReaderTuple *gm_tuple = &gm_state->gm_tuple[reader];
+	int			i;
+
+	/* Last slot is for leader and we don't build tuple array for leader */
+	if (reader == gm_state->nreaders)
+		return;
+
+	/*
+	 * If we have already read all the tuples from the tuple array, reset the
+	 * counters to zero.
+	 */
+	if (gm_tuple->nTuples == gm_tuple->readCounter)
+		gm_tuple->nTuples = gm_tuple->readCounter = 0;
+
+	/* Tuple array is already full? */
+	if (gm_tuple->nTuples == MAX_TUPLE_STORE)
+		return;
+
+	for (i = gm_tuple->nTuples; i < MAX_TUPLE_STORE; i++)
+	{
+		gm_tuple->tuple[i] = gm_readnext_tuple(gm_state,
+											   reader,
+											   false,
+											   &gm_tuple->done);
+		if (!HeapTupleIsValid(gm_tuple->tuple[i]))
+			break;
+		gm_tuple->nTuples++;
+	}
+}
+
+/*
+ * Attempt to read a tuple for the given reader and store it in the reader's
+ * tuple slot.
+ *
+ * If the worker's tuple array contains any tuples, just return the next one
+ * from the array.  Otherwise read a tuple from the queue and also attempt to
+ * fill the tuple array.
+ *
+ * When force is true, the tuple is read in wait mode.  For gather merge we
+ * need to refill the slot from which the previous tuple was returned, so
+ * that requires reading a tuple in wait mode.  During the initialization
+ * phase we first try to read tuples in nowait mode, as we want to initialize
+ * all the readers.  Refer to gather_merge_init() for more details.
+ *
+ * Returns true if a tuple was found for the reader, otherwise false.
+ */
+static bool
+gather_merge_readnext(GatherMergeState *gm_state, int reader, bool force)
+{
+	HeapTuple	tup = NULL;
+
+	/* Are we reading from the leader? */
+	if (gm_state->nreaders == reader)
+	{
+		if (gm_state->need_to_scan_locally)
+		{
+			PlanState  *outerPlan = outerPlanState(gm_state);
+			TupleTableSlot *outerTupleSlot;
+
+			outerTupleSlot = ExecProcNode(outerPlan);
+
+			if (!TupIsNull(outerTupleSlot))
+			{
+				gm_state->gm_slots[reader] = outerTupleSlot;
+				return true;
+			}
+			gm_state->need_to_scan_locally = false;
+		}
+		return false;
+	}
+	/* Does tuple array have any available tuples? */
+	else if (gm_state->gm_tuple[reader].nTuples >
+			 gm_state->gm_tuple[reader].readCounter)
+	{
+		GMReaderTuple *gm_tuple = &gm_state->gm_tuple[reader];
+
+		tup = gm_tuple->tuple[gm_tuple->readCounter++];
+	}
+	/* reader exhausted? */
+	else if (gm_state->gm_tuple[reader].done)
+	{
+		DestroyTupleQueueReader(gm_state->reader[reader]);
+		gm_state->reader[reader] = NULL;
+		return false;
+	}
+	else
+	{
+		tup = gm_readnext_tuple(gm_state, reader, force, NULL);
+
+		/*
+		 * Try to read more tuples in nowait mode and store them in the tuple
+		 * array.
+		 */
+		if (HeapTupleIsValid(tup))
+			fill_tuple_array(gm_state, reader);
+		else
+			return false;
+	}
+
+	Assert(HeapTupleIsValid(tup));
+
+	/* Build the TupleTableSlot for the given tuple */
+	ExecStoreTuple(tup,			/* tuple to store */
+				   gm_state->gm_slots[reader],	/* slot in which to store the
+												 * tuple */
+				   InvalidBuffer,		/* buffer associated with this tuple */
+				   true);		/* pfree this pointer if not from heap */
+
+	return true;
+}
+
+/*
+ * Attempt to read a tuple from given reader.
+ */
+static HeapTuple
+gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool force, bool *done)
+{
+	TupleQueueReader *reader;
+	HeapTuple	tup = NULL;
+
+	if (done != NULL)
+		*done = false;
+
+	/* Check for async events, particularly messages from workers. */
+	CHECK_FOR_INTERRUPTS();
+
+	/* Attempt to read a tuple. */
+	reader = gm_state->reader[nreader];
+	tup = TupleQueueReaderNext(reader, force ? false : true, done);
+
+	return tup;
+}
+
+/*
+ * We have one slot for each item in the heap array.  We use SlotNumber
+ * to store slot indexes.  This doesn't actually provide any formal
+ * type-safety, but it makes the code more self-documenting.
+ */
+typedef int32 SlotNumber;
+
+/*
+ * Compare the tuples in the two given slots.
+ */
+static int32
+heap_compare_slots(Datum a, Datum b, void *arg)
+{
+	GatherMergeState *node = (GatherMergeState *) arg;
+	SlotNumber	slot1 = DatumGetInt32(a);
+	SlotNumber	slot2 = DatumGetInt32(b);
+
+	TupleTableSlot *s1 = node->gm_slots[slot1];
+	TupleTableSlot *s2 = node->gm_slots[slot2];
+	int			nkey;
+
+	Assert(!TupIsNull(s1));
+	Assert(!TupIsNull(s2));
+
+	for (nkey = 0; nkey < node->gm_nkeys; nkey++)
+	{
+		SortSupport sortKey = node->gm_sortkeys + nkey;
+		AttrNumber	attno = sortKey->ssup_attno;
+		Datum		datum1,
+					datum2;
+		bool		isNull1,
+					isNull2;
+		int			compare;
+
+		datum1 = slot_getattr(s1, attno, &isNull1);
+		datum2 = slot_getattr(s2, attno, &isNull2);
+
+		compare = ApplySortComparator(datum1, isNull1,
+									  datum2, isNull2,
+									  sortKey);
+		if (compare != 0)
+			return -compare;
+	}
+	return 0;
+}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 71714bc..8b92c1a 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -341,6 +341,31 @@ _copyGather(const Gather *from)
 	return newnode;
 }
 
+/*
+ * _copyGatherMerge
+ */
+static GatherMerge *
+_copyGatherMerge(const GatherMerge *from)
+{
+	GatherMerge	   *newnode = makeNode(GatherMerge);
+
+	/*
+	 * copy node superclass fields
+	 */
+	CopyPlanFields((const Plan *) from, (Plan *) newnode);
+
+	/*
+	 * copy remainder of node
+	 */
+	COPY_SCALAR_FIELD(num_workers);
+	COPY_SCALAR_FIELD(numCols);
+	COPY_POINTER_FIELD(sortColIdx, from->numCols * sizeof(AttrNumber));
+	COPY_POINTER_FIELD(sortOperators, from->numCols * sizeof(Oid));
+	COPY_POINTER_FIELD(collations, from->numCols * sizeof(Oid));
+	COPY_POINTER_FIELD(nullsFirst, from->numCols * sizeof(bool));
+
+	return newnode;
+}
 
 /*
  * CopyScanFields
@@ -4343,6 +4368,9 @@ copyObject(const void *from)
 		case T_Gather:
 			retval = _copyGather(from);
 			break;
+		case T_GatherMerge:
+			retval = _copyGatherMerge(from);
+			break;
 		case T_SeqScan:
 			retval = _copySeqScan(from);
 			break;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index ae86954..5dea0f7 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -447,6 +447,35 @@ _outGather(StringInfo str, const Gather *node)
 }
 
 static void
+_outGatherMerge(StringInfo str, const GatherMerge *node)
+{
+	int		i;
+
+	WRITE_NODE_TYPE("GATHERMERGE");
+
+	_outPlanInfo(str, (const Plan *) node);
+
+	WRITE_INT_FIELD(num_workers);
+	WRITE_INT_FIELD(numCols);
+
+	appendStringInfoString(str, " :sortColIdx");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %d", node->sortColIdx[i]);
+
+	appendStringInfoString(str, " :sortOperators");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %u", node->sortOperators[i]);
+
+	appendStringInfoString(str, " :collations");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %u", node->collations[i]);
+
+	appendStringInfoString(str, " :nullsFirst");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %s", booltostr(node->nullsFirst[i]));
+}
+
+static void
 _outScan(StringInfo str, const Scan *node)
 {
 	WRITE_NODE_TYPE("SCAN");
@@ -1964,6 +1993,18 @@ _outLimitPath(StringInfo str, const LimitPath *node)
 }
 
 static void
+_outGatherMergePath(StringInfo str, const GatherMergePath *node)
+{
+	WRITE_NODE_TYPE("GATHERMERGEPATH");
+
+	_outPathInfo(str, (const Path *) node);
+
+	WRITE_NODE_FIELD(subpath);
+	WRITE_INT_FIELD(num_workers);
+	WRITE_BOOL_FIELD(single_copy);
+}
+
+static void
 _outNestPath(StringInfo str, const NestPath *node)
 {
 	WRITE_NODE_TYPE("NESTPATH");
@@ -3322,6 +3363,9 @@ outNode(StringInfo str, const void *obj)
 			case T_Gather:
 				_outGather(str, obj);
 				break;
+			case T_GatherMerge:
+				_outGatherMerge(str, obj);
+				break;
 			case T_Scan:
 				_outScan(str, obj);
 				break;
@@ -3649,6 +3693,9 @@ outNode(StringInfo str, const void *obj)
 			case T_LimitPath:
 				_outLimitPath(str, obj);
 				break;
+			case T_GatherMergePath:
+				_outGatherMergePath(str, obj);
+				break;
 			case T_NestPath:
 				_outNestPath(str, obj);
 				break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 917e6c8..77a452e 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -2075,6 +2075,26 @@ _readGather(void)
 }
 
 /*
+ * _readGatherMerge
+ */
+static GatherMerge *
+_readGatherMerge(void)
+{
+	READ_LOCALS(GatherMerge);
+
+	ReadCommonPlan(&local_node->plan);
+
+	READ_INT_FIELD(num_workers);
+	READ_INT_FIELD(numCols);
+	READ_ATTRNUMBER_ARRAY(sortColIdx, local_node->numCols);
+	READ_OID_ARRAY(sortOperators, local_node->numCols);
+	READ_OID_ARRAY(collations, local_node->numCols);
+	READ_BOOL_ARRAY(nullsFirst, local_node->numCols);
+
+	READ_DONE();
+}
+
+/*
  * _readHash
  */
 static Hash *
@@ -2477,6 +2497,8 @@ parseNodeString(void)
 		return_value = _readUnique();
 	else if (MATCH("GATHER", 6))
 		return_value = _readGather();
+	else if (MATCH("GATHERMERGE", 11))
+		return_value = _readGatherMerge();
 	else if (MATCH("HASH", 4))
 		return_value = _readHash();
 	else if (MATCH("SETOP", 5))
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index 2a49639..5dbb83e 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -126,6 +126,7 @@ bool		enable_nestloop = true;
 bool		enable_material = true;
 bool		enable_mergejoin = true;
 bool		enable_hashjoin = true;
+bool		enable_gathermerge = true;
 
 typedef struct
 {
@@ -391,6 +392,70 @@ cost_gather(GatherPath *path, PlannerInfo *root,
 }
 
 /*
+ * cost_gather_merge
+ *	  Determines and returns the cost of gather merge path.
+ *
+ * GatherMerge merges several pre-sorted input streams, using a heap that at
+ * any given instant holds the next tuple from each stream. If there are N
+ * streams, we need about N*log2(N) tuple comparisons to construct the heap at
+ * startup, and then for each output tuple, about log2(N) comparisons to delete
+ * the top heap entry and another log2(N) comparisons to insert its successor
+ * from the same stream.
+ *
+ * The heap is never spilled to disk, since we assume N is not very large. So
+ * this is much simpler than cost_sort.
+ */
+void
+cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+				  RelOptInfo *rel, ParamPathInfo *param_info,
+				  Cost input_startup_cost, Cost input_total_cost)
+{
+	Cost		startup_cost = 0;
+	Cost		run_cost = 0;
+	Cost		comparison_cost;
+	double		N;
+	double		logN;
+
+	/* Mark the path with the correct row estimate */
+	if (param_info)
+		path->path.rows = param_info->ppi_rows;
+	else
+		path->path.rows = path->subpath->rows;
+
+	if (!enable_gathermerge)
+		startup_cost += disable_cost;
+
+	/*
+	 * Avoid log(0)...
+	 */
+	N = (path->num_workers < 2) ? 2.0 : (double) path->num_workers;
+	logN = LOG2(N);
+
+	/* Assumed cost per tuple comparison */
+	comparison_cost = 2.0 * cpu_operator_cost;
+
+	/* Heap creation cost */
+	startup_cost += comparison_cost * N * logN;
+
+	/* Per-tuple heap maintenance cost */
+	run_cost += path->path.rows * comparison_cost * 2.0 * logN;
+
+	/* small cost for heap management, like cost_merge_append */
+	run_cost += cpu_operator_cost * path->path.rows;
+
+	/*
+	 * Parallel setup and communication cost.  Gather Merge also requires a
+	 * tuple to be read from each worker in wait mode, so allow some extra
+	 * cost for that.
+	 */
+	startup_cost += parallel_setup_cost;
+	run_cost += parallel_tuple_cost * path->path.rows;
+
+	path->path.startup_cost = startup_cost + input_startup_cost;
+	path->path.total_cost = (startup_cost + run_cost + input_total_cost);
+}
+
+/*
  * cost_index
  *	  Determines and returns the cost of scanning a relation using an index.
  *
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 47158f6..96bed2e 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -270,6 +270,11 @@ static ModifyTable *make_modifytable(PlannerInfo *root,
 				 List *resultRelations, List *subplans,
 				 List *withCheckOptionLists, List *returningLists,
 				 List *rowMarks, OnConflictExpr *onconflict, int epqParam);
+static GatherMerge *create_gather_merge_plan(PlannerInfo *root,
+											 GatherMergePath *best_path);
+static GatherMerge *make_gather_merge(List *qptlist, List *qpqual,
+									  int nworkers, bool single_copy,
+									  Plan *subplan);
 
 
 /*
@@ -463,6 +468,10 @@ create_plan_recurse(PlannerInfo *root, Path *best_path, int flags)
 											  (LimitPath *) best_path,
 											  flags);
 			break;
+		case T_GatherMerge:
+			plan = (Plan *) create_gather_merge_plan(root,
+												(GatherMergePath *) best_path);
+			break;
 		default:
 			elog(ERROR, "unrecognized node type: %d",
 				 (int) best_path->pathtype);
@@ -2246,6 +2255,90 @@ create_limit_plan(PlannerInfo *root, LimitPath *best_path, int flags)
 	return plan;
 }
 
+/*
+ * create_gather_merge_plan
+ *
+ *	  Create a Gather merge plan for 'best_path' and (recursively)
+ *	  plans for its subpaths.
+ */
+static GatherMerge *
+create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
+{
+	GatherMerge *gm_plan;
+	Plan	   *subplan;
+	List	   *pathkeys = best_path->path.pathkeys;
+	int			numsortkeys;
+	AttrNumber *sortColIdx;
+	Oid		   *sortOperators;
+	Oid		   *collations;
+	bool	   *nullsFirst;
+
+	subplan = create_plan_recurse(root, best_path->subpath, CP_EXACT_TLIST);
+
+	gm_plan = make_gather_merge(subplan->targetlist,
+								NIL,
+								best_path->num_workers,
+								best_path->single_copy,
+								subplan);
+
+	copy_generic_path_info(&gm_plan->plan, &best_path->path);
+
+	if (pathkeys)
+	{
+		/* Compute sort column info, and adjust GatherMerge tlist as needed */
+		(void) prepare_sort_from_pathkeys(&gm_plan->plan, pathkeys,
+										  best_path->path.parent->relids,
+										  NULL,
+										  true,
+										  &gm_plan->numCols,
+										  &gm_plan->sortColIdx,
+										  &gm_plan->sortOperators,
+										  &gm_plan->collations,
+										  &gm_plan->nullsFirst);
+
+
+		/* Compute sort column info, and adjust subplan's tlist as needed */
+		subplan = prepare_sort_from_pathkeys(subplan, pathkeys,
+											 best_path->subpath->parent->relids,
+											 gm_plan->sortColIdx,
+											 false,
+											 &numsortkeys,
+											 &sortColIdx,
+											 &sortOperators,
+											 &collations,
+											 &nullsFirst);
+
+		/*
+		 * Check that we got the same sort key information.  We just Assert
+		 * that the sortops match, since those depend only on the pathkeys;
+		 * but it seems like a good idea to check the sort column numbers
+		 * explicitly, to ensure the tlists really do match up.
+		 */
+		Assert(numsortkeys == gm_plan->numCols);
+		if (memcmp(sortColIdx, gm_plan->sortColIdx,
+				   numsortkeys * sizeof(AttrNumber)) != 0)
+			elog(ERROR, "GatherMerge child's targetlist doesn't match GatherMerge");
+		Assert(memcmp(sortOperators, gm_plan->sortOperators,
+					  numsortkeys * sizeof(Oid)) == 0);
+		Assert(memcmp(collations, gm_plan->collations,
+					  numsortkeys * sizeof(Oid)) == 0);
+		Assert(memcmp(nullsFirst, gm_plan->nullsFirst,
+					  numsortkeys * sizeof(bool)) == 0);
+
+		/* Now, insert a Sort node if subplan isn't sufficiently ordered */
+		if (!pathkeys_contained_in(pathkeys, best_path->subpath->pathkeys))
+			subplan = (Plan *) make_sort(subplan, numsortkeys,
+										 sortColIdx, sortOperators,
+										 collations, nullsFirst);
+
+		gm_plan->plan.lefttree = subplan;
+	}
+
+	/* use parallel mode for parallel plans. */
+	root->glob->parallelModeNeeded = true;
+
+	return gm_plan;
+}
 
 /*****************************************************************************
  *
@@ -5902,6 +5995,26 @@ make_gather(List *qptlist,
 	return node;
 }
 
+static GatherMerge *
+make_gather_merge(List *qptlist,
+				  List *qpqual,
+				  int nworkers,
+				  bool single_copy,
+				  Plan *subplan)
+{
+	GatherMerge	*node = makeNode(GatherMerge);
+	Plan		*plan = &node->plan;
+
+	/* cost should be inserted by caller */
+	plan->targetlist = qptlist;
+	plan->qual = qpqual;
+	plan->lefttree = subplan;
+	plan->righttree = NULL;
+	node->num_workers = nworkers;
+
+	return node;
+}
+
 /*
  * distinctList is a list of SortGroupClauses, identifying the targetlist
  * items that should be considered by the SetOp filter.  The input path must
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 644b8b6..0325c53 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -3725,14 +3725,59 @@ create_grouping_paths(PlannerInfo *root,
 
 		/*
 		 * Now generate a complete GroupAgg Path atop of the cheapest partial
-		 * path. We need only bother with the cheapest path here, as the
-		 * output of Gather is never sorted.
+		 * path. We generate a Gather path based on the cheapest partial path,
+		 * and a GatherMerge path for each partial path that is properly sorted.
 		 */
 		if (grouped_rel->partial_pathlist)
 		{
 			Path	   *path = (Path *) linitial(grouped_rel->partial_pathlist);
 			double		total_groups = path->rows * path->parallel_workers;
 
+			/*
+			 * GatherMerge output is always sorted, so if there is a GROUP BY
+			 * clause, try to generate a GatherMerge path for each partial path.
+			 */
+			if (parse->groupClause)
+			{
+				foreach(lc, grouped_rel->partial_pathlist)
+				{
+					Path	   *gmpath = (Path *) lfirst(lc);
+
+					if (!pathkeys_contained_in(root->group_pathkeys, gmpath->pathkeys))
+						continue;
+
+					/* create gather merge path */
+					gmpath = (Path *) create_gather_merge_path(root,
+															   grouped_rel,
+															   gmpath,
+															   NULL,
+															   root->group_pathkeys,
+															   NULL);
+
+					if (parse->hasAggs)
+						add_path(grouped_rel, (Path *)
+								 create_agg_path(root,
+												 grouped_rel,
+												 gmpath,
+												 target,
+												 parse->groupClause ? AGG_SORTED : AGG_PLAIN,
+												 AGGSPLIT_FINAL_DESERIAL,
+												 parse->groupClause,
+												 (List *) parse->havingQual,
+												 &agg_final_costs,
+												 dNumGroups));
+					else
+						add_path(grouped_rel, (Path *)
+								create_group_path(root,
+												  grouped_rel,
+												  gmpath,
+												  target,
+												  parse->groupClause,
+												  (List *) parse->havingQual,
+												  dNumGroups));
+				}
+			}
+
 			path = (Path *) create_gather_path(root,
 											   grouped_rel,
 											   path,
@@ -3870,6 +3915,12 @@ create_grouping_paths(PlannerInfo *root,
 	/* Now choose the best path(s) */
 	set_cheapest(grouped_rel);
 
+	/*
+	 * The partial pathlist generated for the grouped relation is of no
+	 * further use, so just reset it to NIL.
+	 */
+	grouped_rel->partial_pathlist = NIL;
+
 	return grouped_rel;
 }
 
@@ -4166,6 +4217,36 @@ create_distinct_paths(PlannerInfo *root,
 			}
 		}
 
+		/*
+		 * Generate GatherMerge path for each partial path.
+		 */
+		foreach(lc, input_rel->partial_pathlist)
+		{
+			Path	   *path = (Path *) lfirst(lc);
+
+			if (!pathkeys_contained_in(needed_pathkeys, path->pathkeys))
+			{
+				path = (Path *) create_sort_path(root, distinct_rel,
+												 path,
+												 needed_pathkeys,
+												 -1.0);
+			}
+
+			/* create gather merge path */
+			path = (Path *) create_gather_merge_path(root,
+													 distinct_rel,
+													 path,
+													 NULL,
+													 needed_pathkeys,
+													 NULL);
+			add_path(distinct_rel, (Path *)
+					 create_upper_unique_path(root,
+											  distinct_rel,
+											  path,
+											  list_length(root->distinct_pathkeys),
+											  numDistinctRows));
+		}
+
 		/* For explicit-sort case, always use the more rigorous clause */
 		if (list_length(root->distinct_pathkeys) <
 			list_length(root->sort_pathkeys))
@@ -4310,6 +4391,39 @@ create_ordered_paths(PlannerInfo *root,
 	ordered_rel->useridiscurrent = input_rel->useridiscurrent;
 	ordered_rel->fdwroutine = input_rel->fdwroutine;
 
+	foreach(lc, input_rel->partial_pathlist)
+	{
+		Path	   *path = (Path *) lfirst(lc);
+		bool		is_sorted;
+
+		is_sorted = pathkeys_contained_in(root->sort_pathkeys,
+										  path->pathkeys);
+		if (!is_sorted)
+		{
+			/* An explicit sort here can take advantage of LIMIT */
+			path = (Path *) create_sort_path(root,
+											 ordered_rel,
+											 path,
+											 root->sort_pathkeys,
+											 limit_tuples);
+		}
+
+		/* create gather merge path */
+		path = (Path *) create_gather_merge_path(root,
+												 ordered_rel,
+												 path,
+												 target,
+												 root->sort_pathkeys,
+												 NULL);
+
+		/* Add projection step if needed */
+		if (path->pathtarget != target)
+			path = apply_projection_to_path(root, ordered_rel,
+											path, target);
+
+		add_path(ordered_rel, path);
+	}
+
 	foreach(lc, input_rel->pathlist)
 	{
 		Path	   *path = (Path *) lfirst(lc);
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index d10a983..d14db7d 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -605,6 +605,7 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 			break;
 
 		case T_Gather:
+		case T_GatherMerge:
 			set_upper_references(root, plan, rtoffset);
 			break;
 
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 263ba45..760f519 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2682,6 +2682,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		case T_Sort:
 		case T_Unique:
 		case T_Gather:
+		case T_GatherMerge:
 		case T_SetOp:
 		case T_Group:
 			break;
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index abb7507..822fca2 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1630,6 +1630,66 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
 }
 
 /*
+ * create_gather_merge_path
+ *
+ *	  Creates a path corresponding to a gather merge scan, returning
+ *	  the pathnode.
+ */
+GatherMergePath *
+create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
+						 PathTarget *target, List *pathkeys,
+						 Relids required_outer)
+{
+	GatherMergePath *pathnode = makeNode(GatherMergePath);
+	Cost			 input_startup_cost = 0;
+	Cost			 input_total_cost = 0;
+
+	Assert(subpath->parallel_safe);
+	Assert(pathkeys);
+
+	pathnode->path.pathtype = T_GatherMerge;
+	pathnode->path.parent = rel;
+	pathnode->path.param_info = get_baserel_parampathinfo(root, rel,
+														  required_outer);
+	pathnode->path.parallel_aware = false;
+
+	pathnode->subpath = subpath;
+	pathnode->num_workers = subpath->parallel_workers;
+	pathnode->path.pathkeys = pathkeys;
+	pathnode->path.pathtarget = target ? target : rel->reltarget;
+	pathnode->path.rows += subpath->rows;
+
+	if (pathkeys_contained_in(pathkeys, subpath->pathkeys))
+	{
+		/* Subpath is adequately ordered, we won't need to sort it */
+		input_startup_cost += subpath->startup_cost;
+		input_total_cost += subpath->total_cost;
+	}
+	else
+	{
+		/* We'll need to insert a Sort node, so include cost for that */
+		Path		sort_path;		/* dummy for result of cost_sort */
+
+		cost_sort(&sort_path,
+				  root,
+				  pathkeys,
+				  subpath->total_cost,
+				  subpath->rows,
+				  subpath->pathtarget->width,
+				  0.0,
+				  work_mem,
+				  -1);
+		input_startup_cost += sort_path.startup_cost;
+		input_total_cost += sort_path.total_cost;
+	}
+
+	cost_gather_merge(pathnode, root, rel, pathnode->path.param_info,
+					  input_startup_cost, input_total_cost);
+
+	return pathnode;
+}
+
+/*
  * translate_sub_tlist - get subquery column numbers represented by tlist
  *
  * The given targetlist usually contains only Vars referencing the given relid.
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 65660c1..f605284 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -894,6 +894,15 @@ static struct config_bool ConfigureNamesBool[] =
 		true,
 		NULL, NULL, NULL
 	},
+	{
+		{"enable_gathermerge", PGC_USERSET, QUERY_TUNING_METHOD,
+			gettext_noop("Enables the planner's use of gather merge plans."),
+			NULL
+		},
+		&enable_gathermerge,
+		true,
+		NULL, NULL, NULL
+	},
 
 	{
 		{"geqo", PGC_USERSET, QUERY_TUNING_GEQO,
diff --git a/src/include/executor/nodeGatherMerge.h b/src/include/executor/nodeGatherMerge.h
new file mode 100644
index 0000000..58dcebf
--- /dev/null
+++ b/src/include/executor/nodeGatherMerge.h
@@ -0,0 +1,27 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.h
+ *		prototypes for nodeGatherMerge.c
+ *
+ *
+ * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/executor/nodeGatherMerge.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef NODEGATHERMERGE_H
+#define NODEGATHERMERGE_H
+
+#include "nodes/execnodes.h"
+
+extern GatherMergeState *ExecInitGatherMerge(GatherMerge * node,
+					EState *estate,
+					int eflags);
+extern TupleTableSlot *ExecGatherMerge(GatherMergeState * node);
+extern void ExecEndGatherMerge(GatherMergeState * node);
+extern void ExecReScanGatherMerge(GatherMergeState * node);
+extern void ExecShutdownGatherMerge(GatherMergeState * node);
+
+#endif   /* NODEGATHERMERGE_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index f6f73f3..3feb3f1 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -1969,6 +1969,33 @@ typedef struct GatherState
 } GatherState;
 
 /* ----------------
+ * GatherMergeState information
+ *
+ *		Gather Merge nodes launch one or more parallel workers, run a
+ *		sorted subplan in those workers, and merge the sorted results.
+ * ----------------
+ */
+struct GMReaderTuple;
+
+typedef struct GatherMergeState
+{
+	PlanState	ps;				/* its first field is NodeTag */
+	bool		initialized;
+	struct ParallelExecutorInfo *pei;
+	int			nreaders;
+	int			nworkers_launched;
+	struct TupleQueueReader **reader;
+	TupleTableSlot *funnel_slot;
+	TupleTableSlot **gm_slots;
+	struct binaryheap *gm_heap; /* binary heap of slot indices */
+	bool		gm_initialized; /* gather merge initialized? */
+	bool		need_to_scan_locally;
+	int			gm_nkeys;
+	SortSupport gm_sortkeys;	/* array of length gm_nkeys */
+	struct GMReaderTuple *gm_tuple;	/* array of length nreaders + 1 (leader) */
+} GatherMergeState;
+
+/* ----------------
  *	 HashState information
  * ----------------
  */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 88297bb..edfb917 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -75,6 +75,7 @@ typedef enum NodeTag
 	T_WindowAgg,
 	T_Unique,
 	T_Gather,
+	T_GatherMerge,
 	T_Hash,
 	T_SetOp,
 	T_LockRows,
@@ -123,6 +124,7 @@ typedef enum NodeTag
 	T_WindowAggState,
 	T_UniqueState,
 	T_GatherState,
+	T_GatherMergeState,
 	T_HashState,
 	T_SetOpState,
 	T_LockRowsState,
@@ -244,6 +246,7 @@ typedef enum NodeTag
 	T_MaterialPath,
 	T_UniquePath,
 	T_GatherPath,
+	T_GatherMergePath,
 	T_ProjectionPath,
 	T_SortPath,
 	T_GroupPath,
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index e2fbc7d..ec319bf 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -765,6 +765,22 @@ typedef struct Gather
 	bool		invisible;		/* suppress EXPLAIN display (for testing)? */
 } Gather;
 
+/* ------------
+ *		gather merge node
+ * ------------
+ */
+typedef struct GatherMerge
+{
+	Plan		plan;
+	int			num_workers;
+	/* remaining fields are just like the sort-key info in struct Sort */
+	int			numCols;		/* number of sort-key columns */
+	AttrNumber *sortColIdx;		/* their indexes in the target list */
+	Oid		   *sortOperators;	/* OIDs of operators to sort them by */
+	Oid		   *collations;		/* OIDs of collations */
+	bool	   *nullsFirst;		/* NULLS FIRST/LAST directions */
+} GatherMerge;
+
 /* ----------------
  *		hash build node
  *
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index 3a1255a..dfaca79 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -1194,6 +1194,19 @@ typedef struct GatherPath
 } GatherPath;
 
 /*
+ * GatherMergePath runs several copies of a sorted plan in parallel and
+ * merges their results, preserving the common sort order.
+ */
+typedef struct GatherMergePath
+{
+	Path		path;
+	Path	   *subpath;		/* path for each worker */
+	int			num_workers;	/* number of workers sought to help */
+	bool		single_copy;	/* path must not be executed >1x */
+} GatherMergePath;
+
+
+/*
  * All join-type paths share these fields.
  */
 
diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h
index 2a4df2f..cd48cc4 100644
--- a/src/include/optimizer/cost.h
+++ b/src/include/optimizer/cost.h
@@ -66,6 +66,7 @@ extern bool enable_nestloop;
 extern bool enable_material;
 extern bool enable_mergejoin;
 extern bool enable_hashjoin;
+extern bool enable_gathermerge;
 extern int	constraint_exclusion;
 
 extern double clamp_row_est(double nrows);
@@ -198,5 +199,8 @@ extern Selectivity clause_selectivity(PlannerInfo *root,
 				   int varRelid,
 				   JoinType jointype,
 				   SpecialJoinInfo *sjinfo);
+extern void cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+							  RelOptInfo *rel, ParamPathInfo *param_info,
+							  Cost input_startup_cost, Cost input_total_cost);
 
 #endif   /* COST_H */
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index 71d9154..3dbe9fc 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -267,5 +267,10 @@ extern ParamPathInfo *get_joinrel_parampathinfo(PlannerInfo *root,
 						  List **restrict_clauses);
 extern ParamPathInfo *get_appendrel_parampathinfo(RelOptInfo *appendrel,
 							Relids required_outer);
+extern GatherMergePath *create_gather_merge_path(PlannerInfo *root,
+												 RelOptInfo *rel, Path *subpath,
+												 PathTarget *target,
+												 List *pathkeys,
+												 Relids required_outer);
 
 #endif   /* PATHNODE_H */
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
index f06cfa4..5c547e2 100644
--- a/src/test/regress/expected/rangefuncs.out
+++ b/src/test/regress/expected/rangefuncs.out
@@ -2,6 +2,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
          name         | setting 
 ----------------------+---------
  enable_bitmapscan    | on
+ enable_gathermerge   | on
  enable_hashagg       | on
  enable_hashjoin      | on
  enable_indexonlyscan | on
@@ -12,7 +13,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
  enable_seqscan       | on
  enable_sort          | on
  enable_tidscan       | on
-(11 rows)
+(12 rows)
 
 CREATE TABLE foo2(fooid int, f2 int);
 INSERT INTO foo2 VALUES(1, 11);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 6c6d519..a6c4a5f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -770,6 +770,8 @@ GV
 Gather
 GatherPath
 GatherState
+GatherMerge
+GatherMergeState
 Gene
 GenericCosts
 GenericExprState
#5Peter Geoghegan
pg@heroku.com
In reply to: Rushabh Lathia (#1)
Re: Gather Merge

On Tue, Oct 4, 2016 at 11:05 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

Query 4: With GM 7901.480 -> Without GM 9064.776
Query 5: With GM 53452.126 -> Without GM 55059.511
Query 9: With GM 52613.132 -> Without GM 98206.793
Query 15: With GM 68051.058 -> Without GM 68918.378
Query 17: With GM 129236.075 -> Without GM 160451.094
Query 20: With GM 259144.232 -> Without GM 306256.322
Query 21: With GM 153483.497 -> Without GM 168169.916

Here from the results we can see that query 9, 17 and 20 are the one which
show good performance benefit with the Gather Merge.

Were all other TPC-H queries unaffected? IOW, did they have the same
plan as before with your patch applied? Did you see any regressions?

I assume that this patch has each worker use work_mem for its own
sort, as with hash joins today. One concern with that model when
testing is that you could end up with a bunch of internal sorts for
cases with a GM node, where you get one big external sort for cases
without one. Did you take that into consideration?

--
Peter Geoghegan


#6Amit Kapila
amit.kapila16@gmail.com
In reply to: Rushabh Lathia (#4)
Re: Gather Merge

On Tue, Oct 18, 2016 at 5:29 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

On Mon, Oct 17, 2016 at 2:26 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

There is lot of common code between ExecGatherMerge and ExecGather.
Do you think it makes sense to have a common function to avoid the
duplicity?

I see there are small discrepancies in both the codes like I don't see
the use of single_copy flag, as it is present in gather node.

Yes, even I thought to centralize some things of ExecGather and
ExecGatherMerge, but it's really not something that is fixed. And I thought
it might change, particularly for the Gather Merge. And as explained by
Robert, single_copy doesn't make sense for the Gather Merge. I will still
look into this to see if something can be centralized.

Okay, I haven't thought about it, but do let me know if you couldn't
find any way to merge the code.

3.
+gather_merge_readnext(GatherMergeState * gm_state, int reader, bool force)
{
..
+ tup = gm_readnext_tuple(gm_state, reader, force, NULL);
+
+ /*
+ * try to read more tuple into nowait mode and store it into the tuple
+ * array.
+ */
+ if (HeapTupleIsValid(tup))
+ fill_tuple_array(gm_state, reader);

How is the above read tuple stored in the array? In any case, the above
interface seems slightly awkward to me. Basically, I think what you
are trying to do here is: after reading the first tuple in wait mode, you
are trying to fill the array by reading more tuples. So, can't we
push the reading of this first tuple into that function and name it
form_tuple_array()?

Yes, you are right.

You have not answered my first question. I will try to ask again, how
the tuple read by below code is stored in the array:

+ tup = gm_readnext_tuple(gm_state, reader, force, NULL);

First it tries to read a tuple in wait mode, and once it finds a tuple it
tries to fill the tuple array (which basically reads tuples in nowait mode).
The reason I keep it separate is that, while initializing the gather merge,
if we are unable to read a tuple from every worker, then while re-reading we
again try to fill the tuple array for any worker that has already produced
at least a single tuple (see gather_merge_init() for more details).

Whenever any worker produced one tuple, you already try to fill the
array in gather_merge_readnext(), so does the above code help much?

Also I thought filling the tuple array (which basically reads tuples in
nowait mode) and reading a tuple (in wait mode) are two separate tasks - and
if they are in separate functions the code looks clearer.

To me that looks slightly confusing.

If you have any suggestion for the function name
(fill_tuple_array)
then I am open to change that.

form_tuple_array (form_tuple is used at many places in the code, so it
should look okay). I think you might want to consider forming the array
even for the leader; although it might not be as beneficial as for the
workers, OTOH, the code will get simplified if we do it that way.

Today, I observed another issue in code:

+gather_merge_init(GatherMergeState *gm_state)
{
..
+reread:
+ for (i = 0; i < nreaders + 1; i++)
+ {
+ if (TupIsNull(gm_state->gm_slots[i]) ||
+ gm_state->gm_slots[i]->tts_isempty)
+ {
+ if (gather_merge_readnext(gm_state, i, initialize ? false : true))
+ {
+ binaryheap_add_unordered(gm_state->gm_heap,
+ Int32GetDatum(i));
+ }
+ }
+ else
+ fill_tuple_array(gm_state, i);
+ }
+ initialize = false;
+
+ for (i = 0; i < nreaders; i++)
+ if (TupIsNull(gm_state->gm_slots[i]) || gm_state->gm_slots[i]->tts_isempty)
+ goto reread;
..
}

This code can cause infinite loop. Consider a case where one of the
worker doesn't get any tuple because by the time it starts all the
tuples are consumed by all other workers. The above code will keep on
looping to fetch the tuple from that worker whereas that worker can't
return any tuple. I think you can fix it by checking if the
particular queue has been exhausted.
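For illustration, a minimal sketch of that kind of fix - using the per-reader
"done" flag that the v3 patch posted later in this thread adds to
GMReaderTuple; the exact names here are only illustrative:

	/*
	 * Re-read only from readers whose slot is still empty AND whose queue
	 * has not been exhausted; otherwise a worker that never returns a
	 * tuple would keep us looping forever.
	 */
	for (i = 0; i < nreaders; i++)
		if (!gm_state->gm_tuple[i].done &&
			(TupIsNull(gm_state->gm_slots[i]) ||
			 gm_state->gm_slots[i]->tts_isempty))
			goto reread;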

Open Issue:

- Commit af33039317ddc4a0e38a02e2255c2bf453115fd2 fixed the leak in tqueue.c
by calling gather_readnext() in a per-tuple context. Now for gather merge
that is not possible, as we store the tuple into the tuple array and we want
the tuple to be freed only once it has passed through the merge-sort
algorithm. One idea is, we can also call gm_readnext_tuple() under the
per-tuple context (which will fix the leak in tqueue.c) and then store a
copy of the tuple into the tuple array.

Won't an extra copy per tuple impact performance? Was the fix in the
mentioned commit for record or composite types? If so, does GatherMerge
support such types, and if it supports them, does it provide any benefit
over Gather?

I don't think it was specifically for the record or composite types - but I
might be wrong. As per my understanding the commit fixes a leak in tqueue.c.

Hmm, in tqueue.c, I think the memory leak was in the remapping logic; refer
to mail [1] of Tom (Just to add insult to injury, the backend's memory
consumption bloats to something over 5.5G during that last query).

The fix was to add the standard of calling TupleQueueReaderNext() with a
shorter memory context - so that tqueue.c doesn't leak the memory.

I have an idea to fix this by calling TupleQueueReaderNext() with the
per-tuple context, and then copying the tuple and storing it into the tuple
array; later, the next run of ExecStoreTuple() will free the earlier tuple.
I will work on that.

Okay, if you think that is viable, then you can do it, but do check
the performance impact of same, because extra copy per fetched tuple
can impact performance.
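For reference, a minimal sketch of that approach - essentially what
gm_readnext_tuple() and its callers end up doing in the v3 patch posted
below; the variable names here are only illustrative:

	/* Run TupleQueueReaderNext() in the short-lived per-tuple context... */
	oldContext = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);
	tup = TupleQueueReaderNext(reader, nowait, &done);
	MemoryContextSwitchTo(oldContext);

	/* ...and keep a copy that survives the per-tuple reset. */
	if (HeapTupleIsValid(tup))
		tup = heap_copytuple(tup);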

[1]: /messages/by-id/32763.1469821037@sss.pgh.pa.us

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#7Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Peter Geoghegan (#5)
Re: Gather Merge

On Thu, Oct 20, 2016 at 12:22 AM, Peter Geoghegan <pg@heroku.com> wrote:

On Tue, Oct 4, 2016 at 11:05 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

Query 4: With GM 7901.480 -> Without GM 9064.776
Query 5: With GM 53452.126 -> Without GM 55059.511
Query 9: With GM 52613.132 -> Without GM 98206.793
Query 15: With GM 68051.058 -> Without GM 68918.378
Query 17: With GM 129236.075 -> Without GM 160451.094
Query 20: With GM 259144.232 -> Without GM 306256.322
Query 21: With GM 153483.497 -> Without GM 168169.916

Here from the results we can see that query 9, 17 and 20 are the one

which

show good performance benefit with the Gather Merge.

Were all other TPC-H queries unaffected? IOW, did they have the same
plan as before with your patch applied? Did you see any regressions?

Yes, all other TPC-H queries were unaffected by the patch. At the initial
stage of patch development I noticed regressions, but then realized that it
was because I was not allowing the leader to participate in the GM. Later on
I fixed that, and after that I didn't notice any regressions.
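For context, a simplified extract of how the patch lets the leader
participate: the last gm_slot is reserved for the leader, which runs the
subplan itself rather than reading from a tuple queue (see
gather_merge_readnext() in the posted patch).

	if (reader == gm_state->nreaders && gm_state->need_to_scan_locally)
	{
		TupleTableSlot *outerTupleSlot = ExecProcNode(outerPlanState(gm_state));

		if (!TupIsNull(outerTupleSlot))
		{
			gm_state->gm_slots[reader] = outerTupleSlot;
			return true;
		}
		/* The leader's own input is exhausted. */
		gm_state->need_to_scan_locally = false;
	}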

I assume that this patch has each worker use work_mem for its own
sort, as with hash joins today. One concern with that model when
testing is that you could end up with a bunch of internal sorts for
cases with a GM node, where you get one big external sort for cases
without one. Did you take that into consideration?

Yes, but isn't that good? Please correct me if I am missing anything.

--
Peter Geoghegan

--
Rushabh Lathia
www.EnterpriseDB.com

#8Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Amit Kapila (#6)
Re: Gather Merge

On Thu, Oct 20, 2016 at 1:12 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Tue, Oct 18, 2016 at 5:29 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

On Mon, Oct 17, 2016 at 2:26 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

There is lot of common code between ExecGatherMerge and ExecGather.
Do you think it makes sense to have a common function to avoid the
duplicity?

I see there are small discrepancies in both the codes like I don't see
the use of single_copy flag, as it is present in gather node.

Yes, even I thought to centrilize some things of ExecGather and
ExecGatherMerge,
but its really not something that is fixed. And I thought it might change
particularly
for the Gather Merge. And as explained by Robert single_copy doesn't make
sense
for the Gather Merge. I will still look into this to see if something

can be

make
centralize.

Okay, I haven't thought about it, but do let me know if you couldn't
find any way to merge the code.

Sure, I will look into this.

3.
+gather_merge_readnext(GatherMergeState * gm_state, int reader, bool
force)
{
..
+ tup = gm_readnext_tuple(gm_state, reader, force, NULL);
+
+ /*
+
* try to read more tuple into nowait mode and store it into the tuple
+ * array.
+
*/
+ if (HeapTupleIsValid(tup))
+ fill_tuple_array(gm_state, reader);

How the above read tuple is stored in array? In anycase the above
interface seems slightly awkward to me. Basically, I think what you
are trying to do here is after reading first tuple in waitmode, you
are trying to fill the array by reading more tuples. So, can't we
push reading of this fist tuple into that function and name it as
form_tuple_array().

Yes, you are right.

You have not answered my first question. I will try to ask again, how
the tuple read by below code is stored in the array:

The tuple gets stored directly into the related TupleTableSlot. At the end
of gather_merge_readnext() it builds the TupleTableSlot for the given tuple,
so the tuple read by the above code is stored directly into the
TupleTableSlot.
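That is, roughly this call at the end of gather_merge_readnext() in the
attached patch:

	/* Build the TupleTableSlot for the given tuple */
	ExecStoreTuple(tup, gm_state->gm_slots[reader], InvalidBuffer, true);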

+ tup = gm_readnext_tuple(gm_state, reader, force, NULL);

First its trying to read tuple into wait-mode, and once
it
find tuple then its try to fill the tuple array (which basically try to

read

tuple
into nowait-mode). Reason I keep it separate is because in case of
initializing
the gather merge, if we unable to read tuple from all the worker - while
trying
re-read, we again try to fill the tuple array for the worker who already
produced
atleast a single tuple (see gather_merge_init() for more details).

Whenever any worker produced one tuple, you already try to fill the
array in gather_merge_readnext(), so does the above code help much?

Also I
thought
filling tuple array (which basically read tuple into nowait mode) and
reading tuple
(into wait-mode) are two separate task - and if its into separate

function

that code
look more clear.

To me that looks slightly confusing.

If you have any suggestion for the function name
(fill_tuple_array)
then I am open to change that.

form_tuple_array (form_tuple is used at many places in code, so it
should look okay).

Ok, I will rename it in the next patch.

I think you might want to consider forming array
even for leader, although it might not be as beneficial as for
workers, OTOH, the code will get simplified if we do that way.

Yes, I did that earlier - and as you guessed it's not really beneficial,
so to avoid the extra memory allocation for the tuple array, I am not
forming the array for the leader.

Today, I observed another issue in code:

+gather_merge_init(GatherMergeState *gm_state)
{
..
+reread:
+ for (i = 0; i < nreaders + 1; i++)
+ {
+ if (TupIsNull(gm_state->gm_slots[i]) ||
+ gm_state->gm_slots[i]->tts_isempty)
+ {
+ if (gather_merge_readnext(gm_state, i, initialize ? false : true))
+ {
+ binaryheap_add_unordered(gm_state->gm_heap,
+ Int32GetDatum(i));
+ }
+ }
+ else
+ fill_tuple_array(gm_state, i);
+ }
+ initialize = false;
+
+ for (i = 0; i < nreaders; i++)
+ if (TupIsNull(gm_state->gm_slots[i]) || gm_state->gm_slots[i]->tts_isempty)
+ goto reread;
..
}

This code can cause infinite loop. Consider a case where one of the
worker doesn't get any tuple because by the time it starts all the
tuples are consumed by all other workers. The above code will keep on
looping to fetch the tuple from that worker whereas that worker can't
return any tuple. I think you can fix it by checking if the
particular queue has been exhausted.

Oh yes. I will work on the fix and soon submit the next version of the patch.

Open Issue:

- Commit af33039317ddc4a0e38a02e2255c2bf453115fd2 fixed the leak into
tqueue.c by
calling gather_readnext() into per-tuple context. Now for gather merge
that
is
not possible, as we storing the tuple into Tuple array and we want

tuple

to
be
free only its get pass through the merge sort algorithm. One idea is,

we

can
also call gm_readnext_tuple() under per-tuple context (which will fix
the
leak
into tqueue.c) and then store the copy of tuple into tuple array.

Won't extra copy per tuple impact performance? Is the fix in
mentioned commit was for record or composite types, if so, does
GatherMerge support such types and if it support, does it provide any
benefit over Gather?

I don't think was specificially for the record or composite types - but I
might be
wrong. As per my understanding commit fix leak into tqueue.c.

Hmm, in tqueue.c, I think the memory leak was remapping logic, refer
mail [1] of Tom (Just to add insult to injury, the backend's memory
consumption bloats to something over 5.5G during that last query).

Fix was to add
standard to call TupleQueueReaderNext() with shorter memory context - so
that
tqueue.c doesn't leak the memory.

I have idea to fix this by calling the TupleQueueReaderNext() with

per-tuple

context,
and then copy the tuple and store it to the tuple array and later with

the

next run of
ExecStoreTuple() will free the earlier tuple. I will work on that.

Okay, if you think that is viable, then you can do it, but do check
the performance impact of same, because extra copy per fetched tuple
can impact performance.

Sure, I will check the performance impact for the same.

[1] - /messages/by-id/32763.1469821037@sss.pgh.pa.us

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

Thanks,
Rushabh Lathia
www.EnterpriseDB.com

#9Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Rushabh Lathia (#8)
1 attachment(s)
Re: Gather Merge

Please find attached the latest patch, which fixes the review points as well
as some additional clean-up:

- Got rid of funnel_slot as it's not needed for the Gather Merge.
- Renamed fill_tuple_array to form_tuple_array.
- Fixed a possible infinite loop in gather_merge_init (reported by Amit
Kapila).
- Fixed the tqueue.c memory leak, by calling TupleQueueReaderNext() with the
per-tuple context.
- Code cleanup in ExecGatherMerge.
- Added a new function gather_merge_clear_slots(), which clears out all
gather merge slots and also frees the tuple array at end of execution.

I did the performance testing again with the latest patch and I haven't
observed any regression. Some of the TPC-H queries show an additional
benefit with the latest patch, but it's just under 5%.

Do let me know if I missed anything.

On Mon, Oct 24, 2016 at 11:55 AM, Rushabh Lathia <rushabh.lathia@gmail.com>
wrote:

On Thu, Oct 20, 2016 at 1:12 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Tue, Oct 18, 2016 at 5:29 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

On Mon, Oct 17, 2016 at 2:26 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

There is lot of common code between ExecGatherMerge and ExecGather.
Do you think it makes sense to have a common function to avoid the
duplicity?

I see there are small discrepancies in both the codes like I don't see
the use of single_copy flag, as it is present in gather node.

Yes, even I thought to centrilize some things of ExecGather and
ExecGatherMerge,
but its really not something that is fixed. And I thought it might

change

particularly
for the Gather Merge. And as explained by Robert single_copy doesn't

make

sense
for the Gather Merge. I will still look into this to see if something

can be

make
centralize.

Okay, I haven't thought about it, but do let me know if you couldn't
find any way to merge the code.

Sure, I will look into this.

3.
+gather_merge_readnext(GatherMergeState * gm_state, int reader, bool
force)
{
..
+ tup = gm_readnext_tuple(gm_state, reader, force, NULL);
+
+ /*
+
* try to read more tuple into nowait mode and store it into the tuple
+ * array.
+
*/
+ if (HeapTupleIsValid(tup))
+ fill_tuple_array(gm_state, reader);

How the above read tuple is stored in array? In anycase the above
interface seems slightly awkward to me. Basically, I think what you
are trying to do here is after reading first tuple in waitmode, you
are trying to fill the array by reading more tuples. So, can't we
push reading of this fist tuple into that function and name it as
form_tuple_array().

Yes, you are right.

You have not answered my first question. I will try to ask again, how
the tuple read by below code is stored in the array:

Tuple directly get stored into related TupleTableSlot.
In gather_merge_readnext()
at the end of function it build the TupleTableSlot for the given tuple. So
tuple
read by above code is directly stored into TupleTableSlot.

+ tup = gm_readnext_tuple(gm_state, reader, force, NULL);

First its trying to read tuple into wait-mode, and once
it
find tuple then its try to fill the tuple array (which basically try to

read

tuple
into nowait-mode). Reason I keep it separate is because in case of
initializing
the gather merge, if we unable to read tuple from all the worker - while
trying
re-read, we again try to fill the tuple array for the worker who already
produced
atleast a single tuple (see gather_merge_init() for more details).

Whenever any worker produced one tuple, you already try to fill the
array in gather_merge_readnext(), so does the above code help much?

Also I
thought
filling tuple array (which basically read tuple into nowait mode) and
reading tuple
(into wait-mode) are two separate task - and if its into separate

function

that code
look more clear.

To me that looks slightly confusing.

If you have any suggestion for the function name
(fill_tuple_array)
then I am open to change that.

form_tuple_array (form_tuple is used at many places in code, so it
should look okay).

Ok, I rename it with next patch.

I think you might want to consider forming array
even for leader, although it might not be as beneficial as for
workers, OTOH, the code will get simplified if we do that way.

Yes, I did that earlier - and as you guessed its not be any beneficial
so to avoided extra memory allocation for the tuple array, I am not
forming array for leader.

Today, I observed another issue in code:

+gather_merge_init(GatherMergeState *gm_state)
{
..
+reread:
+ for (i = 0; i < nreaders + 1; i++)
+ {
+ if (TupIsNull(gm_state->gm_slots[i]) ||
+ gm_state->gm_slots[i]->tts_isempty)
+ {
+ if (gather_merge_readnext(gm_state, i, initialize ? false : true))
+ {
+ binaryheap_add_unordered(gm_state->gm_heap,
+ Int32GetDatum(i));
+ }
+ }
+ else
+ fill_tuple_array(gm_state, i);
+ }
+ initialize = false;
+
+ for (i = 0; i < nreaders; i++)
+ if (TupIsNull(gm_state->gm_slots[i]) || gm_state->gm_slots[i]->tts_isempty)
+ goto reread;
..
}

This code can cause infinite loop. Consider a case where one of the
worker doesn't get any tuple because by the time it starts all the
tuples are consumed by all other workers. The above code will keep on
looping to fetch the tuple from that worker whereas that worker can't
return any tuple. I think you can fix it by checking if the
particular queue has been exhausted.

Oh yes. I will work on the fix and soon submit the next set of patch.

Open Issue:

- Commit af33039317ddc4a0e38a02e2255c2bf453115fd2 fixed the leak

into

tqueue.c by
calling gather_readnext() into per-tuple context. Now for gather

merge

that
is
not possible, as we storing the tuple into Tuple array and we want

tuple

to
be
free only its get pass through the merge sort algorithm. One idea

is, we

can
also call gm_readnext_tuple() under per-tuple context (which will fix
the
leak
into tqueue.c) and then store the copy of tuple into tuple array.

Won't extra copy per tuple impact performance? Is the fix in
mentioned commit was for record or composite types, if so, does
GatherMerge support such types and if it support, does it provide any
benefit over Gather?

I don't think was specificially for the record or composite types - but

I

might be
wrong. As per my understanding commit fix leak into tqueue.c.

Hmm, in tqueue.c, I think the memory leak was remapping logic, refer
mail [1] of Tom (Just to add insult to injury, the backend's memory
consumption bloats to something over 5.5G during that last query).

Fix was to add
standard to call TupleQueueReaderNext() with shorter memory context - so
that
tqueue.c doesn't leak the memory.

I have idea to fix this by calling the TupleQueueReaderNext() with

per-tuple

context,
and then copy the tuple and store it to the tuple array and later with

the

next run of
ExecStoreTuple() will free the earlier tuple. I will work on that.

Okay, if you think that is viable, then you can do it, but do check
the performance impact of same, because extra copy per fetched tuple
can impact performance.

Sure, I will check the performance impact for the same.

[1] - /messages/by-id/32763.1469821037@sss.pgh.pa.us

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

Thanks,
Rushabh Lathia
www.EnterpriseDB.com

--
Rushabh Lathia

Attachments:

gather_merge_v3.patch (application/x-download)
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 0a669d9..73cfe28 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -881,6 +881,9 @@ ExplainNode(PlanState *planstate, List *ancestors,
 		case T_Gather:
 			pname = sname = "Gather";
 			break;
+		case T_GatherMerge:
+			pname = sname = "Gather Merge";
+			break;
 		case T_IndexScan:
 			pname = sname = "Index Scan";
 			break;
@@ -1370,6 +1373,26 @@ ExplainNode(PlanState *planstate, List *ancestors,
 					ExplainPropertyBool("Single Copy", gather->single_copy, es);
 			}
 			break;
+		case T_GatherMerge:
+			{
+				GatherMerge *gm = (GatherMerge *) plan;
+
+				show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+				if (plan->qual)
+					show_instrumentation_count("Rows Removed by Filter", 1,
+											   planstate, es);
+				ExplainPropertyInteger("Workers Planned",
+									   gm->num_workers, es);
+				if (es->analyze)
+				{
+					int			nworkers;
+
+					nworkers = ((GatherMergeState *) planstate)->nworkers_launched;
+					ExplainPropertyInteger("Workers Launched",
+										   nworkers, es);
+				}
+			}
+			break;
 		case T_FunctionScan:
 			if (es->verbose)
 			{
diff --git a/src/backend/executor/Makefile b/src/backend/executor/Makefile
index 51edd4c..7e2f4e2 100644
--- a/src/backend/executor/Makefile
+++ b/src/backend/executor/Makefile
@@ -19,7 +19,7 @@ OBJS = execAmi.o execCurrent.o execGrouping.o execIndexing.o execJunk.o \
        nodeBitmapAnd.o nodeBitmapOr.o \
        nodeBitmapHeapscan.o nodeBitmapIndexscan.o nodeCustom.o nodeGather.o \
        nodeHash.o nodeHashjoin.o nodeIndexscan.o nodeIndexonlyscan.o \
-       nodeLimit.o nodeLockRows.o \
+       nodeLimit.o nodeLockRows.o nodeGatherMerge.o \
        nodeMaterial.o nodeMergeAppend.o nodeMergejoin.o nodeModifyTable.o \
        nodeNestloop.o nodeFunctionscan.o nodeRecursiveunion.o nodeResult.o \
        nodeSamplescan.o nodeSeqscan.o nodeSetOp.o nodeSort.o nodeUnique.o \
diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index 554244f..45b36af 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -101,6 +101,7 @@
 #include "executor/nodeModifyTable.h"
 #include "executor/nodeNestloop.h"
 #include "executor/nodeGather.h"
+#include "executor/nodeGatherMerge.h"
 #include "executor/nodeRecursiveunion.h"
 #include "executor/nodeResult.h"
 #include "executor/nodeSamplescan.h"
@@ -314,6 +315,11 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
 												  estate, eflags);
 			break;
 
+		case T_GatherMerge:
+			result = (PlanState *) ExecInitGatherMerge((GatherMerge *) node,
+													   estate, eflags);
+			break;
+
 		case T_Hash:
 			result = (PlanState *) ExecInitHash((Hash *) node,
 												estate, eflags);
@@ -515,6 +521,10 @@ ExecProcNode(PlanState *node)
 			result = ExecGather((GatherState *) node);
 			break;
 
+		case T_GatherMergeState:
+			result = ExecGatherMerge((GatherMergeState *) node);
+			break;
+
 		case T_HashState:
 			result = ExecHash((HashState *) node);
 			break;
@@ -673,6 +683,10 @@ ExecEndNode(PlanState *node)
 			ExecEndGather((GatherState *) node);
 			break;
 
+		case T_GatherMergeState:
+			ExecEndGatherMerge((GatherMergeState *) node);
+			break;
+
 		case T_IndexScanState:
 			ExecEndIndexScan((IndexScanState *) node);
 			break;
@@ -806,6 +820,9 @@ ExecShutdownNode(PlanState *node)
 		case T_GatherState:
 			ExecShutdownGather((GatherState *) node);
 			break;
+		case T_GatherMergeState:
+			ExecShutdownGatherMerge((GatherMergeState *) node);
+			break;
 		default:
 			break;
 	}
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
new file mode 100644
index 0000000..0f08649
--- /dev/null
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -0,0 +1,721 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.c
+ *	  routines to handle GatherMerge nodes.
+ *
+ * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/backend/executor/nodeGatherMerge.c
+ *
+ *-------------------------------------------------------------------------
+ */
+/* INTERFACE ROUTINES
+ *		ExecInitGatherMerge		- initialize the GatherMerge node
+ *		ExecGatherMerge			- retrieve the next tuple from the node
+ *		ExecEndGatherMerge		- shut down the GatherMerge node
+ *		ExecReScanGatherMerge	- rescan the GatherMerge node
+ */
+
+#include "postgres.h"
+
+#include "access/relscan.h"
+#include "access/xact.h"
+#include "executor/execdebug.h"
+#include "executor/execParallel.h"
+#include "executor/nodeGatherMerge.h"
+#include "executor/nodeSubplan.h"
+#include "executor/tqueue.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+#include "utils/rel.h"
+#include "lib/binaryheap.h"
+
+/*
+ * Tuple array for each worker
+ */
+typedef struct GMReaderTuple
+{
+	HeapTuple  *tuple;
+	int			readCounter;
+	int			nTuples;
+	bool		done;
+}	GMReaderTuple;
+
+/*
+ * Tuple array size. Performance testing showed that the benefit of an array
+ * size larger than 10 is not worth the additional memory consumed by the
+ * tuple array.
+ */
+#define MAX_TUPLE_STORE 10
+
+static int32 heap_compare_slots(Datum a, Datum b, void *arg);
+static TupleTableSlot *gather_merge_getnext(GatherMergeState *gm_state);
+static HeapTuple gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool force, bool *done);
+static void gather_merge_init(GatherMergeState *gm_state);
+static void ExecShutdownGatherMergeWorkers(GatherMergeState *node);
+static bool gather_merge_readnext(GatherMergeState *gm_state, int reader, bool force);
+static void form_tuple_array(GatherMergeState *gm_state, int reader);
+
+/* ----------------------------------------------------------------
+ *		ExecInitGatherMerge
+ * ----------------------------------------------------------------
+ */
+GatherMergeState *
+ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags)
+{
+	GatherMergeState *gm_state;
+	Plan	   *outerNode;
+	bool		hasoid;
+	TupleDesc	tupDesc;
+
+	/* Gather merge node doesn't have innerPlan node. */
+	Assert(innerPlan(node) == NULL);
+
+	/*
+	 * create state structure
+	 */
+	gm_state = makeNode(GatherMergeState);
+	gm_state->ps.plan = (Plan *) node;
+	gm_state->ps.state = estate;
+
+	/*
+	 * Miscellaneous initialization
+	 *
+	 * create expression context for node
+	 */
+	ExecAssignExprContext(estate, &gm_state->ps);
+
+	/*
+	 * initialize child expressions
+	 */
+	gm_state->ps.targetlist = (List *)
+		ExecInitExpr((Expr *) node->plan.targetlist,
+					 (PlanState *) gm_state);
+	gm_state->ps.qual = (List *)
+		ExecInitExpr((Expr *) node->plan.qual,
+					 (PlanState *) gm_state);
+
+	/*
+	 * tuple table initialization
+	 */
+	ExecInitResultTupleSlot(estate, &gm_state->ps);
+
+	/*
+	 * now initialize outer plan
+	 */
+	outerNode = outerPlan(node);
+	outerPlanState(gm_state) = ExecInitNode(outerNode, estate, eflags);
+
+	gm_state->ps.ps_TupFromTlist = false;
+
+	/*
+	 * Initialize result tuple type and projection info.
+	 */
+	ExecAssignResultTypeFromTL(&gm_state->ps);
+	ExecAssignProjectionInfo(&gm_state->ps, NULL);
+
+	gm_state->gm_initialized = false;
+
+	/*
+	 * initialize sort-key information
+	 */
+	if (node->numCols)
+	{
+		int			i;
+
+		gm_state->gm_nkeys = node->numCols;
+		gm_state->gm_sortkeys = palloc0(sizeof(SortSupportData) * node->numCols);
+		for (i = 0; i < node->numCols; i++)
+		{
+			SortSupport sortKey = gm_state->gm_sortkeys + i;
+
+			sortKey->ssup_cxt = CurrentMemoryContext;
+			sortKey->ssup_collation = node->collations[i];
+			sortKey->ssup_nulls_first = node->nullsFirst[i];
+			sortKey->ssup_attno = node->sortColIdx[i];
+
+			/*
+			 * We don't perform abbreviated key conversion here, for the same
+			 * reasons that it isn't used in MergeAppend
+			 */
+			sortKey->abbreviate = false;
+
+			PrepareSortSupportFromOrderingOp(node->sortOperators[i], sortKey);
+		}
+	}
+
+	/*
+	 * store the tuple descriptor into gather merge state, so we can use it
+	 * later while initializing the gather merge slots.
+	 */
+	if (!ExecContextForcesOids(&gm_state->ps, &hasoid))
+		hasoid = false;
+	tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid);
+	gm_state->tupDesc = tupDesc;
+
+	return gm_state;
+}
+
+/* ----------------------------------------------------------------
+ *		ExecGatherMerge(node)
+ *
+ *		Scans the relation via multiple workers and returns
+ *		the next qualifying tuple.
+ * ----------------------------------------------------------------
+ */
+TupleTableSlot *
+ExecGatherMerge(GatherMergeState *node)
+{
+	int			i;
+	TupleTableSlot *slot;
+	TupleTableSlot *resultSlot;
+	ExprDoneCond isDone;
+	ExprContext *econtext;
+
+	/*
+	 * Initialize the parallel context and workers on first execution. We do
+	 * this on first execution rather than during node initialization, as it
+	 * needs to allocate a large dynamic shared memory segment, so it is
+	 * better to do it only if it is really needed.
+	 */
+	if (!node->initialized)
+	{
+		EState	   *estate = node->ps.state;
+		GatherMerge *gm = (GatherMerge *) node->ps.plan;
+
+		/*
+		 * Sometimes we might have to run without parallelism; but if parallel
+		 * mode is active then we can try to fire up some workers.
+		 */
+		if (gm->num_workers > 0 && IsInParallelMode())
+		{
+			ParallelContext *pcxt;
+
+			/* Initialize the workers required to execute Gather node. */
+			if (!node->pei)
+				node->pei = ExecInitParallelPlan(node->ps.lefttree,
+												 estate,
+												 gm->num_workers);
+
+			/*
+			 * Register backend workers. We might not get as many as we
+			 * requested, or indeed any at all.
+			 */
+			pcxt = node->pei->pcxt;
+			LaunchParallelWorkers(pcxt);
+			node->nworkers_launched = pcxt->nworkers_launched;
+
+			/* Set up tuple queue readers to read the results. */
+			if (pcxt->nworkers_launched > 0)
+			{
+				node->nreaders = 0;
+				node->reader =
+					palloc(pcxt->nworkers_launched * sizeof(TupleQueueReader *));
+
+				Assert(gm->numCols);
+
+				for (i = 0; i < pcxt->nworkers_launched; ++i)
+				{
+					shm_mq_set_handle(node->pei->tqueue[i],
+									  pcxt->worker[i].bgwhandle);
+					node->reader[node->nreaders++] =
+						CreateTupleQueueReader(node->pei->tqueue[i],
+											   node->tupDesc);
+				}
+			}
+			else
+			{
+				/* No workers?	Then never mind. */
+				ExecShutdownGatherMergeWorkers(node);
+			}
+		}
+
+		/* always allow leader to participate in gather merge */
+		node->need_to_scan_locally = true;
+		node->initialized = true;
+	}
+
+	/*
+	 * Check to see if we're still projecting out tuples from a previous scan
+	 * tuple (because there is a function-returning-set in the projection
+	 * expressions).  If so, try to project another one.
+	 */
+	if (node->ps.ps_TupFromTlist)
+	{
+		resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+		if (isDone == ExprMultipleResult)
+			return resultSlot;
+		/* Done with that source tuple... */
+		node->ps.ps_TupFromTlist = false;
+	}
+
+	/*
+	 * Reset per-tuple memory context to free any expression evaluation
+	 * storage allocated in the previous tuple cycle.  Note we can't do this
+	 * until we're done projecting.
+	 */
+	econtext = node->ps.ps_ExprContext;
+	ResetExprContext(econtext);
+
+	/* Get and return the next tuple, projecting if necessary. */
+	for (;;)
+	{
+		/*
+		 * Get next tuple, either from one of our workers, or by running the
+		 * plan ourselves.
+		 */
+		slot = gather_merge_getnext(node);
+		if (TupIsNull(slot))
+			return NULL;
+
+		/*
+		 * form the result tuple using ExecProject(), and return it --- unless
+		 * the projection produces an empty set, in which case we must loop
+		 * back around for another tuple
+		 */
+		econtext->ecxt_outertuple = slot;
+		resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+
+		if (isDone != ExprEndResult)
+		{
+			node->ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
+			return resultSlot;
+		}
+	}
+
+	return slot;
+}
+
+/* ----------------------------------------------------------------
+ *		ExecEndGatherMerge
+ *
+ *		frees any storage allocated through C routines.
+ * ----------------------------------------------------------------
+ */
+void
+ExecEndGatherMerge(GatherMergeState *node)
+{
+	ExecShutdownGatherMerge(node);
+	ExecFreeExprContext(&node->ps);
+	ExecClearTuple(node->ps.ps_ResultTupleSlot);
+	ExecEndNode(outerPlanState(node));
+}
+
+/* ----------------------------------------------------------------
+ *		ExecShutdownGatherMerge
+ *
+ *		Destroy the setup for parallel workers including parallel context.
+ *		Collect all the stats after workers are stopped, else some work
+ *		done by workers won't be accounted.
+ * ----------------------------------------------------------------
+ */
+void
+ExecShutdownGatherMerge(GatherMergeState *node)
+{
+	ExecShutdownGatherMergeWorkers(node);
+
+	/* Now destroy the parallel context. */
+	if (node->pei != NULL)
+	{
+		ExecParallelCleanup(node->pei);
+		node->pei = NULL;
+	}
+}
+
+/* ----------------------------------------------------------------
+ *		ExecReScanGatherMerge
+ *
+ *		Re-initialize the workers and rescans a relation via them.
+ * ----------------------------------------------------------------
+ */
+void
+ExecReScanGatherMerge(GatherMergeState *node)
+{
+	/*
+	 * Re-initialize the parallel workers to perform rescan of relation. We
+	 * want to gracefully shutdown all the workers so that they should be able
+	 * to propagate any error or other information to master backend before
+	 * dying.  Parallel context will be reused for rescan.
+	 */
+	ExecShutdownGatherMergeWorkers(node);
+
+	node->initialized = false;
+
+	if (node->pei)
+		ExecParallelReinitialize(node->pei);
+
+	ExecReScan(node->ps.lefttree);
+}
+
+/* ----------------------------------------------------------------
+ *		ExecShutdownGatherMergeWorkers
+ *
+ *		Destroy the parallel workers.  Collect all the stats after
+ *		workers are stopped, else some work done by workers won't be
+ *		accounted.
+ * ----------------------------------------------------------------
+ */
+static void
+ExecShutdownGatherMergeWorkers(GatherMergeState *node)
+{
+	/* Shut down tuple queue readers before shutting down workers. */
+	if (node->reader != NULL)
+	{
+		int			i;
+
+		for (i = 0; i < node->nreaders; ++i)
+			if (node->reader[i])
+				DestroyTupleQueueReader(node->reader[i]);
+
+		pfree(node->reader);
+		node->reader = NULL;
+	}
+
+	/* Now shut down the workers. */
+	if (node->pei != NULL)
+		ExecParallelFinish(node->pei);
+}
+
+/*
+ * Initialize the Gather merge tuple read.
+ *
+ * Pull at least one tuple from each worker + leader and set up the heap.
+ */
+static void
+gather_merge_init(GatherMergeState *gm_state)
+{
+	int			nreaders = gm_state->nreaders;
+	bool		initialize = true;
+	int			i;
+
+	 * Allocate gm_slots for the number of workers + one more slot for the
+	 * leader. The last slot is always for the leader. The leader always
+	 * calls ExecProcNode() to read a tuple, which returns a TupleTableSlot;
+	 * that slot is later assigned directly to the leader's gm_slot, so just
+	 * initialize the leader's gm_slot with NULL. For the other slots the
+	 * code below calls ExecInitExtraTupleSlot(), which initializes the
+	 * worker slots.
+	 * slots.
+	 */
+	gm_state->gm_slots =
+		palloc((gm_state->nreaders + 1) * sizeof(TupleTableSlot *));
+	gm_state->gm_slots[gm_state->nreaders] = NULL;
+
+	/* Initialize the tuple slot and tuple array for each worker */
+	gm_state->gm_tuple = (GMReaderTuple *) palloc0(sizeof(GMReaderTuple) * (gm_state->nreaders + 1));
+	for (i = 0; i < gm_state->nreaders; i++)
+	{
+		/* Allocate the tuple array with MAX_TUPLE_STORE size */
+		gm_state->gm_tuple[i].tuple = (HeapTuple *) palloc0(sizeof(HeapTuple) * MAX_TUPLE_STORE);
+
+		/* Initialize slot for worker */
+		gm_state->gm_slots[i] = ExecInitExtraTupleSlot(gm_state->ps.state);
+		ExecSetSlotDescriptor(gm_state->gm_slots[i],
+							  gm_state->tupDesc);
+	}
+
+	/* Allocate the resources for the sort */
+	gm_state->gm_heap = binaryheap_allocate(gm_state->nreaders + 1, heap_compare_slots, gm_state);
+
+	/*
+	 * First, try to read a tuple from each worker (including the leader) in
+	 * nowait mode, so that we initialize the read from each worker as well
+	 * as the leader. After this, if any active worker was unable to produce
+	 * a tuple, re-read, and this time read the tuple in wait mode. For a
+	 * worker that produced a tuple in the earlier loop and is still active,
+	 * just try to fill its tuple array if more tuples are available.
+	 */
+reread:
+	for (i = 0; i < nreaders + 1; i++)
+	{
+		if (!gm_state->gm_tuple[i].done &&
+			(TupIsNull(gm_state->gm_slots[i]) ||
+			 gm_state->gm_slots[i]->tts_isempty))
+		{
+			if (gather_merge_readnext(gm_state, i, initialize ? false : true))
+			{
+				binaryheap_add_unordered(gm_state->gm_heap,
+										 Int32GetDatum(i));
+			}
+		}
+		else
+			form_tuple_array(gm_state, i);
+	}
+	initialize = false;
+
+	for (i = 0; i < nreaders; i++)
+		if (!gm_state->gm_tuple[i].done &&
+			(TupIsNull(gm_state->gm_slots[i]) ||
+			 gm_state->gm_slots[i]->tts_isempty))
+			goto reread;
+
+	binaryheap_build(gm_state->gm_heap);
+	gm_state->gm_initialized = true;
+}
+
+/*
+ * Clear out the tuple table slot for each gather merge worker,
+ * and return a cleared slot.
+ */
+static TupleTableSlot *
+gather_merge_clear_slots(GatherMergeState *gm_state)
+{
+	int			i;
+
+	for (i = 0; i < gm_state->nreaders; i++)
+	{
+		pfree(gm_state->gm_tuple[i].tuple);
+		gm_state->gm_slots[i] = ExecClearTuple(gm_state->gm_slots[i]);
+	}
+
+	/* Free the tuple array as we no longer need it */
+	pfree(gm_state->gm_tuple);
+	/* Free the binaryheap, which was created for sort */
+	binaryheap_free(gm_state->gm_heap);
+
+	/* return any clear slot */
+	return gm_state->gm_slots[0];
+}
+
+/*
+ * Read the next tuple for gather merge.
+ *
+ * The function fetches the next sorted tuple out of the heap.
+ */
+static TupleTableSlot *
+gather_merge_getnext(GatherMergeState *gm_state)
+{
+	int			i;
+
+	/*
+	 * First time through: pull the first tuple from each participant, and set
+	 * up the heap.
+	 */
+	if (gm_state->gm_initialized == false)
+		gather_merge_init(gm_state);
+	else
+	{
+		/*
+		 * Otherwise, pull the next tuple from whichever participant we
+		 * returned from last time, and reinsert the index into the heap,
+		 * because it might now compare differently against the existing
+		 * elements of the heap.
+		 */
+		i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+
+		if (gather_merge_readnext(gm_state, i, true))
+			binaryheap_replace_first(gm_state->gm_heap, Int32GetDatum(i));
+		else
+			(void) binaryheap_remove_first(gm_state->gm_heap);
+	}
+
+	if (binaryheap_empty(gm_state->gm_heap))
+	{
+		/* All the queues are exhausted, and so is the heap */
+		return gather_merge_clear_slots(gm_state);
+	}
+	else
+	{
+		i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+		return gm_state->gm_slots[i];
+	}
+
+	return gather_merge_clear_slots(gm_state);
+}
+
+/*
+ * Read tuples for the given reader in nowait mode, and form the tuple array.
+ */
+static void
+form_tuple_array(GatherMergeState *gm_state, int reader)
+{
+	GMReaderTuple *gm_tuple = &gm_state->gm_tuple[reader];
+	int			i;
+
+	/* Last slot is for leader and we don't build tuple array for leader */
+	if (reader == gm_state->nreaders)
+		return;
+
+	/*
+	 * If we have already read all the tuples from the tuple array, reset
+	 * the counters to zero.
+	 */
+	if (gm_tuple->nTuples == gm_tuple->readCounter)
+		gm_tuple->nTuples = gm_tuple->readCounter = 0;
+
+	/* Tuple array is already full? */
+	if (gm_tuple->nTuples == MAX_TUPLE_STORE)
+		return;
+
+	for (i = gm_tuple->nTuples; i < MAX_TUPLE_STORE; i++)
+	{
+		gm_tuple->tuple[i] = heap_copytuple(gm_readnext_tuple(gm_state,
+															  reader,
+															  false,
+														   &gm_tuple->done));
+		if (!HeapTupleIsValid(gm_tuple->tuple[i]))
+			break;
+		gm_tuple->nTuples++;
+	}
+}
+
+/*
+ * Attempt to read a tuple for the given reader and store it into the
+ * reader's tuple slot.
+ *
+ * If the worker's tuple array contains any tuple then just read a tuple
+ * from the tuple array. Otherwise read a tuple from the queue and also
+ * attempt to form the tuple array.
+ *
+ * When force is true, the function reads the tuple in wait mode. For gather
+ * merge we need to refill the slot from which we returned the previous
+ * tuple, so this requires the tuple to be read in wait mode. During the
+ * initialization phase we first try to read the tuple in no-wait mode, as
+ * we want to initialize all the readers. Refer to gather_merge_init() for
+ * more details.
+ *
+ * Returns true if a tuple was found for the reader, otherwise false.
+ */
+static bool
+gather_merge_readnext(GatherMergeState *gm_state, int reader, bool force)
+{
+	HeapTuple	tup = NULL;
+
+	/* Is this the slot reserved for the leader? */
+	if (gm_state->nreaders == reader)
+	{
+		if (gm_state->need_to_scan_locally)
+		{
+			PlanState  *outerPlan = outerPlanState(gm_state);
+			TupleTableSlot *outerTupleSlot;
+
+			outerTupleSlot = ExecProcNode(outerPlan);
+
+			if (!TupIsNull(outerTupleSlot))
+			{
+				gm_state->gm_slots[reader] = outerTupleSlot;
+				return true;
+			}
+			gm_state->gm_tuple[reader].done = true;
+			gm_state->need_to_scan_locally = false;
+		}
+		return false;
+	}
+	/* Does tuple array have any available tuples? */
+	else if (gm_state->gm_tuple[reader].nTuples >
+			 gm_state->gm_tuple[reader].readCounter)
+	{
+		GMReaderTuple *gm_tuple = &gm_state->gm_tuple[reader];
+
+		tup = gm_tuple->tuple[gm_tuple->readCounter++];
+	}
+	/* reader exhausted? */
+	else if (gm_state->gm_tuple[reader].done)
+	{
+		DestroyTupleQueueReader(gm_state->reader[reader]);
+		gm_state->reader[reader] = NULL;
+		return false;
+	}
+	else
+	{
+		tup = heap_copytuple(gm_readnext_tuple(gm_state,
+											   reader,
+											   force,
+										  &gm_state->gm_tuple[reader].done));
+
+		/*
+		 * Try to read more tuples in nowait mode and store them into the
+		 * tuple array.
+		 */
+		if (HeapTupleIsValid(tup))
+			form_tuple_array(gm_state, reader);
+		else
+			return false;
+	}
+
+	Assert(HeapTupleIsValid(tup));
+
+	/* Build the TupleTableSlot for the given tuple */
+	ExecStoreTuple(tup,			/* tuple to store */
+				   gm_state->gm_slots[reader],	/* slot in which to store the
+												 * tuple */
+				   InvalidBuffer,		/* buffer associated with this tuple */
+				   true);		/* pfree this pointer if not from heap */
+
+	return true;
+}
+
+/*
+ * Attempt to read a tuple from given reader.
+ */
+static HeapTuple
+gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool force, bool *done)
+{
+	TupleQueueReader *reader;
+	HeapTuple	tup = NULL;
+	MemoryContext oldContext;
+	MemoryContext tupleContext = gm_state->ps.ps_ExprContext->ecxt_per_tuple_memory;
+
+	if (done != NULL)
+		*done = false;
+
+	/* Check for async events, particularly messages from workers. */
+	CHECK_FOR_INTERRUPTS();
+
+	/* Attempt to read a tuple. */
+	reader = gm_state->reader[nreader];
+	/* Run TupleQueueReaders in per-tuple context */
+	oldContext = MemoryContextSwitchTo(tupleContext);
+	tup = TupleQueueReaderNext(reader, force ? false : true, done);
+	MemoryContextSwitchTo(oldContext);
+
+	return tup;
+}
+
+/*
+ * We have one slot for each item in the heap array.  We use SlotNumber
+ * to store slot indexes.  This doesn't actually provide any formal
+ * type-safety, but it makes the code more self-documenting.
+ */
+typedef int32 SlotNumber;
+
+/*
+ * Compare the tuples in the two given slots.
+ */
+static int32
+heap_compare_slots(Datum a, Datum b, void *arg)
+{
+	GatherMergeState *node = (GatherMergeState *) arg;
+	SlotNumber	slot1 = DatumGetInt32(a);
+	SlotNumber	slot2 = DatumGetInt32(b);
+
+	TupleTableSlot *s1 = node->gm_slots[slot1];
+	TupleTableSlot *s2 = node->gm_slots[slot2];
+	int			nkey;
+
+	Assert(!TupIsNull(s1));
+	Assert(!TupIsNull(s2));
+
+	for (nkey = 0; nkey < node->gm_nkeys; nkey++)
+	{
+		SortSupport sortKey = node->gm_sortkeys + nkey;
+		AttrNumber	attno = sortKey->ssup_attno;
+		Datum		datum1,
+					datum2;
+		bool		isNull1,
+					isNull2;
+		int			compare;
+
+		datum1 = slot_getattr(s1, attno, &isNull1);
+		datum2 = slot_getattr(s2, attno, &isNull2);
+
+		compare = ApplySortComparator(datum1, isNull1,
+									  datum2, isNull2,
+									  sortKey);
+		if (compare != 0)
+			return -compare;
+	}
+	return 0;
+}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 71714bc..8b92c1a 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -341,6 +341,31 @@ _copyGather(const Gather *from)
 	return newnode;
 }
 
+/*
+ * _copyGatherMerge
+ */
+static GatherMerge *
+_copyGatherMerge(const GatherMerge *from)
+{
+	GatherMerge	   *newnode = makeNode(GatherMerge);
+
+	/*
+	 * copy node superclass fields
+	 */
+	CopyPlanFields((const Plan *) from, (Plan *) newnode);
+
+	/*
+	 * copy remainder of node
+	 */
+	COPY_SCALAR_FIELD(num_workers);
+	COPY_SCALAR_FIELD(numCols);
+	COPY_POINTER_FIELD(sortColIdx, from->numCols * sizeof(AttrNumber));
+	COPY_POINTER_FIELD(sortOperators, from->numCols * sizeof(Oid));
+	COPY_POINTER_FIELD(collations, from->numCols * sizeof(Oid));
+	COPY_POINTER_FIELD(nullsFirst, from->numCols * sizeof(bool));
+
+	return newnode;
+}
 
 /*
  * CopyScanFields
@@ -4343,6 +4368,9 @@ copyObject(const void *from)
 		case T_Gather:
 			retval = _copyGather(from);
 			break;
+		case T_GatherMerge:
+			retval = _copyGatherMerge(from);
+			break;
 		case T_SeqScan:
 			retval = _copySeqScan(from);
 			break;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index ae86954..5dea0f7 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -447,6 +447,35 @@ _outGather(StringInfo str, const Gather *node)
 }
 
 static void
+_outGatherMerge(StringInfo str, const GatherMerge *node)
+{
+	int		i;
+
+	WRITE_NODE_TYPE("GATHERMERGE");
+
+	_outPlanInfo(str, (const Plan *) node);
+
+	WRITE_INT_FIELD(num_workers);
+	WRITE_INT_FIELD(numCols);
+
+	appendStringInfoString(str, " :sortColIdx");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %d", node->sortColIdx[i]);
+
+	appendStringInfoString(str, " :sortOperators");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %u", node->sortOperators[i]);
+
+	appendStringInfoString(str, " :collations");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %u", node->collations[i]);
+
+	appendStringInfoString(str, " :nullsFirst");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %s", booltostr(node->nullsFirst[i]));
+}
+
+static void
 _outScan(StringInfo str, const Scan *node)
 {
 	WRITE_NODE_TYPE("SCAN");
@@ -1964,6 +1993,18 @@ _outLimitPath(StringInfo str, const LimitPath *node)
 }
 
 static void
+_outGatherMergePath(StringInfo str, const GatherMergePath *node)
+{
+	WRITE_NODE_TYPE("GATHERMERGEPATH");
+
+	_outPathInfo(str, (const Path *) node);
+
+	WRITE_NODE_FIELD(subpath);
+	WRITE_INT_FIELD(num_workers);
+	WRITE_BOOL_FIELD(single_copy);
+}
+
+static void
 _outNestPath(StringInfo str, const NestPath *node)
 {
 	WRITE_NODE_TYPE("NESTPATH");
@@ -3322,6 +3363,9 @@ outNode(StringInfo str, const void *obj)
 			case T_Gather:
 				_outGather(str, obj);
 				break;
+			case T_GatherMerge:
+				_outGatherMerge(str, obj);
+				break;
 			case T_Scan:
 				_outScan(str, obj);
 				break;
@@ -3649,6 +3693,9 @@ outNode(StringInfo str, const void *obj)
 			case T_LimitPath:
 				_outLimitPath(str, obj);
 				break;
+			case T_GatherMergePath:
+				_outGatherMergePath(str, obj);
+				break;
 			case T_NestPath:
 				_outNestPath(str, obj);
 				break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 917e6c8..77a452e 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -2075,6 +2075,26 @@ _readGather(void)
 }
 
 /*
+ * _readGatherMerge
+ */
+static GatherMerge *
+_readGatherMerge(void)
+{
+	READ_LOCALS(GatherMerge);
+
+	ReadCommonPlan(&local_node->plan);
+
+	READ_INT_FIELD(num_workers);
+	READ_INT_FIELD(numCols);
+	READ_ATTRNUMBER_ARRAY(sortColIdx, local_node->numCols);
+	READ_OID_ARRAY(sortOperators, local_node->numCols);
+	READ_OID_ARRAY(collations, local_node->numCols);
+	READ_BOOL_ARRAY(nullsFirst, local_node->numCols);
+
+	READ_DONE();
+}
+
+/*
  * _readHash
  */
 static Hash *
@@ -2477,6 +2497,8 @@ parseNodeString(void)
 		return_value = _readUnique();
 	else if (MATCH("GATHER", 6))
 		return_value = _readGather();
+	else if (MATCH("GATHERMERGE", 11))
+		return_value = _readGatherMerge();
 	else if (MATCH("HASH", 4))
 		return_value = _readHash();
 	else if (MATCH("SETOP", 5))
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index 2a49639..5dbb83e 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -126,6 +126,7 @@ bool		enable_nestloop = true;
 bool		enable_material = true;
 bool		enable_mergejoin = true;
 bool		enable_hashjoin = true;
+bool		enable_gathermerge = true;
 
 typedef struct
 {
@@ -391,6 +392,70 @@ cost_gather(GatherPath *path, PlannerInfo *root,
 }
 
 /*
+ * cost_gather_merge
+ *	  Determines and returns the cost of a gather merge path.
+ *
+ * GatherMerge merges several pre-sorted input streams, using a heap that at
+ * any given instant holds the next tuple from each stream. If there are N
+ * streams, we need about N*log2(N) tuple comparisons to construct the heap at
+ * startup, and then for each output tuple, about log2(N) comparisons to delete
+ * the top heap entry and another log2(N) comparisons to insert its successor
+ * from the same stream.
+ *
+ * The heap is never spilled to disk, since we assume N is not very large. So
+ * this is much simple then cost_sort.
+ */
+void
+cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+				  RelOptInfo *rel, ParamPathInfo *param_info,
+				  Cost input_startup_cost, Cost input_total_cost)
+{
+	Cost		startup_cost = 0;
+	Cost		run_cost = 0;
+	Cost		comparison_cost;
+	double		N;
+	double		logN;
+
+	/* Mark the path with the correct row estimate */
+	if (param_info)
+		path->path.rows = param_info->ppi_rows;
+	else
+		path->path.rows = path->subpath->rows;
+
+	if (!enable_gathermerge)
+		startup_cost += disable_cost;
+
+	/*
+	 * Avoid log(0)...
+	 */
+	N = (path->num_workers < 2) ? 2.0 : (double) path->num_workers;
+	logN = LOG2(N);
+
+	/* Assumed cost per tuple comparison */
+	comparison_cost = 2.0 * cpu_operator_cost;
+
+	/* Heap creation cost */
+	startup_cost += comparison_cost * N * logN;
+
+	/* Per-tuple heap maintenance cost */
+	run_cost += path->path.rows * comparison_cost * 2.0 * logN;
+
+	/* small cost for heap management, like cost_merge_append */
+	run_cost += cpu_operator_cost * path->path.rows;
+
+	/*
+	 * Parallel setup and communication cost.  Gather Merge has to read a
+	 * tuple from each worker in wait mode before it can return anything,
+	 * so charge some extra cost for that.
+	 */
+	startup_cost += parallel_setup_cost;
+	run_cost += parallel_tuple_cost * path->path.rows;
+
+	path->path.startup_cost = startup_cost + input_startup_cost;
+	path->path.total_cost = (startup_cost + run_cost + input_total_cost);
+}
+
+/*
  * cost_index
  *	  Determines and returns the cost of scanning a relation using an index.
  *
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index ad49674..d4fea89 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -270,6 +270,11 @@ static ModifyTable *make_modifytable(PlannerInfo *root,
 				 List *resultRelations, List *subplans,
 				 List *withCheckOptionLists, List *returningLists,
 				 List *rowMarks, OnConflictExpr *onconflict, int epqParam);
+static GatherMerge *create_gather_merge_plan(PlannerInfo *root,
+											 GatherMergePath *best_path);
+static GatherMerge *make_gather_merge(List *qptlist, List *qpqual,
+									  int nworkers, bool single_copy,
+									  Plan *subplan);
 
 
 /*
@@ -463,6 +468,10 @@ create_plan_recurse(PlannerInfo *root, Path *best_path, int flags)
 											  (LimitPath *) best_path,
 											  flags);
 			break;
+		case T_GatherMerge:
+			plan = (Plan *) create_gather_merge_plan(root,
+												(GatherMergePath *) best_path);
+			break;
 		default:
 			elog(ERROR, "unrecognized node type: %d",
 				 (int) best_path->pathtype);
@@ -2246,6 +2255,90 @@ create_limit_plan(PlannerInfo *root, LimitPath *best_path, int flags)
 	return plan;
 }
 
+/*
+ * create_gather_merge_plan
+ *
+ *	  Create a Gather merge plan for 'best_path' and (recursively)
+ *	  plans for its subpaths.
+ */
+static GatherMerge *
+create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
+{
+	GatherMerge *gm_plan;
+	Plan	   *subplan;
+	List	   *pathkeys = best_path->path.pathkeys;
+	int			numsortkeys;
+	AttrNumber *sortColIdx;
+	Oid		   *sortOperators;
+	Oid		   *collations;
+	bool	   *nullsFirst;
+
+	subplan = create_plan_recurse(root, best_path->subpath, CP_EXACT_TLIST);
+
+	gm_plan = make_gather_merge(subplan->targetlist,
+								NIL,
+								best_path->num_workers,
+								best_path->single_copy,
+								subplan);
+
+	copy_generic_path_info(&gm_plan->plan, &best_path->path);
+
+	if (pathkeys)
+	{
+		/* Compute sort column info, and adjust GatherMerge tlist as needed */
+		(void) prepare_sort_from_pathkeys(&gm_plan->plan, pathkeys,
+										  best_path->path.parent->relids,
+										  NULL,
+										  true,
+										  &gm_plan->numCols,
+										  &gm_plan->sortColIdx,
+										  &gm_plan->sortOperators,
+										  &gm_plan->collations,
+										  &gm_plan->nullsFirst);
+
+
+		/* Compute sort column info, and adjust subplan's tlist as needed */
+		subplan = prepare_sort_from_pathkeys(subplan, pathkeys,
+											 best_path->subpath->parent->relids,
+											 gm_plan->sortColIdx,
+											 false,
+											 &numsortkeys,
+											 &sortColIdx,
+											 &sortOperators,
+											 &collations,
+											 &nullsFirst);
+
+		/*
+		 * Check that we got the same sort key information.  We just Assert
+		 * that the sortops match, since those depend only on the pathkeys;
+		 * but it seems like a good idea to check the sort column numbers
+		 * explicitly, to ensure the tlists really do match up.
+		 */
+		Assert(numsortkeys == gm_plan->numCols);
+		if (memcmp(sortColIdx, gm_plan->sortColIdx,
+				   numsortkeys * sizeof(AttrNumber)) != 0)
+			elog(ERROR, "GatherMerge child's targetlist doesn't match GatherMerge");
+		Assert(memcmp(sortOperators, gm_plan->sortOperators,
+					  numsortkeys * sizeof(Oid)) == 0);
+		Assert(memcmp(collations, gm_plan->collations,
+					  numsortkeys * sizeof(Oid)) == 0);
+		Assert(memcmp(nullsFirst, gm_plan->nullsFirst,
+					  numsortkeys * sizeof(bool)) == 0);
+
+		/* Now, insert a Sort node if subplan isn't sufficiently ordered */
+		if (!pathkeys_contained_in(pathkeys, best_path->subpath->pathkeys))
+			subplan = (Plan *) make_sort(subplan, numsortkeys,
+										 sortColIdx, sortOperators,
+										 collations, nullsFirst);
+
+		gm_plan->plan.lefttree = subplan;
+	}
+
+	/* use parallel mode for parallel plans. */
+	root->glob->parallelModeNeeded = true;
+
+	return gm_plan;
+}
 
 /*****************************************************************************
  *
@@ -5909,6 +6002,26 @@ make_gather(List *qptlist,
 	return node;
 }
 
+static GatherMerge *
+make_gather_merge(List *qptlist,
+				  List *qpqual,
+				  int nworkers,
+				  bool single_copy,
+				  Plan *subplan)
+{
+	GatherMerge	*node = makeNode(GatherMerge);
+	Plan		*plan = &node->plan;
+
+	/* cost should be inserted by caller */
+	plan->targetlist = qptlist;
+	plan->qual = qpqual;
+	plan->lefttree = subplan;
+	plan->righttree = NULL;
+	node->num_workers = nworkers;
+
+	return node;
+}
+
 /*
  * distinctList is a list of SortGroupClauses, identifying the targetlist
  * items that should be considered by the SetOp filter.  The input path must
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 644b8b6..0325c53 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -3725,14 +3725,59 @@ create_grouping_paths(PlannerInfo *root,
 
 		/*
 		 * Now generate a complete GroupAgg Path atop of the cheapest partial
-		 * path. We need only bother with the cheapest path here, as the
-		 * output of Gather is never sorted.
+		 * path. We generate a Gather path based on the cheapest partial path,
+		 * and a GatherMerge path for each partial path that is properly sorted.
 		 */
 		if (grouped_rel->partial_pathlist)
 		{
 			Path	   *path = (Path *) linitial(grouped_rel->partial_pathlist);
 			double		total_groups = path->rows * path->parallel_workers;
 
+			/*
+			 * GatherMerge output is always sorted, so if there is a GROUP BY
+			 * clause, try to generate a GatherMerge path for each partial path.
+			 */
+			if (parse->groupClause)
+			{
+				foreach(lc, grouped_rel->partial_pathlist)
+				{
+					Path	   *gmpath = (Path *) lfirst(lc);
+
+					if (!pathkeys_contained_in(root->group_pathkeys, gmpath->pathkeys))
+						continue;
+
+					/* create gather merge path */
+					gmpath = (Path *) create_gather_merge_path(root,
+															   grouped_rel,
+															   gmpath,
+															   NULL,
+															   root->group_pathkeys,
+															   NULL);
+
+					if (parse->hasAggs)
+						add_path(grouped_rel, (Path *)
+								 create_agg_path(root,
+												 grouped_rel,
+												 gmpath,
+												 target,
+												 parse->groupClause ? AGG_SORTED : AGG_PLAIN,
+												 AGGSPLIT_FINAL_DESERIAL,
+												 parse->groupClause,
+												 (List *) parse->havingQual,
+												 &agg_final_costs,
+												 dNumGroups));
+					else
+						add_path(grouped_rel, (Path *)
+								create_group_path(root,
+												  grouped_rel,
+												  gmpath,
+												  target,
+												  parse->groupClause,
+												  (List *) parse->havingQual,
+												  dNumGroups));
+				}
+			}
+
 			path = (Path *) create_gather_path(root,
 											   grouped_rel,
 											   path,
@@ -3870,6 +3915,12 @@ create_grouping_paths(PlannerInfo *root,
 	/* Now choose the best path(s) */
 	set_cheapest(grouped_rel);
 
+	/*
+	 * The partial pathlist generated for the grouped relation is of no
+	 * further use, so just reset it to NIL.
+	 */
+	grouped_rel->partial_pathlist = NIL;
+
 	return grouped_rel;
 }
 
@@ -4166,6 +4217,36 @@ create_distinct_paths(PlannerInfo *root,
 			}
 		}
 
+		/*
+		 * Generate GatherMerge path for each partial path.
+		 */
+		foreach(lc, input_rel->partial_pathlist)
+		{
+			Path	   *path = (Path *) lfirst(lc);
+
+			if (!pathkeys_contained_in(needed_pathkeys, path->pathkeys))
+			{
+				path = (Path *) create_sort_path(root, distinct_rel,
+												 path,
+												 needed_pathkeys,
+												 -1.0);
+			}
+
+			/* create gather merge path */
+			path = (Path *) create_gather_merge_path(root,
+													 distinct_rel,
+													 path,
+													 NULL,
+													 needed_pathkeys,
+													 NULL);
+			add_path(distinct_rel, (Path *)
+					 create_upper_unique_path(root,
+											  distinct_rel,
+											  path,
+											  list_length(root->distinct_pathkeys),
+											  numDistinctRows));
+		}
+
 		/* For explicit-sort case, always use the more rigorous clause */
 		if (list_length(root->distinct_pathkeys) <
 			list_length(root->sort_pathkeys))
@@ -4310,6 +4391,39 @@ create_ordered_paths(PlannerInfo *root,
 	ordered_rel->useridiscurrent = input_rel->useridiscurrent;
 	ordered_rel->fdwroutine = input_rel->fdwroutine;
 
+	foreach(lc, input_rel->partial_pathlist)
+	{
+		Path	   *path = (Path *) lfirst(lc);
+		bool		is_sorted;
+
+		is_sorted = pathkeys_contained_in(root->sort_pathkeys,
+										  path->pathkeys);
+		if (!is_sorted)
+		{
+			/* An explicit sort here can take advantage of LIMIT */
+			path = (Path *) create_sort_path(root,
+											 ordered_rel,
+											 path,
+											 root->sort_pathkeys,
+											 limit_tuples);
+		}
+
+		/* create gather merge path */
+		path = (Path *) create_gather_merge_path(root,
+												 ordered_rel,
+												 path,
+												 target,
+												 root->sort_pathkeys,
+												 NULL);
+
+		/* Add projection step if needed */
+		if (path->pathtarget != target)
+			path = apply_projection_to_path(root, ordered_rel,
+											path, target);
+
+		add_path(ordered_rel, path);
+	}
+
 	foreach(lc, input_rel->pathlist)
 	{
 		Path	   *path = (Path *) lfirst(lc);
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index d10a983..d14db7d 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -605,6 +605,7 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 			break;
 
 		case T_Gather:
+		case T_GatherMerge:
 			set_upper_references(root, plan, rtoffset);
 			break;
 
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 263ba45..760f519 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2682,6 +2682,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		case T_Sort:
 		case T_Unique:
 		case T_Gather:
+		case T_GatherMerge:
 		case T_SetOp:
 		case T_Group:
 			break;
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index abb7507..822fca2 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1630,6 +1630,66 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
 }
 
 /*
+ * create_gather_merge_path
+ *
+ *	  Creates a path corresponding to a gather merge scan, returning
+ *	  the pathnode.
+ */
+GatherMergePath *
+create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
+						 PathTarget *target, List *pathkeys,
+						 Relids required_outer)
+{
+	GatherMergePath *pathnode = makeNode(GatherMergePath);
+	Cost			 input_startup_cost = 0;
+	Cost			 input_total_cost = 0;
+
+	Assert(subpath->parallel_safe);
+	Assert(pathkeys);
+
+	pathnode->path.pathtype = T_GatherMerge;
+	pathnode->path.parent = rel;
+	pathnode->path.param_info = get_baserel_parampathinfo(root, rel,
+														  required_outer);
+	pathnode->path.parallel_aware = false;
+
+	pathnode->subpath = subpath;
+	pathnode->num_workers = subpath->parallel_workers;
+	pathnode->path.pathkeys = pathkeys;
+	pathnode->path.pathtarget = target ? target : rel->reltarget;
+	pathnode->path.rows += subpath->rows;
+
+	if (pathkeys_contained_in(pathkeys, subpath->pathkeys))
+	{
+		/* Subpath is adequately ordered, we won't need to sort it */
+		input_startup_cost += subpath->startup_cost;
+		input_total_cost += subpath->total_cost;
+	}
+	else
+	{
+		/* We'll need to insert a Sort node, so include cost for that */
+		Path		sort_path;		/* dummy for result of cost_sort */
+
+		cost_sort(&sort_path,
+				  root,
+				  pathkeys,
+				  subpath->total_cost,
+				  subpath->rows,
+				  subpath->pathtarget->width,
+				  0.0,
+				  work_mem,
+				  -1);
+		input_startup_cost += sort_path.startup_cost;
+		input_total_cost += sort_path.total_cost;
+	}
+
+	cost_gather_merge(pathnode, root, rel, pathnode->path.param_info,
+					  input_startup_cost, input_total_cost);
+
+	return pathnode;
+}
+
+/*
  * translate_sub_tlist - get subquery column numbers represented by tlist
  *
  * The given targetlist usually contains only Vars referencing the given relid.
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 65660c1..f605284 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -894,6 +894,15 @@ static struct config_bool ConfigureNamesBool[] =
 		true,
 		NULL, NULL, NULL
 	},
+	{
+		{"enable_gathermerge", PGC_USERSET, QUERY_TUNING_METHOD,
+			gettext_noop("Enables the planner's use of gather merge plans."),
+			NULL
+		},
+		&enable_gathermerge,
+		true,
+		NULL, NULL, NULL
+	},
 
 	{
 		{"geqo", PGC_USERSET, QUERY_TUNING_GEQO,
diff --git a/src/include/executor/nodeGatherMerge.h b/src/include/executor/nodeGatherMerge.h
new file mode 100644
index 0000000..58dcebf
--- /dev/null
+++ b/src/include/executor/nodeGatherMerge.h
@@ -0,0 +1,27 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.h
+ *		prototypes for nodeGatherMerge.c
+ *
+ *
+ * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/executor/nodeGatherMerge.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef NODEGATHERMERGE_H
+#define NODEGATHERMERGE_H
+
+#include "nodes/execnodes.h"
+
+extern GatherMergeState *ExecInitGatherMerge(GatherMerge * node,
+					EState *estate,
+					int eflags);
+extern TupleTableSlot *ExecGatherMerge(GatherMergeState * node);
+extern void ExecEndGatherMerge(GatherMergeState * node);
+extern void ExecReScanGatherMerge(GatherMergeState * node);
+extern void ExecShutdownGatherMerge(GatherMergeState * node);
+
+#endif   /* NODEGATHERMERGE_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index f6f73f3..279f468 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -1969,6 +1969,33 @@ typedef struct GatherState
 } GatherState;
 
 /* ----------------
+ * GatherMergeState information
+ *
+ *		Gather Merge nodes launch 1 or more parallel workers, run a sorted
+ *		subplan in those workers, and merge the results preserving that order.
+ * ----------------
+ */
+struct GMReaderTuple;
+
+typedef struct GatherMergeState
+{
+	PlanState	ps;				/* its first field is NodeTag */
+	bool		initialized;
+	struct ParallelExecutorInfo *pei;
+	int			nreaders;
+	int			nworkers_launched;
+	struct TupleQueueReader **reader;
+	TupleDesc	tupDesc;
+	TupleTableSlot **gm_slots;
+	struct binaryheap *gm_heap; /* binary heap of slot indices */
+	bool		gm_initialized; /* gather merge initialized? */
+	bool		need_to_scan_locally;
+	int			gm_nkeys;
+	SortSupport gm_sortkeys;	/* array of length gm_nkeys */
+	struct GMReaderTuple *gm_tuple;	/* array of length nreaders + leader */
+} GatherMergeState;
+
+/* ----------------
  *	 HashState information
  * ----------------
  */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 88297bb..edfb917 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -75,6 +75,7 @@ typedef enum NodeTag
 	T_WindowAgg,
 	T_Unique,
 	T_Gather,
+	T_GatherMerge,
 	T_Hash,
 	T_SetOp,
 	T_LockRows,
@@ -123,6 +124,7 @@ typedef enum NodeTag
 	T_WindowAggState,
 	T_UniqueState,
 	T_GatherState,
+	T_GatherMergeState,
 	T_HashState,
 	T_SetOpState,
 	T_LockRowsState,
@@ -244,6 +246,7 @@ typedef enum NodeTag
 	T_MaterialPath,
 	T_UniquePath,
 	T_GatherPath,
+	T_GatherMergePath,
 	T_ProjectionPath,
 	T_SortPath,
 	T_GroupPath,
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index e2fbc7d..ec319bf 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -765,6 +765,22 @@ typedef struct Gather
 	bool		invisible;		/* suppress EXPLAIN display (for testing)? */
 } Gather;
 
+/* ------------
+ *		gather merge node
+ * ------------
+ */
+typedef struct GatherMerge
+{
+	Plan		plan;
+	int			num_workers;
+	/* remaining fields are just like the sort-key info in struct Sort */
+	int			numCols;		/* number of sort-key columns */
+	AttrNumber *sortColIdx;		/* their indexes in the target list */
+	Oid		   *sortOperators;	/* OIDs of operators to sort them by */
+	Oid		   *collations;		/* OIDs of collations */
+	bool	   *nullsFirst;		/* NULLS FIRST/LAST directions */
+} GatherMerge;
+
 /* ----------------
  *		hash build node
  *
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index 3a1255a..dfaca79 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -1194,6 +1194,19 @@ typedef struct GatherPath
 } GatherPath;
 
 /*
+ * GatherMergePath runs several copies of a plan in parallel and merges
+ * their pre-sorted results, preserving the subpath's sort order.
+ */
+typedef struct GatherMergePath
+{
+	Path		path;
+	Path	   *subpath;		/* path for each worker */
+	int			num_workers;	/* number of workers sought to help */
+	bool		single_copy;	/* path must not be executed >1x */
+} GatherMergePath;
+
+
+/*
  * All join-type paths share these fields.
  */
 
diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h
index 2a4df2f..cd48cc4 100644
--- a/src/include/optimizer/cost.h
+++ b/src/include/optimizer/cost.h
@@ -66,6 +66,7 @@ extern bool enable_nestloop;
 extern bool enable_material;
 extern bool enable_mergejoin;
 extern bool enable_hashjoin;
+extern bool enable_gathermerge;
 extern int	constraint_exclusion;
 
 extern double clamp_row_est(double nrows);
@@ -198,5 +199,8 @@ extern Selectivity clause_selectivity(PlannerInfo *root,
 				   int varRelid,
 				   JoinType jointype,
 				   SpecialJoinInfo *sjinfo);
+extern void cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+							  RelOptInfo *rel, ParamPathInfo *param_info,
+							  Cost input_startup_cost, Cost input_total_cost);
 
 #endif   /* COST_H */
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index 71d9154..3dbe9fc 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -267,5 +267,10 @@ extern ParamPathInfo *get_joinrel_parampathinfo(PlannerInfo *root,
 						  List **restrict_clauses);
 extern ParamPathInfo *get_appendrel_parampathinfo(RelOptInfo *appendrel,
 							Relids required_outer);
+extern GatherMergePath *create_gather_merge_path(PlannerInfo *root,
+												 RelOptInfo *rel, Path *subpath,
+												 PathTarget *target,
+												 List *pathkeys,
+												 Relids required_outer);
 
 #endif   /* PATHNODE_H */
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
index f06cfa4..5c547e2 100644
--- a/src/test/regress/expected/rangefuncs.out
+++ b/src/test/regress/expected/rangefuncs.out
@@ -2,6 +2,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
          name         | setting 
 ----------------------+---------
  enable_bitmapscan    | on
+ enable_gathermerge   | on
  enable_hashagg       | on
  enable_hashjoin      | on
  enable_indexonlyscan | on
@@ -12,7 +13,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
  enable_seqscan       | on
  enable_sort          | on
  enable_tidscan       | on
-(11 rows)
+(12 rows)
 
 CREATE TABLE foo2(fooid int, f2 int);
 INSERT INTO foo2 VALUES(1, 11);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 6c6d519..a6c4a5f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -770,6 +770,8 @@ GV
 Gather
 GatherPath
 GatherState
+GatherMerge
+GatherMergeState
 Gene
 GenericCosts
 GenericExprState
#10Thomas Munro
thomas.munro@enterprisedb.com
In reply to: Rushabh Lathia (#9)
Re: Gather Merge

On Thu, Oct 27, 2016 at 10:50 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

Please find attached latest patch which fix the review point as well as
additional clean-up.

I've signed up to review this patch and I'm planning to do some
testing. Here's some initial feedback after a quick read-through:

+ if (gather_merge_readnext(gm_state, i, initialize ? false : true))

Clunky ternary operator... how about "!initialize".

+/*
+ * Function clear out a slot in the tuple table for each gather merge
+ * slots and returns the clear clear slot.
+ */

Maybe better like this? "_Clear_ out a slot in the tuple table for
each gather merge _slot_ and _return_ the _cleared_ slot."

+ /* Free tuple array as we no more need it */

"... as we don't need it any more"

+/*
+ * Read the next tuple for gather merge.
+ *
+ * Function fetch the sorted tuple out of the heap.
+ */

"_Fetch_ the sorted tuple out of the heap."

+ * Otherwise, pull the next tuple from whichever participate we
+ * returned from last time, and reinsert the index into the heap,
+ * because it might now compare differently against the existing

s/participate/participant/

+ * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California

Shouldn't this say just "(c) 2016, PostgreSQL Global Development
Group"? Are we supposed to be blaming the University of California
for new files?

+#include "executor/tqueue.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+#include "utils/rel.h"
+#include "lib/binaryheap.h"

Not correctly sorted.

+ /*
+ * store the tuple descriptor into gather merge state, so we can use it
+ * later while initilizing the gather merge slots.
+ */

s/initilizing/initializing/

+/* ----------------------------------------------------------------
+ * ExecEndGatherMerge
+ *
+ * frees any storage allocated through C routines.
+ * ----------------------------------------------------------------

The convention in Postgres code seems to be to use a form like "Free
any storage ..." in function documentation. Not sure if that's an
imperative, an infinitive, or if the word "we" is omitted since
English is so fuzzy like that, but it's inconsistent with other
documentation to use "frees" here. Oh, I see that exact wording is in
several other files. I guess I'll just leave this as a complaint
about all those files then :-)

+ * Pull atleast single tuple from each worker + leader and set up the heap.

s/atleast single/at least a single/

+ * Read the tuple for given reader into nowait mode, and form the tuple array.

s/ into / in /

+ * Function attempt to read tuple for the given reader and store it into reader

s/Function attempt /Attempt /

+ * Function returns true if found tuple for the reader, otherwise returns

s/Function returns /Return /

+ * First try to read tuple for each worker (including leader) into nowait
+ * mode, so that we initialize read from each worker as well as leader.

I wonder if it would be good to standardise on the terminology we use
when we mean workers AND the leader. In my Parallel Shared Hash work,
I've been saying 'participants' if I mean = workers + leader. What do
you think?

+ * After this if all active worker unable to produce the tuple, then
+ * re-read and this time read the tuple into wait mode. For the worker,
+ * which was able to produced single tuple in the earlier loop and still
+ * active, just try fill the tuple array if more tuples available.
+ */

How about this? "After this, if all active workers are unable to
produce a tuple, then re-read and this time use wait mode. For workers
that were able to produce a tuple in the earlier loop and are still
active, just try to fill the tuple array if more tuples are
available."

+ * The heap is never spilled to disk, since we assume N is not very large. So
+ * this is much simple then cost_sort.

s/much simple then/much simpler than/

+ /*
+ * Avoid log(0)...
+ */
+ N = (path->num_workers < 2) ? 2.0 : (double) path->num_workers;
+ logN = LOG2(N);
...
+ /* Per-tuple heap maintenance cost */
+ run_cost += path->path.rows * comparison_cost * 2.0 * logN;

Why multiply by two? The comment above this code says "about log2(N)
comparisons to delete the top heap entry and another log2(N)
comparisons to insert its successor". In fact gather_merge_getnext
calls binaryheap_replace_first, which replaces the top element without
any comparisons at all and then performs a sift-down in log2(N)
comparisons to find its new position. There is no per-tuple "delete"
involved. We "replace" the top element with the value it already had,
just to trigger the sift-down, because we know that our comparator
function might have a new opinion of the sort order of this element.
Very clever! The comment and the 2.0 factor in cost_gather_merge seem
to be wrong though -- or am I misreading the code?
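
To put rough numbers on it (using the default cpu_operator_cost of
0.0025, so comparison_cost = 0.005): with N = 4 and one million output
rows, the patch charges 1,000,000 * 0.005 * 2.0 * log2(4) = 20,000 for
per-tuple heap maintenance, whereas a single sift-down per tuple would
suggest 1,000,000 * 0.005 * log2(4) = 10,000.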

Also, shouldn't we add 1 to N to account for the leader? Suppose
there are 2 workers. There are 3 elements in the binary heap. The
element to be sifted down must be compared against either 1 or 2
others to reorganise the heap. Surely in that case we should estimate
log2(3) = ~1.58 comparisons, not log2(2) = 1 comparison.

I suspect that the leader's contribution will be equivalent to a whole
worker if the plan involves a sort: as soon as the leader pulls a
tuple in gather_merge_init, the sort node will pull all the tuples it
can in a tight loop. It's unfortunate that cost_seqscan has to
estimate what the leader's contribution will be without knowing
whether it has a "greedy" high-startup-cost consumer like a sort or
hash node where the leader will contribute a whole backend's full
attention as soon as it executes the plan, or a lazy consumer where
the leader will probably not contribute much if there are enough
workers to keep it distracted. In the case of a Gather Merge -> Sort
-> Parallel Seq Scan plan, I think we will overestimate the number of
rows (per participant), because cost_seqscan will guess that the
leader is spending 30% of its time per worker servicing the workers,
when in fact it will be sucking tuples into a sort node just as fast
as anyone else. But I don't see what this patch can do about that...
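
(For instance, with 2 planned workers the current seq scan costing
divides the row count by 2 + (1 - 0.3 * 2) = 2.4 on the theory that the
leader is partly busy servicing the queues, whereas with a greedy Sort
on top all 3 participants really pull roughly a third of the rows each.)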

+ * When force is true, function reads the tuple into wait mode. For gather
+ * merge we need to fill the slot from which we returned the earlier tuple, so
+ * this require tuple to be read into wait mode. During initialization phase,
+ * once we try to read the tuple into no-wait mode as we want to initialize all
+ * the readers. Refer gather_merge_init() for more details.
+ *
+ * Function returns true if found tuple for the reader, otherwise returns
+ * false.
+ */
+static bool
+gather_merge_readnext(GatherMergeState *gm_state, int reader, bool force)

s/into wait mode/in wait mode/

This appears throughout the comments; not sure if I can explain this
well but "in wait mode" describes a state of being which is wanted
here, "into wait mode" describes some kind of change or movement or
insertion.

Perhaps it would be better to say "reads the tuple _queue_ in wait
mode", just to make clearer that this is talking about the wait/nowait
feature of tuple queues, and perhaps also note that the leader always
waits since it executes the plan.

Maybe we should use "bool nowait" here anyway, mirroring the TupleQueue
interface? Why introduce another terminology for the same thing with
inverted sense?
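
For reference, the tuple queue interface this would mirror is, if I'm
remembering the declaration right:

extern HeapTuple TupleQueueReaderNext(TupleQueueReader *reader,
					 bool nowait, bool *done);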

+/*
+ * Read the tuple for given reader into nowait mode, and form the tuple array.
+ */
+static void
+form_tuple_array(GatherMergeState *gm_state, int reader)

This function is strangely named. How about try_to_fill_tuple_buffer
or something?

+ GMReaderTuple *gm_tuple = &gm_state->gm_tuple[reader];

I wonder if the purpose of gm_tuple, would be clearer if it were
called gm_tuple_buffers. Plural because it holds one buffer per
reader. Then in that variable on the left hand side there could be
called tuple_buffer (singular), because it's the buffer of tuples for
one single reader.

--
Thomas Munro
http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#11Michael Paquier
michael.paquier@gmail.com
In reply to: Thomas Munro (#10)
Re: Gather Merge

On Fri, Nov 4, 2016 at 12:00 PM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:

+ * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California

Shouldn't this say just "(c) 2016, PostgreSQL Global Development
Group"? Are we supposed to be blaming the University of California
for new files?

If the new file contains a portion of code from this age, yes. If
that's something completely new, using only PGDG is fine. At least
that's what I can conclude by looking at git log -p and searching for
"new file mode".
--
Michael

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#12Robert Haas
robertmhaas@gmail.com
In reply to: Thomas Munro (#10)
Re: Gather Merge

On Thu, Nov 3, 2016 at 11:00 PM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:

+ /*
+ * Avoid log(0)...
+ */
+ N = (path->num_workers < 2) ? 2.0 : (double) path->num_workers;
+ logN = LOG2(N);
...
+ /* Per-tuple heap maintenance cost */
+ run_cost += path->path.rows * comparison_cost * 2.0 * logN;

Why multiply by two? The comment above this code says "about log2(N)
comparisons to delete the top heap entry and another log2(N)
comparisons to insert its successor". In fact gather_merge_getnext
calls binaryheap_replace_first, which replaces the top element without
any comparisons at all and then performs a sift-down in log2(N)
comparisons to find its new position. There is no per-tuple "delete"
involved. We "replace" the top element with the value it already had,
just to trigger the sift-down, because we know that our comparator
function might have a new opinion of the sort order of this element.
Very clever! The comment and the 2.0 factor in cost_gather_merge seem
to be wrong though -- or am I misreading the code?

See cost_merge_append, and the header comments thereto.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#13Tom Lane
tgl@sss.pgh.pa.us
In reply to: Michael Paquier (#11)
Re: Gather Merge

Michael Paquier <michael.paquier@gmail.com> writes:

On Fri, Nov 4, 2016 at 12:00 PM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:

Shouldn't this say just "(c) 2016, PostgreSQL Global Development
Group"? Are we supposed to be blaming the University of California
for new files?

If the new file contains a portion of code from this age, yes.

My habit has been to include the whole old copyright if there's anything
at all in the new file that could be considered to be copy-and-paste from
an existing file. Frequently it's a gray area.

Legally, I doubt anyone cares much. Morally, I see it as paying due
respect to those who came before us in this project.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#14Thomas Munro
thomas.munro@enterprisedb.com
In reply to: Robert Haas (#12)
Re: Gather Merge

On Sat, Nov 5, 2016 at 1:55 AM, Robert Haas <robertmhaas@gmail.com> wrote:

On Thu, Nov 3, 2016 at 11:00 PM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:

+ /*
+ * Avoid log(0)...
+ */
+ N = (path->num_workers < 2) ? 2.0 : (double) path->num_workers;
+ logN = LOG2(N);
...
+ /* Per-tuple heap maintenance cost */
+ run_cost += path->path.rows * comparison_cost * 2.0 * logN;

Why multiply by two? The comment above this code says "about log2(N)
comparisons to delete the top heap entry and another log2(N)
comparisons to insert its successor". In fact gather_merge_getnext
calls binaryheap_replace_first, which replaces the top element without
any comparisons at all and then performs a sift-down in log2(N)
comparisons to find its new position. There is no per-tuple "delete"
involved. We "replace" the top element with the value it already had,
just to trigger the sift-down, because we know that our comparator
function might have a new opinion of the sort order of this element.
Very clever! The comment and the 2.0 factor in cost_gather_merge seem
to be wrong though -- or am I misreading the code?

See cost_merge_append, and the header comments threreto.

I see. So commit 7a2fe9bd got rid of the delete/insert code
(heap_siftup_slot and heap_insert_slot) and introduced
binaryheap_replace_first which does it in one step, but the costing
wasn't adjusted and still thinks we pay comparison_cost * logN twice.
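
If we wanted the estimate to match what binaryheap_replace_first
actually does, the per-tuple term would presumably drop the factor of
two, roughly like this (just a sketch, and cost_merge_append has the
same assumption, so they'd probably want to be changed together if at all):

    /* Per-tuple heap maintenance: one sift-down of ~log2(N) comparisons */
    run_cost += path->path.rows * comparison_cost * logN;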

--
Thomas Munro
http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#15Thomas Munro
thomas.munro@enterprisedb.com
In reply to: Tom Lane (#13)
Re: Gather Merge

On Sat, Nov 5, 2016 at 2:42 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Michael Paquier <michael.paquier@gmail.com> writes:

On Fri, Nov 4, 2016 at 12:00 PM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:

Shouldn't this say just "(c) 2016, PostgreSQL Global Development
Group"? Are we supposed to be blaming the University of California
for new files?

If the new file contains a portion of code from this age, yes.

My habit has been to include the whole old copyright if there's anything
at all in the new file that could be considered to be copy-and-paste from
an existing file. Frequently it's a gray area.

Thanks. I see that it's warranted in this case, as code is recycled
from MergeAppend.

Legally, I doubt anyone cares much. Morally, I see it as paying due
respect to those who came before us in this project.

+1

--
Thomas Munro
http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#16Amit Kapila
amit.kapila16@gmail.com
In reply to: Thomas Munro (#10)
Re: Gather Merge

On Fri, Nov 4, 2016 at 8:30 AM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:

On Thu, Oct 27, 2016 at 10:50 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

Please find attached latest patch which fix the review point as well as
additional clean-up.

+/*
+ * Read the tuple for given reader into nowait mode, and form the tuple array.
+ */
+static void
+form_tuple_array(GatherMergeState *gm_state, int reader)

This function is strangely named. How about try_to_fill_tuple_buffer
or something?

Hmm. We discussed upthread naming it form_tuple_array. Now you feel
that is also not good; I think it is basically a matter of perspective,
so why not leave it as it is for now and come back to the naming
towards the end of the patch review, or maybe leave it for the
committer to decide.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#17Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Thomas Munro (#10)
Re: Gather Merge

On Fri, Nov 4, 2016 at 8:30 AM, Thomas Munro <thomas.munro@enterprisedb.com>
wrote:

On Thu, Oct 27, 2016 at 10:50 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

Please find attached latest patch which fix the review point as well as
additional clean-up.

I've signed up to review this patch and I'm planning to do some
testing. Here's some initial feedback after a quick read-through:

Thanks Thomas.

+ if (gather_merge_readnext(gm_state, i, initialize ? false : true))

Clunky ternary operator... how about "!initialize".

Fixed.

+/*
+ * Function clear out a slot in the tuple table for each gather merge
+ * slots and returns the clear clear slot.
+ */

Maybe better like this? "_Clear_ out a slot in the tuple table for
each gather merge _slot_ and _return_ the _cleared_ slot."

Fixed.

+ /* Free tuple array as we no more need it */

"... as we don't need it any more"

Fixed

+/*
+ * Read the next tuple for gather merge.
+ *
+ * Function fetch the sorted tuple out of the heap.
+ */

"_Fetch_ the sorted tuple out of the heap."

Fixed

+ * Otherwise, pull the next tuple from whichever participate we
+ * returned from last time, and reinsert the index into the heap,
+ * because it might now compare differently against the existing

s/participate/participant/

Fixed.

+ * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California

Shouldn't this say just "(c) 2016, PostgreSQL Global Development
Group"?

Fixed.

Are we supposed to be blaming the University of California
for new files?

Not quite sure about this, so keeping this as it is.

+#include "executor/tqueue.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+#include "utils/rel.h"
+#include "lib/binaryheap.h"

Not correctly sorted.

Copied from nodeGather.c. But Fixed here.

+ /*
+ * store the tuple descriptor into gather merge state, so we can use it
+ * later while initilizing the gather merge slots.
+ */

s/initilizing/initializing/

Fixed.

+/* ----------------------------------------------------------------
+ * ExecEndGatherMerge
+ *
+ * frees any storage allocated through C routines.
+ * ----------------------------------------------------------------

The convention in Postgres code seems to be to use a form like "Free
any storage ..." in function documentation. Not sure if that's an
imperative, an infinitive, or if the word "we" is omitted since
English is so fuzzy like that, but it's inconsistent with other
documentation to use "frees" here. Oh, I see that exact wording is in
several other files. I guess I'll just leave this as a complaint
about all those files then :-)

Sure.

+ * Pull atleast single tuple from each worker + leader and set up the
heap.

s/atleast single/at least a single/

Fixed.

+ * Read the tuple for given reader into nowait mode, and form the tuple
array.

s/ into / in /

Fixed.

+ * Function attempt to read tuple for the given reader and store it into
reader

s/Function attempt /Attempt /

Fixed.

+ * Function returns true if found tuple for the reader, otherwise returns

s/Function returns /Return /

Fixed.

+ * First try to read tuple for each worker (including leader) into nowait
+ * mode, so that we initialize read from each worker as well as leader.

I wonder if it would be good to standardise on the terminology we use
when we mean workers AND the leader. In my Parallel Shared Hash work,
I've been saying 'participants' if I mean = workers + leader. What do
you think?

I am not quite sure about participants. In my opinion, when we explicitly
say workers + leader it's more clear. I am open to changing it if the
committer thinks otherwise.

+ * After this if all active worker unable to produce the tuple, then
+ * re-read and this time read the tuple into wait mode. For the worker,
+ * which was able to produced single tuple in the earlier loop and still
+ * active, just try fill the tuple array if more tuples available.
+ */

How about this? "After this, if all active workers are unable to
produce a tuple, then re-read and this time use wait mode. For workers
that were able to produce a tuple in the earlier loop and are still
active, just try to fill the tuple array if more tuples are
available."

Fixed.

+ * The heap is never spilled to disk, since we assume N is not very
large. So
+ * this is much simple then cost_sort.

s/much simple then/much simpler than/

Fixed.

+ /*
+ * Avoid log(0)...
+ */
+ N = (path->num_workers < 2) ? 2.0 : (double) path->num_workers;
+ logN = LOG2(N);
...
+ /* Per-tuple heap maintenance cost */
+ run_cost += path->path.rows * comparison_cost * 2.0 * logN;

Why multiply by two? The comment above this code says "about log2(N)
comparisons to delete the top heap entry and another log2(N)
comparisons to insert its successor". In fact gather_merge_getnext
calls binaryheap_replace_first, which replaces the top element without
any comparisons at all and then performs a sift-down in log2(N)
comparisons to find its new position. There is no per-tuple "delete"
involved. We "replace" the top element with the value it already had,
just to trigger the sift-down, because we know that our comparator
function might have a new opinion of the sort order of this element.
Very clever! The comment and the 2.0 factor in cost_gather_merge seem
to be wrong though -- or am I misreading the code?

See cost_merge_append.

Also, shouldn't we add 1 to N to account for the leader? Suppose
there are 2 workers. There are 3 elements in the binary heap. The
element to be sifted down must be compared against either 1 or 2
others to reorganise the heap. Surely in that case we should estimate
log2(3) = ~1.58 comparisons, not log2(2) = 1 comparison.

Yes, good catch. For Gather Merge the leader always participates, so
we should use num_workers + 1.
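
In cost_gather_merge that should end up as something along these lines
(sketch only):

    N = (double) (path->num_workers + 1);
    logN = LOG2(N);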

I suspect that the leader's contribution will be equivalent to a whole
worker if the plan involves a sort: as soon as the leader pulls a
tuple in gather_merge_init, the sort node will pull all the tuples it
can in a tight loop. It's unfortunate that cost_seqscan has to
estimate what the leader's contribution will be without knowing
whether it has a "greedy" high-startup-cost consumer like a sort or
hash node where the leader will contribute a whole backend's full
attention as soon as it executes the plan, or a lazy consumer where
the leader will probably not contribute much if there are enough
workers to keep it distracted. In the case of a Gather Merge -> Sort
-> Parallel Seq Scan plan, I think we will overestimate the number of
rows (per participant), because cost_seqscan will guess that the
leader is spending 30% of its time per worker servicing the workers,
when in fact it will be sucking tuples into a sort node just as fast
as anyone else. But I don't see what this patch can do about that...

Exactly. There is a very thin line when it comes to calculating the cost.
In general, while calculating the cost for GM, I just tried to stay
similar to Gather + MergeAppend.

+ * When force is true, function reads the tuple into wait mode. For gather
+ * merge we need to fill the slot from which we returned the earlier
tuple, so
+ * this require tuple to be read into wait mode. During initialization
phase,
+ * once we try to read the tuple into no-wait mode as we want to
initialize all
+ * the readers. Refer gather_merge_init() for more details.
+ *
+ * Function returns true if found tuple for the reader, otherwise returns
+ * false.
+ */
+static bool
+gather_merge_readnext(GatherMergeState *gm_state, int reader, bool force)

s/into wait mode/in wait mode/

This appears throughout the comments; not sure if I can explain this
well but "in wait mode" describes a state of being which is wanted
here, "into wait mode" describes some kind of change or movement or
insertion.

Perhaps it would be better to say "reads the tuple _queue_ in wait
mode", just to make clearer that this is talking about the wait/nowait
feature of tuple queues, and perhaps also note that the leader always
waits since it executes the plan.

Fixed. Just chose to s/into wait mode/in wait mode/

Maybe we should use "bool nowait" here anyway, mirroring the TupleQueue

interface? Why introduce another terminology for the same thing with
inverted sense?

Agree with you. Changed the function gm_readnext_tuple() &
gather_merge_readnext()
APIs.

+/*
+ * Read the tuple for given reader into nowait mode, and form the tuple
array.
+ */
+static void
+form_tuple_array(GatherMergeState *gm_state, int reader)

This function is strangely named. How about try_to_fill_tuple_buffer
or something?

+ GMReaderTuple *gm_tuple = &gm_state->gm_tuple[reader];

I wonder if the purpose of gm_tuple, would be clearer if it were
called gm_tuple_buffers. Plural because it holds one buffer per
reader. Then in that variable on the left hand side there could be
called tuple_buffer (singular), because it's the buffer of tuples for
one single reader.

Yes, you are right. I renamed the variable as well as structure.

PFA the latest patch, which addresses the review comments as
well as a few other clean-ups.

Apart from this, my colleague Rafia Sabih reported one regression with
GM: if we set work_mem high enough to accommodate the sort
operation, the GM path gets selected even though Sort performs much
better.

Example:

create table t (i int);
insert into t values(generate_series(1,10000000));
set work_mem =1024000;
explain analyze select * from t order by i;
set enable_gathermerge =off;
explain analyze select * from t order by i;

postgres=# explain analyze select * from t order by i;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
Gather Merge (cost=335916.26..648415.76 rows=2499996 width=4)
(actual time=2234.145..7628.555 rows=10000000 loops=1)
Workers Planned: 4
Workers Launched: 4
-> Sort (cost=334916.22..341166.21 rows=2499996 width=4) (actual
time=2226.609..2611.041 rows=2000000 loops=5)
Sort Key: i
Sort Method: quicksort Memory: 147669kB
-> Parallel Seq Scan on t (cost=0.00..69247.96 rows=2499996
width=4) (actual time=0.034..323.129 rows=2000000 loops=5)
Planning time: 0.061 ms
Execution time: 8143.809 ms
(9 rows)

postgres=# set enable_gathermerge = off;
SET
postgres=# explain analyze select * from t order by i;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
Sort (cost=1306920.83..1331920.79 rows=9999985 width=4) (actual
time=3521.143..4854.148 rows=10000000 loops=1)
Sort Key: i
Sort Method: quicksort Memory: 854075kB
-> Seq Scan on t (cost=0.00..144247.85 rows=9999985 width=4)
(actual time=0.113..1340.758 rows=10000000 loops=1)
Planning time: 0.100 ms
Execution time: 5535.560 ms
(6 rows)

Looking at the plan, I realize this is happening because of wrong costing
for Gather Merge. In the plan we can see that the row estimate for
Gather Merge is wrong. This is because the earlier patch had GM
considering rows = subpath->rows, which is not right as the subpath is a
partial path, so we need to multiply it by the number of workers. The
attached patch also fixes this issue. I also ran the TPC-H benchmark
with the patch and the results are the same as earlier.
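
The adjustment in create_gather_merge_path() is roughly along these
lines (a sketch of the idea, not the exact code):

    /* subpath->rows is a per-worker estimate; scale to the total */
    pathnode->path.rows = subpath->rows * subpath->parallel_workers;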

Thanks,
Rushabh Lathia
www.EnterpriseDB.com

#18Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Rushabh Lathia (#17)
1 attachment(s)
Re: Gather Merge

Oops forgot to attach latest patch in the earlier mail.

On Fri, Nov 11, 2016 at 6:26 PM, Rushabh Lathia <rushabh.lathia@gmail.com>
wrote:

On Fri, Nov 4, 2016 at 8:30 AM, Thomas Munro <
thomas.munro@enterprisedb.com> wrote:

On Thu, Oct 27, 2016 at 10:50 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

Please find attached latest patch which fix the review point as well as
additional clean-up.

I've signed up to review this patch and I'm planning to do some
testing. Here's some initial feedback after a quick read-through:

Thanks Thomas.

+ if (gather_merge_readnext(gm_state, i, initialize ? false : true))

Clunky ternary operator... how about "!initialize".

Fixed.

+/*
+ * Function clear out a slot in the tuple table for each gather merge
+ * slots and returns the clear clear slot.
+ */

Maybe better like this? "_Clear_ out a slot in the tuple table for
each gather merge _slot_ and _return_ the _cleared_ slot."

Fixed.

+ /* Free tuple array as we no more need it */

"... as we don't need it any more"

Fixed

+/*
+ * Read the next tuple for gather merge.
+ *
+ * Function fetch the sorted tuple out of the heap.
+ */

"_Fetch_ the sorted tuple out of the heap."

Fixed

+ * Otherwise, pull the next tuple from whichever participate we
+ * returned from last time, and reinsert the index into the heap,
+ * because it might now compare differently against the existing

s/participate/participant/

Fixed.

+ * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California

Shouldn't this say just "(c) 2016, PostgreSQL Global Development
Group"?

Fixed.

Are we supposed to be blaming the University of California
for new files?

Not quite sure about this, so keeping this as it is.

+#include "executor/tqueue.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+#include "utils/rel.h"
+#include "lib/binaryheap.h"

Not correctly sorted.

Copied from nodeGather.c. But Fixed here.

+ /*
+ * store the tuple descriptor into gather merge state, so we can use it
+ * later while initilizing the gather merge slots.
+ */

s/initilizing/initializing/

Fixed.

+/* ----------------------------------------------------------------
+ * ExecEndGatherMerge
+ *
+ * frees any storage allocated through C routines.
+ * ----------------------------------------------------------------

The convention in Postgres code seems to be to use a form like "Free
any storage ..." in function documentation. Not sure if that's an
imperative, an infinitive, or if the word "we" is omitted since
English is so fuzzy like that, but it's inconsistent with other
documentation to use "frees" here. Oh, I see that exact wording is in
several other files. I guess I'll just leave this as a complaint
about all those files then :-)

Sure.

+ * Pull atleast single tuple from each worker + leader and set up the
heap.

s/atleast single/at least a single/

Fixed.

+ * Read the tuple for given reader into nowait mode, and form the tuple
array.

s/ into / in /

Fixed.

+ * Function attempt to read tuple for the given reader and store it into
reader

s/Function attempt /Attempt /

Fixed.

+ * Function returns true if found tuple for the reader, otherwise returns

s/Function returns /Return /

Fixed.

+ * First try to read tuple for each worker (including leader) into nowait
+ * mode, so that we initialize read from each worker as well as leader.

I wonder if it would be good to standardise on the terminology we use
when we mean workers AND the leader. In my Parallel Shared Hash work,
I've been saying 'participants' if I mean = workers + leader. What do
you think?

I am not quite sure about participants. In my opinion, when we explicitly
say workers + leader it's more clear. I am open to changing it if the
committer thinks otherwise.

+ * After this if all active worker unable to produce the tuple, then
+ * re-read and this time read the tuple into wait mode. For the worker,
+ * which was able to produced single tuple in the earlier loop and still
+ * active, just try fill the tuple array if more tuples available.
+ */

How about this? "After this, if all active workers are unable to
produce a tuple, then re-read and this time use wait mode. For workers
that were able to produce a tuple in the earlier loop and are still
active, just try to fill the tuple array if more tuples are
available."

Fixed.

+ * The heap is never spilled to disk, since we assume N is not very
large. So
+ * this is much simple then cost_sort.

s/much simple then/much simpler than/

Fixed.

+ /*
+ * Avoid log(0)...
+ */
+ N = (path->num_workers < 2) ? 2.0 : (double) path->num_workers;
+ logN = LOG2(N);
...
+ /* Per-tuple heap maintenance cost */
+ run_cost += path->path.rows * comparison_cost * 2.0 * logN;

Why multiply by two? The comment above this code says "about log2(N)
comparisons to delete the top heap entry and another log2(N)
comparisons to insert its successor". In fact gather_merge_getnext
calls binaryheap_replace_first, which replaces the top element without
any comparisons at all and then performs a sift-down in log2(N)
comparisons to find its new position. There is no per-tuple "delete"
involved. We "replace" the top element with the value it already had,
just to trigger the sift-down, because we know that our comparator
function might have a new opinion of the sort order of this element.
Very clever! The comment and the 2.0 factor in cost_gather_merge seem
to be wrong though -- or am I misreading the code?

See cost_merge_append.

Also, shouldn't we add 1 to N to account for the leader? Suppose
there are 2 workers. There are 3 elements in the binary heap. The
element to be sifted down must be compared against either 1 or 2
others to reorganise the heap. Surely in that case we should estimate
log2(3) = ~1.58 comparisons, not log2(2) = 1 comparison.

Yes, good catch. For Gather Merge the leader always participates, so
we should use num_workers + 1.
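
To make that concrete, here is a minimal sketch (not the committed code) of
the per-tuple estimate with the leader counted as one more input stream,
matching what cost_gather_merge() in the attached v4 patch now computes; the
2.0 factor copies cost_merge_append and is the part still being questioned:

#include <math.h>

/*
 * Sketch only: estimated heap-maintenance comparisons per output tuple,
 * counting the leader as an extra pre-sorted stream.
 */
static double
gm_per_tuple_comparisons(int num_workers)
{
	double		N = (num_workers < 2) ? 2.0 : (double) num_workers + 1;

	/* e.g. 2 workers + leader: 2.0 * log2(3) ~= 3.17 comparisons */
	return 2.0 * log2(N);
}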

I suspect that the leader's contribution will be equivalent to a whole
worker if the plan involves a sort: as soon as the leader pulls a
tuple in gather_merge_init, the sort node will pull all the tuples it
can in a tight loop. It's unfortunate that cost_seqscan has to
estimate what the leader's contribution will be without knowing
whether it has a "greedy" high-startup-cost consumer like a sort or
hash node where the leader will contribute a whole backend's full
attention as soon as it executes the plan, or a lazy consumer where
the leader will probably not contribute much if there are enough
workers to keep it distracted. In the case of a Gather Merge -> Sort
-> Parallel Seq Scan plan, I think we will overestimate the number of
rows (per participant), because cost_seqscan will guess that the
leader is spending 30% of its time per worker servicing the workers,
when in fact it will be sucking tuples into a sort node just as fast
as anyone else. But I don't see what this patch can do about that...

Exactly. It's a very fine line when it comes to calculating the cost.
In general, while calculating the cost for GM, I just tried to stay close
to Gather + MergeAppend.
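
For reference, this is roughly the leader discount being described above -- a
hedged sketch from memory of the 9.6-era cost_seqscan() logic in costsize.c,
not something this patch touches:

/*
 * Sketch only: how a partial path's row count gets divided among the
 * participants, assuming the leader loses about 30% of a worker's worth
 * of effort per launched worker.
 */
static double
parallel_divisor_sketch(int parallel_workers)
{
	double		divisor = parallel_workers;
	double		leader_contribution = 1.0 - 0.3 * parallel_workers;

	if (leader_contribution > 0)
		divisor += leader_contribution;

	/* per-participant rows = total rows / divisor */
	return divisor;
}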

+ * When force is true, function reads the tuple into wait mode. For
gather
+ * merge we need to fill the slot from which we returned the earlier
tuple, so
+ * this require tuple to be read into wait mode. During initialization
phase,
+ * once we try to read the tuple into no-wait mode as we want to
initialize all
+ * the readers. Refer gather_merge_init() for more details.
+ *
+ * Function returns true if found tuple for the reader, otherwise returns
+ * false.
+ */
+static bool
+gather_merge_readnext(GatherMergeState *gm_state, int reader, bool
force)

s/into wait mode/in wait mode/

This appears throughout the comments; not sure if I can explain this
well but "in wait mode" describes a state of being which is wanted
here, "into wait mode" describes some kind of change or movement or
insertion.

Perhaps it would be better to say "reads the tuple _queue_ in wait
mode", just to make clearer that this is talking about the wait/nowait
feature of tuple queues, and perhaps also note that the leader always
waits since it executes the plan.

Fixed. I just chose to s/into wait mode/in wait mode/.

Maybe we should use "bool nowait" here anyway, mirroring the TupleQueue
interface? Why introduce another terminology for the same thing with
inverted sense?

Agreed. Changed the gm_readnext_tuple() and gather_merge_readnext()
APIs accordingly.

+/*
+ * Read the tuple for given reader into nowait mode, and form the tuple
array.
+ */
+static void
+form_tuple_array(GatherMergeState *gm_state, int reader)

This function is strangely named. How about try_to_fill_tuple_buffer
or something?

+ GMReaderTuple *gm_tuple = &gm_state->gm_tuple[reader];

I wonder if the purpose of gm_tuple would be clearer if it were
called gm_tuple_buffers. Plural because it holds one buffer per
reader. Then the variable on the left hand side could be called
tuple_buffer (singular), because it's the buffer of tuples for
one single reader.

Yes, you are right. I renamed the variable as well as the structure.

PFA the latest patch, which addresses the review comments as
well as a few other cleanups.

Apart from this, my colleague Rafia Sabih reported one regression with
GM: if we set work_mem high enough to accommodate the sort operation,
the GM path gets selected even though plain Sort performs much better.

Example:

create table t (i int);
insert into t values(generate_series(1,10000000));
set work_mem =1024000;
explain analyze select * from t order by i;
set enable_gathermerge =off;
explain analyze select * from t order by i;

postgres=# explain analyze select * from t order by i;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
Gather Merge (cost=335916.26..648415.76 rows=2499996 width=4) (actual time=2234.145..7628.555 rows=10000000 loops=1)
Workers Planned: 4
Workers Launched: 4
-> Sort (cost=334916.22..341166.21 rows=2499996 width=4) (actual time=2226.609..2611.041 rows=2000000 loops=5)
Sort Key: i
Sort Method: quicksort Memory: 147669kB
-> Parallel Seq Scan on t (cost=0.00..69247.96 rows=2499996 width=4) (actual time=0.034..323.129 rows=2000000 loops=5)
Planning time: 0.061 ms
Execution time: 8143.809 ms
(9 rows)

postgres=# set enable_gathermerge = off;
SET
postgres=# explain analyze select * from t order by i;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
Sort (cost=1306920.83..1331920.79 rows=9999985 width=4) (actual time=3521.143..4854.148 rows=10000000 loops=1)
Sort Key: i
Sort Method: quicksort Memory: 854075kB
-> Seq Scan on t (cost=0.00..144247.85 rows=9999985 width=4) (actual time=0.113..1340.758 rows=10000000 loops=1)
Planning time: 0.100 ms
Execution time: 5535.560 ms
(6 rows)

Looking at the plan I realized that this is happening because of wrong
costing for Gather Merge. In the plan we can see that the row count
estimated by Gather Merge is wrong. This is because the earlier patch had
GM using rows = subpath->rows, which is not correct since the subpath is a
partial path, so we need to multiply it by the number of workers. The
attached patch also fixes this issue. I also ran the TPC-H benchmark with
the patch and the results are the same as earlier.
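
In code terms, the fix is simply to scale the partial path's per-participant
estimate back up before costing the Gather Merge path. A sketch, following
the create_ordered_paths() hunk in the attached v4 patch:

/* subpath rows are per participant, so scale up before costing */
double		total_groups = path->rows * path->parallel_workers;

path = (Path *) create_gather_merge_path(root, ordered_rel, path,
										 target, root->sort_pathkeys,
										 NULL, &total_groups);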

Thanks,
Rushabh Lathia
www.EnterpriseDB.com

--
Rushabh Lathia

Attachments:

gather_merge_v4.patch (application/x-download)
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 0a669d9..73cfe28 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -881,6 +881,9 @@ ExplainNode(PlanState *planstate, List *ancestors,
 		case T_Gather:
 			pname = sname = "Gather";
 			break;
+		case T_GatherMerge:
+			pname = sname = "Gather Merge";
+			break;
 		case T_IndexScan:
 			pname = sname = "Index Scan";
 			break;
@@ -1370,6 +1373,26 @@ ExplainNode(PlanState *planstate, List *ancestors,
 					ExplainPropertyBool("Single Copy", gather->single_copy, es);
 			}
 			break;
+		case T_GatherMerge:
+			{
+				GatherMerge *gm = (GatherMerge *) plan;
+
+				show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+				if (plan->qual)
+					show_instrumentation_count("Rows Removed by Filter", 1,
+											   planstate, es);
+				ExplainPropertyInteger("Workers Planned",
+									   gm->num_workers, es);
+				if (es->analyze)
+				{
+					int			nworkers;
+
+					nworkers = ((GatherMergeState *) planstate)->nworkers_launched;
+					ExplainPropertyInteger("Workers Launched",
+										   nworkers, es);
+				}
+			}
+			break;
 		case T_FunctionScan:
 			if (es->verbose)
 			{
diff --git a/src/backend/executor/Makefile b/src/backend/executor/Makefile
index 51edd4c..7e2f4e2 100644
--- a/src/backend/executor/Makefile
+++ b/src/backend/executor/Makefile
@@ -19,7 +19,7 @@ OBJS = execAmi.o execCurrent.o execGrouping.o execIndexing.o execJunk.o \
        nodeBitmapAnd.o nodeBitmapOr.o \
        nodeBitmapHeapscan.o nodeBitmapIndexscan.o nodeCustom.o nodeGather.o \
        nodeHash.o nodeHashjoin.o nodeIndexscan.o nodeIndexonlyscan.o \
-       nodeLimit.o nodeLockRows.o \
+       nodeLimit.o nodeLockRows.o nodeGatherMerge.o \
        nodeMaterial.o nodeMergeAppend.o nodeMergejoin.o nodeModifyTable.o \
        nodeNestloop.o nodeFunctionscan.o nodeRecursiveunion.o nodeResult.o \
        nodeSamplescan.o nodeSeqscan.o nodeSetOp.o nodeSort.o nodeUnique.o \
diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index 554244f..45b36af 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -101,6 +101,7 @@
 #include "executor/nodeModifyTable.h"
 #include "executor/nodeNestloop.h"
 #include "executor/nodeGather.h"
+#include "executor/nodeGatherMerge.h"
 #include "executor/nodeRecursiveunion.h"
 #include "executor/nodeResult.h"
 #include "executor/nodeSamplescan.h"
@@ -314,6 +315,11 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
 												  estate, eflags);
 			break;
 
+		case T_GatherMerge:
+			result = (PlanState *) ExecInitGatherMerge((GatherMerge *) node,
+													   estate, eflags);
+			break;
+
 		case T_Hash:
 			result = (PlanState *) ExecInitHash((Hash *) node,
 												estate, eflags);
@@ -515,6 +521,10 @@ ExecProcNode(PlanState *node)
 			result = ExecGather((GatherState *) node);
 			break;
 
+		case T_GatherMergeState:
+			result = ExecGatherMerge((GatherMergeState *) node);
+			break;
+
 		case T_HashState:
 			result = ExecHash((HashState *) node);
 			break;
@@ -673,6 +683,10 @@ ExecEndNode(PlanState *node)
 			ExecEndGather((GatherState *) node);
 			break;
 
+		case T_GatherMergeState:
+			ExecEndGatherMerge((GatherMergeState *) node);
+			break;
+
 		case T_IndexScanState:
 			ExecEndIndexScan((IndexScanState *) node);
 			break;
@@ -806,6 +820,9 @@ ExecShutdownNode(PlanState *node)
 		case T_GatherState:
 			ExecShutdownGather((GatherState *) node);
 			break;
+		case T_GatherMergeState:
+			ExecShutdownGatherMerge((GatherMergeState *) node);
+			break;
 		default:
 			break;
 	}
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
new file mode 100644
index 0000000..4b6410b
--- /dev/null
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -0,0 +1,723 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.c
+ *	  routines to handle GatherMerge nodes.
+ *
+ * Portions Copyright (c) 1996-2016, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/backend/executor/nodeGatherMerge.c
+ *
+ *-------------------------------------------------------------------------
+ */
+/* INTERFACE ROUTINES
+ *		ExecInitGatherMerge		- initialize the GatherMerge node
+ *		ExecGatherMerge			- retrieve the next tuple from the node
+ *		ExecEndGatherMerge		- shut down the GatherMerge node
+ *		ExecReScanGatherMerge	- rescan the GatherMerge node
+ */
+
+#include "postgres.h"
+
+#include "access/relscan.h"
+#include "access/xact.h"
+#include "executor/execdebug.h"
+#include "executor/execParallel.h"
+#include "executor/nodeGatherMerge.h"
+#include "executor/nodeSubplan.h"
+#include "executor/tqueue.h"
+#include "lib/binaryheap.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+#include "utils/rel.h"
+
+/*
+ * Tuple array for each worker
+ */
+typedef struct GMReaderTupleBuffer
+{
+	HeapTuple  *tuple;
+	int			readCounter;
+	int			nTuples;
+	bool		done;
+}	GMReaderTupleBuffer;
+
+/*
+ * Tuple array size. Performance testing showed that the benefit of an array
+ * size greater than 10 is not worth the additional memory consumed by the
+ * tuple array.
+ */
+#define MAX_TUPLE_STORE 10
+
+static int32 heap_compare_slots(Datum a, Datum b, void *arg);
+static TupleTableSlot *gather_merge_getnext(GatherMergeState *gm_state);
+static HeapTuple gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait, bool *done);
+static void gather_merge_init(GatherMergeState *gm_state);
+static void ExecShutdownGatherMergeWorkers(GatherMergeState *node);
+static bool gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait);
+static void form_tuple_array(GatherMergeState *gm_state, int reader);
+
+/* ----------------------------------------------------------------
+ *		ExecInitGatherMerge
+ * ----------------------------------------------------------------
+ */
+GatherMergeState *
+ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags)
+{
+	GatherMergeState *gm_state;
+	Plan	   *outerNode;
+	bool		hasoid;
+	TupleDesc	tupDesc;
+
+	/* Gather merge node doesn't have innerPlan node. */
+	Assert(innerPlan(node) == NULL);
+
+	/*
+	 * create state structure
+	 */
+	gm_state = makeNode(GatherMergeState);
+	gm_state->ps.plan = (Plan *) node;
+	gm_state->ps.state = estate;
+
+	/*
+	 * Miscellaneous initialization
+	 *
+	 * create expression context for node
+	 */
+	ExecAssignExprContext(estate, &gm_state->ps);
+
+	/*
+	 * initialize child expressions
+	 */
+	gm_state->ps.targetlist = (List *)
+		ExecInitExpr((Expr *) node->plan.targetlist,
+					 (PlanState *) gm_state);
+	gm_state->ps.qual = (List *)
+		ExecInitExpr((Expr *) node->plan.qual,
+					 (PlanState *) gm_state);
+
+	/*
+	 * tuple table initialization
+	 */
+	ExecInitResultTupleSlot(estate, &gm_state->ps);
+
+	/*
+	 * now initialize outer plan
+	 */
+	outerNode = outerPlan(node);
+	outerPlanState(gm_state) = ExecInitNode(outerNode, estate, eflags);
+
+	gm_state->ps.ps_TupFromTlist = false;
+
+	/*
+	 * Initialize result tuple type and projection info.
+	 */
+	ExecAssignResultTypeFromTL(&gm_state->ps);
+	ExecAssignProjectionInfo(&gm_state->ps, NULL);
+
+	gm_state->gm_initialized = false;
+
+	/*
+	 * initialize sort-key information
+	 */
+	if (node->numCols)
+	{
+		int			i;
+
+		gm_state->gm_nkeys = node->numCols;
+		gm_state->gm_sortkeys = palloc0(sizeof(SortSupportData) * node->numCols);
+		for (i = 0; i < node->numCols; i++)
+		{
+			SortSupport sortKey = gm_state->gm_sortkeys + i;
+
+			sortKey->ssup_cxt = CurrentMemoryContext;
+			sortKey->ssup_collation = node->collations[i];
+			sortKey->ssup_nulls_first = node->nullsFirst[i];
+			sortKey->ssup_attno = node->sortColIdx[i];
+
+			/*
+			 * We don't perform abbreviated key conversion here, for the same
+			 * reasons that it isn't used in MergeAppend
+			 */
+			sortKey->abbreviate = false;
+
+			PrepareSortSupportFromOrderingOp(node->sortOperators[i], sortKey);
+		}
+	}
+
+	/*
+	 * store the tuple descriptor into gather merge state, so we can use it
+	 * later while initializing the gather merge slots.
+	 */
+	if (!ExecContextForcesOids(&gm_state->ps, &hasoid))
+		hasoid = false;
+	tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid);
+	gm_state->tupDesc = tupDesc;
+
+	return gm_state;
+}
+
+/* ----------------------------------------------------------------
+ *		ExecGatherMerge(node)
+ *
+ *		Scans the relation via multiple workers and returns
+ *		the next qualifying tuple.
+ * ----------------------------------------------------------------
+ */
+TupleTableSlot *
+ExecGatherMerge(GatherMergeState *node)
+{
+	int			i;
+	TupleTableSlot *slot;
+	TupleTableSlot *resultSlot;
+	ExprDoneCond isDone;
+	ExprContext *econtext;
+
+	/*
+	 * Initialize the parallel context and workers on first execution. We do
+	 * this on first execution rather than during node initialization, as it
+	 * needs to allocate a large dynamic shared memory segment, so it is
+	 * better to do it only if it is really needed.
+	 */
+	if (!node->initialized)
+	{
+		EState	   *estate = node->ps.state;
+		GatherMerge *gm = (GatherMerge *) node->ps.plan;
+
+		/*
+		 * Sometimes we might have to run without parallelism; but if parallel
+		 * mode is active then we can try to fire up some workers.
+		 */
+		if (gm->num_workers > 0 && IsInParallelMode())
+		{
+			ParallelContext *pcxt;
+
+			/* Initialize the workers required to execute Gather node. */
+			if (!node->pei)
+				node->pei = ExecInitParallelPlan(node->ps.lefttree,
+												 estate,
+												 gm->num_workers);
+
+			/*
+			 * Register backend workers. We might not get as many as we
+			 * requested, or indeed any at all.
+			 */
+			pcxt = node->pei->pcxt;
+			LaunchParallelWorkers(pcxt);
+			node->nworkers_launched = pcxt->nworkers_launched;
+
+			/* Set up tuple queue readers to read the results. */
+			if (pcxt->nworkers_launched > 0)
+			{
+				node->nreaders = 0;
+				node->reader =
+					palloc(pcxt->nworkers_launched * sizeof(TupleQueueReader *));
+
+				Assert(gm->numCols);
+
+				for (i = 0; i < pcxt->nworkers_launched; ++i)
+				{
+					shm_mq_set_handle(node->pei->tqueue[i],
+									  pcxt->worker[i].bgwhandle);
+					node->reader[node->nreaders++] =
+						CreateTupleQueueReader(node->pei->tqueue[i],
+											   node->tupDesc);
+				}
+			}
+			else
+			{
+				/* No workers?	Then never mind. */
+				ExecShutdownGatherMergeWorkers(node);
+			}
+		}
+
+		/* always allow the leader to participate in gather merge */
+		node->need_to_scan_locally = true;
+		node->initialized = true;
+	}
+
+	/*
+	 * Check to see if we're still projecting out tuples from a previous scan
+	 * tuple (because there is a function-returning-set in the projection
+	 * expressions).  If so, try to project another one.
+	 */
+	if (node->ps.ps_TupFromTlist)
+	{
+		resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+		if (isDone == ExprMultipleResult)
+			return resultSlot;
+		/* Done with that source tuple... */
+		node->ps.ps_TupFromTlist = false;
+	}
+
+	/*
+	 * Reset per-tuple memory context to free any expression evaluation
+	 * storage allocated in the previous tuple cycle.  Note we can't do this
+	 * until we're done projecting.
+	 */
+	econtext = node->ps.ps_ExprContext;
+	ResetExprContext(econtext);
+
+	/* Get and return the next tuple, projecting if necessary. */
+	for (;;)
+	{
+		/*
+		 * Get next tuple, either from one of our workers, or by running the
+		 * plan ourselves.
+		 */
+		slot = gather_merge_getnext(node);
+		if (TupIsNull(slot))
+			return NULL;
+
+		/*
+		 * form the result tuple using ExecProject(), and return it --- unless
+		 * the projection produces an empty set, in which case we must loop
+		 * back around for another tuple
+		 */
+		econtext->ecxt_outertuple = slot;
+		resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+
+		if (isDone != ExprEndResult)
+		{
+			node->ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
+			return resultSlot;
+		}
+	}
+
+	return slot;
+}
+
+/* ----------------------------------------------------------------
+ *		ExecEndGatherMerge
+ *
+ *		frees any storage allocated through C routines.
+ * ----------------------------------------------------------------
+ */
+void
+ExecEndGatherMerge(GatherMergeState *node)
+{
+	ExecShutdownGatherMerge(node);
+	ExecFreeExprContext(&node->ps);
+	ExecClearTuple(node->ps.ps_ResultTupleSlot);
+	ExecEndNode(outerPlanState(node));
+}
+
+/* ----------------------------------------------------------------
+ *		ExecShutdownGatherMerge
+ *
+ *		Destroy the setup for parallel workers including parallel context.
+ *		Collect all the stats after workers are stopped, else some work
+ *		done by workers won't be accounted.
+ * ----------------------------------------------------------------
+ */
+void
+ExecShutdownGatherMerge(GatherMergeState *node)
+{
+	ExecShutdownGatherMergeWorkers(node);
+
+	/* Now destroy the parallel context. */
+	if (node->pei != NULL)
+	{
+		ExecParallelCleanup(node->pei);
+		node->pei = NULL;
+	}
+}
+
+/* ----------------------------------------------------------------
+ *		ExecReScanGatherMerge
+ *
+ *		Re-initializes the workers and rescans a relation via them.
+ * ----------------------------------------------------------------
+ */
+void
+ExecReScanGatherMerge(GatherMergeState *node)
+{
+	/*
+	 * Re-initialize the parallel workers to perform rescan of relation. We
+	 * want to gracefully shutdown all the workers so that they should be able
+	 * to propagate any error or other information to master backend before
+	 * dying.  Parallel context will be reused for rescan.
+	 */
+	ExecShutdownGatherMergeWorkers(node);
+
+	node->initialized = false;
+
+	if (node->pei)
+		ExecParallelReinitialize(node->pei);
+
+	ExecReScan(node->ps.lefttree);
+}
+
+/* ----------------------------------------------------------------
+ *		ExecShutdownGatherMergeWorkers
+ *
+ *		Destroy the parallel workers.  Collect all the stats after
+ *		workers are stopped, else some work done by workers won't be
+ *		accounted.
+ * ----------------------------------------------------------------
+ */
+static void
+ExecShutdownGatherMergeWorkers(GatherMergeState *node)
+{
+	/* Shut down tuple queue readers before shutting down workers. */
+	if (node->reader != NULL)
+	{
+		int			i;
+
+		for (i = 0; i < node->nreaders; ++i)
+			if (node->reader[i])
+				DestroyTupleQueueReader(node->reader[i]);
+
+		pfree(node->reader);
+		node->reader = NULL;
+	}
+
+	/* Now shut down the workers. */
+	if (node->pei != NULL)
+		ExecParallelFinish(node->pei);
+}
+
+/*
+ * Initialize the Gather merge tuple read.
+ *
+ * Pull at least a single tuple from each worker + leader and set up the heap.
+ */
+static void
+gather_merge_init(GatherMergeState *gm_state)
+{
+	int			nreaders = gm_state->nreaders;
+	bool		initialize = true;
+	int			i;
+
+	/*
+	 * Allocate gm_slots: one slot per worker plus one more for the leader.
+	 * The last slot is always the leader's. The leader reads its tuples by
+	 * calling ExecProcNode(), which returns a TupleTableSlot that is then
+	 * assigned directly to its gm_slot entry, so just initialize the
+	 * leader's gm_slot to NULL. For the other slots, the code below calls
+	 * ExecInitExtraTupleSlot() to perform the initialization of the worker
+	 * slots.
+	 */
+	gm_state->gm_slots =
+		palloc((gm_state->nreaders + 1) * sizeof(TupleTableSlot *));
+	gm_state->gm_slots[gm_state->nreaders] = NULL;
+
+	/* Initialize the tuple slot and tuple array for each worker */
+	gm_state->gm_tuple_buffers =
+		(GMReaderTupleBuffer *) palloc0(sizeof(GMReaderTupleBuffer) * (gm_state->nreaders + 1));
+	for (i = 0; i < gm_state->nreaders; i++)
+	{
+		/* Allocate the tuple array with MAX_TUPLE_STORE size */
+		gm_state->gm_tuple_buffers[i].tuple =
+			(HeapTuple *) palloc0(sizeof(HeapTuple) * MAX_TUPLE_STORE);
+
+		/* Initialize slot for worker */
+		gm_state->gm_slots[i] = ExecInitExtraTupleSlot(gm_state->ps.state);
+		ExecSetSlotDescriptor(gm_state->gm_slots[i],
+							  gm_state->tupDesc);
+	}
+
+	/* Allocate the resources for the sort */
+	gm_state->gm_heap = binaryheap_allocate(gm_state->nreaders + 1, heap_compare_slots, gm_state);
+
+	/*
+	 * First try to read tuple for each worker (including leader) in nowait
+	 * mode, so that we initialize read from each worker as well as leader.
+	 * After this, if all active workers are unable to produce a tuple, then
+	 * re-read and this time use wait mode. For workers that were able to
+	 * produce a tuple in the earlier loop and are still active, just try to
+	 * fill the tuple array if more tuples are available.
+	 */
+reread:
+	for (i = 0; i < nreaders + 1; i++)
+	{
+		if (!gm_state->gm_tuple_buffers[i].done &&
+			(TupIsNull(gm_state->gm_slots[i]) ||
+			 gm_state->gm_slots[i]->tts_isempty))
+		{
+			if (gather_merge_readnext(gm_state, i, initialize))
+			{
+				binaryheap_add_unordered(gm_state->gm_heap,
+										 Int32GetDatum(i));
+			}
+		}
+		else
+			form_tuple_array(gm_state, i);
+	}
+	initialize = false;
+
+	for (i = 0; i < nreaders; i++)
+		if (!gm_state->gm_tuple_buffers[i].done &&
+			(TupIsNull(gm_state->gm_slots[i]) ||
+			 gm_state->gm_slots[i]->tts_isempty))
+			goto reread;
+
+	binaryheap_build(gm_state->gm_heap);
+	gm_state->gm_initialized = true;
+}
+
+/*
+ * Clear out the tuple table slot for each gather merge reader,
+ * and return one of the cleared slots.
+ */
+static TupleTableSlot *
+gather_merge_clear_slots(GatherMergeState *gm_state)
+{
+	int			i;
+
+	for (i = 0; i < gm_state->nreaders; i++)
+	{
+		pfree(gm_state->gm_tuple_buffers[i].tuple);
+		gm_state->gm_slots[i] = ExecClearTuple(gm_state->gm_slots[i]);
+	}
+
+	/* Free tuple array as we don't need it any more */
+	pfree(gm_state->gm_tuple_buffers);
+	/* Free the binaryheap, which was created for sort */
+	binaryheap_free(gm_state->gm_heap);
+
+	/* return any clear slot */
+	return gm_state->gm_slots[0];
+}
+
+/*
+ * Read the next tuple for gather merge.
+ *
+ * Fetch the sorted tuple out of the heap.
+ */
+static TupleTableSlot *
+gather_merge_getnext(GatherMergeState *gm_state)
+{
+	int			i;
+
+	/*
+	 * First time through: pull the first tuple from each participant, and set
+	 * up the heap.
+	 */
+	if (gm_state->gm_initialized == false)
+		gather_merge_init(gm_state);
+	else
+	{
+		/*
+		 * Otherwise, pull the next tuple from whichever participant we
+		 * returned from last time, and reinsert the index into the heap,
+		 * because it might now compare differently against the existing
+		 * elements of the heap.
+		 */
+		i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+
+		if (gather_merge_readnext(gm_state, i, false))
+			binaryheap_replace_first(gm_state->gm_heap, Int32GetDatum(i));
+		else
+			(void) binaryheap_remove_first(gm_state->gm_heap);
+	}
+
+	if (binaryheap_empty(gm_state->gm_heap))
+	{
+		/* All the queues are exhausted, and so is the heap */
+		return gather_merge_clear_slots(gm_state);
+	}
+	else
+	{
+		i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+		return gm_state->gm_slots[i];
+	}
+
+	return gather_merge_clear_slots(gm_state);
+}
+
+/*
+ * Read the tuple for given reader in nowait mode, and form the tuple array.
+ */
+static void
+form_tuple_array(GatherMergeState *gm_state, int reader)
+{
+	GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+	int			i;
+
+	/* Last slot is for leader and we don't build tuple array for leader */
+	if (reader == gm_state->nreaders)
+		return;
+
+	/*
+	 * We are here because we have already read all the tuples from the tuple
+	 * array, so reset the counters to zero.
+	 */
+	if (tuple_buffer->nTuples == tuple_buffer->readCounter)
+		tuple_buffer->nTuples = tuple_buffer->readCounter = 0;
+
+	/* Tuple array is already full? */
+	if (tuple_buffer->nTuples == MAX_TUPLE_STORE)
+		return;
+
+	for (i = tuple_buffer->nTuples; i < MAX_TUPLE_STORE; i++)
+	{
+		tuple_buffer->tuple[i] = heap_copytuple(gm_readnext_tuple(gm_state,
+																  reader,
+																  false,
+													   &tuple_buffer->done));
+		if (!HeapTupleIsValid(tuple_buffer->tuple[i]))
+			break;
+		tuple_buffer->nTuples++;
+	}
+}
+
+/*
+ * Attempt to read a tuple for the given reader and store it into the
+ * reader's tuple slot.
+ *
+ * If the worker's tuple array contains any tuples, just return a tuple from
+ * the tuple array. Otherwise, read a tuple from the queue and also attempt
+ * to fill the tuple array.
+ *
+ * For gather merge we need to refill the slot from which we returned the
+ * earlier tuple, so this requires the tuple to be read in wait mode. During
+ * the initialization phase we instead read tuples in nowait mode, as we want
+ * to initialize all the readers. See gather_merge_init() for more details.
+ *
+ * Returns true if a tuple was found for the reader, otherwise returns false.
+ */
+static bool
+gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait)
+{
+	HeapTuple	tup = NULL;
+
+	/* Are we here for the leader? */
+	if (gm_state->nreaders == reader)
+	{
+		if (gm_state->need_to_scan_locally)
+		{
+			PlanState  *outerPlan = outerPlanState(gm_state);
+			TupleTableSlot *outerTupleSlot;
+
+			outerTupleSlot = ExecProcNode(outerPlan);
+
+			if (!TupIsNull(outerTupleSlot))
+			{
+				gm_state->gm_slots[reader] = outerTupleSlot;
+				return true;
+			}
+			gm_state->gm_tuple_buffers[reader].done = true;
+			gm_state->need_to_scan_locally = false;
+		}
+		return false;
+	}
+	/* Does tuple array have any available tuples? */
+	else if (gm_state->gm_tuple_buffers[reader].nTuples >
+			 gm_state->gm_tuple_buffers[reader].readCounter)
+	{
+		GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+
+		tup = tuple_buffer->tuple[tuple_buffer->readCounter++];
+	}
+	/* reader exhausted? */
+	else if (gm_state->gm_tuple_buffers[reader].done)
+	{
+		DestroyTupleQueueReader(gm_state->reader[reader]);
+		gm_state->reader[reader] = NULL;
+		return false;
+	}
+	else
+	{
+		GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+
+		tup = heap_copytuple(gm_readnext_tuple(gm_state,
+											   reader,
+											   nowait,
+											   &tuple_buffer->done));
+
+		/*
+		 * Try to read more tuples in nowait mode and store them into the
+		 * tuple array.
+		 */
+		if (HeapTupleIsValid(tup))
+			form_tuple_array(gm_state, reader);
+		else
+			return false;
+	}
+
+	Assert(HeapTupleIsValid(tup));
+
+	/* Build the TupleTableSlot for the given tuple */
+	ExecStoreTuple(tup,			/* tuple to store */
+				   gm_state->gm_slots[reader],	/* slot in which to store the
+												 * tuple */
+				   InvalidBuffer,		/* buffer associated with this tuple */
+				   true);		/* pfree this pointer if not from heap */
+
+	return true;
+}
+
+/*
+ * Attempt to read a tuple from given reader.
+ */
+static HeapTuple
+gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait, bool *done)
+{
+	TupleQueueReader *reader;
+	HeapTuple	tup = NULL;
+	MemoryContext oldContext;
+	MemoryContext tupleContext = gm_state->ps.ps_ExprContext->ecxt_per_tuple_memory;
+
+	if (done != NULL)
+		*done = false;
+
+	/* Check for async events, particularly messages from workers. */
+	CHECK_FOR_INTERRUPTS();
+
+	/* Attempt to read a tuple. */
+	reader = gm_state->reader[nreader];
+	/* Run TupleQueueReaders in per-tuple context */
+	oldContext = MemoryContextSwitchTo(tupleContext);
+	tup = TupleQueueReaderNext(reader, nowait, done);
+	MemoryContextSwitchTo(oldContext);
+
+	return tup;
+}
+
+/*
+ * We have one slot for each item in the heap array.  We use SlotNumber
+ * to store slot indexes.  This doesn't actually provide any formal
+ * type-safety, but it makes the code more self-documenting.
+ */
+typedef int32 SlotNumber;
+
+/*
+ * Compare the tuples in the two given slots.
+ */
+static int32
+heap_compare_slots(Datum a, Datum b, void *arg)
+{
+	GatherMergeState *node = (GatherMergeState *) arg;
+	SlotNumber	slot1 = DatumGetInt32(a);
+	SlotNumber	slot2 = DatumGetInt32(b);
+
+	TupleTableSlot *s1 = node->gm_slots[slot1];
+	TupleTableSlot *s2 = node->gm_slots[slot2];
+	int			nkey;
+
+	Assert(!TupIsNull(s1));
+	Assert(!TupIsNull(s2));
+
+	for (nkey = 0; nkey < node->gm_nkeys; nkey++)
+	{
+		SortSupport sortKey = node->gm_sortkeys + nkey;
+		AttrNumber	attno = sortKey->ssup_attno;
+		Datum		datum1,
+					datum2;
+		bool		isNull1,
+					isNull2;
+		int			compare;
+
+		datum1 = slot_getattr(s1, attno, &isNull1);
+		datum2 = slot_getattr(s2, attno, &isNull2);
+
+		compare = ApplySortComparator(datum1, isNull1,
+									  datum2, isNull2,
+									  sortKey);
+		if (compare != 0)
+			return -compare;
+	}
+	return 0;
+}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 71714bc..8b92c1a 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -341,6 +341,31 @@ _copyGather(const Gather *from)
 	return newnode;
 }
 
+/*
+ * _copyGatherMerge
+ */
+static GatherMerge *
+_copyGatherMerge(const GatherMerge *from)
+{
+	GatherMerge	   *newnode = makeNode(GatherMerge);
+
+	/*
+	 * copy node superclass fields
+	 */
+	CopyPlanFields((const Plan *) from, (Plan *) newnode);
+
+	/*
+	 * copy remainder of node
+	 */
+	COPY_SCALAR_FIELD(num_workers);
+	COPY_SCALAR_FIELD(numCols);
+	COPY_POINTER_FIELD(sortColIdx, from->numCols * sizeof(AttrNumber));
+	COPY_POINTER_FIELD(sortOperators, from->numCols * sizeof(Oid));
+	COPY_POINTER_FIELD(collations, from->numCols * sizeof(Oid));
+	COPY_POINTER_FIELD(nullsFirst, from->numCols * sizeof(bool));
+
+	return newnode;
+}
 
 /*
  * CopyScanFields
@@ -4343,6 +4368,9 @@ copyObject(const void *from)
 		case T_Gather:
 			retval = _copyGather(from);
 			break;
+		case T_GatherMerge:
+			retval = _copyGatherMerge(from);
+			break;
 		case T_SeqScan:
 			retval = _copySeqScan(from);
 			break;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index ae86954..8a49801 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -447,6 +447,35 @@ _outGather(StringInfo str, const Gather *node)
 }
 
 static void
+_outGatherMerge(StringInfo str, const GatherMerge *node)
+{
+	int		i;
+
+	WRITE_NODE_TYPE("GATHERMERGE");
+
+	_outPlanInfo(str, (const Plan *) node);
+
+	WRITE_INT_FIELD(num_workers);
+	WRITE_INT_FIELD(numCols);
+
+	appendStringInfoString(str, " :sortColIdx");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %d", node->sortColIdx[i]);
+
+	appendStringInfoString(str, " :sortOperators");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %u", node->sortOperators[i]);
+
+	appendStringInfoString(str, " :collations");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %u", node->collations[i]);
+
+	appendStringInfoString(str, " :nullsFirst");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %s", booltostr(node->nullsFirst[i]));
+}
+
+static void
 _outScan(StringInfo str, const Scan *node)
 {
 	WRITE_NODE_TYPE("SCAN");
@@ -1964,6 +1993,17 @@ _outLimitPath(StringInfo str, const LimitPath *node)
 }
 
 static void
+_outGatherMergePath(StringInfo str, const GatherMergePath *node)
+{
+	WRITE_NODE_TYPE("GATHERMERGEPATH");
+
+	_outPathInfo(str, (const Path *) node);
+
+	WRITE_NODE_FIELD(subpath);
+	WRITE_INT_FIELD(num_workers);
+}
+
+static void
 _outNestPath(StringInfo str, const NestPath *node)
 {
 	WRITE_NODE_TYPE("NESTPATH");
@@ -3322,6 +3362,9 @@ outNode(StringInfo str, const void *obj)
 			case T_Gather:
 				_outGather(str, obj);
 				break;
+			case T_GatherMerge:
+				_outGatherMerge(str, obj);
+				break;
 			case T_Scan:
 				_outScan(str, obj);
 				break;
@@ -3649,6 +3692,9 @@ outNode(StringInfo str, const void *obj)
 			case T_LimitPath:
 				_outLimitPath(str, obj);
 				break;
+			case T_GatherMergePath:
+				_outGatherMergePath(str, obj);
+				break;
 			case T_NestPath:
 				_outNestPath(str, obj);
 				break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 917e6c8..77a452e 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -2075,6 +2075,26 @@ _readGather(void)
 }
 
 /*
+ * _readGatherMerge
+ */
+static GatherMerge *
+_readGatherMerge(void)
+{
+	READ_LOCALS(GatherMerge);
+
+	ReadCommonPlan(&local_node->plan);
+
+	READ_INT_FIELD(num_workers);
+	READ_INT_FIELD(numCols);
+	READ_ATTRNUMBER_ARRAY(sortColIdx, local_node->numCols);
+	READ_OID_ARRAY(sortOperators, local_node->numCols);
+	READ_OID_ARRAY(collations, local_node->numCols);
+	READ_BOOL_ARRAY(nullsFirst, local_node->numCols);
+
+	READ_DONE();
+}
+
+/*
  * _readHash
  */
 static Hash *
@@ -2477,6 +2497,8 @@ parseNodeString(void)
 		return_value = _readUnique();
 	else if (MATCH("GATHER", 6))
 		return_value = _readGather();
+	else if (MATCH("GATHERMERGE", 11))
+		return_value = _readGatherMerge();
 	else if (MATCH("HASH", 4))
 		return_value = _readHash();
 	else if (MATCH("SETOP", 5))
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index 2a49639..53ca09d 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -126,6 +126,7 @@ bool		enable_nestloop = true;
 bool		enable_material = true;
 bool		enable_mergejoin = true;
 bool		enable_hashjoin = true;
+bool		enable_gathermerge = true;
 
 typedef struct
 {
@@ -391,6 +392,75 @@ cost_gather(GatherPath *path, PlannerInfo *root,
 }
 
 /*
+ * cost_gather_merge
+ *	  Determines and returns the cost of gather merge path.
+ *
+ * GatherMerge merges several pre-sorted input streams, using a heap that at
+ * any given instant holds the next tuple from each stream. If there are N
+ * streams, we need about N*log2(N) tuple comparisons to construct the heap at
+ * startup, and then for each output tuple, about log2(N) comparisons to delete
+ * the top heap entry and another log2(N) comparisons to insert its successor
+ * from the same stream.
+ *
+ * The heap is never spilled to disk, since we assume N is not very large. So
+ * this is much simpler than cost_sort.
+ */
+void
+cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+				  RelOptInfo *rel, ParamPathInfo *param_info,
+				  Cost input_startup_cost, Cost input_total_cost,
+				  double *rows)
+{
+	Cost		startup_cost = 0;
+	Cost		run_cost = 0;
+	Cost		comparison_cost;
+	double		N;
+	double		logN;
+
+
+	/* Mark the path with the correct row estimate */
+	if (rows)
+		path->path.rows = *rows;
+	else if (param_info)
+		path->path.rows = param_info->ppi_rows;
+	else
+		path->path.rows = rel->rows;
+
+	if (!enable_gathermerge)
+		startup_cost += disable_cost;
+
+	/*
+	 * Count the leader as well, since it always participates in the gather
+	 * merge scan.  Also avoid log(0)...
+	 */
+	N = (path->num_workers < 2) ? 2.0 : (double) path->num_workers + 1;
+	logN = LOG2(N);
+
+	/* Assumed cost per tuple comparison */
+	comparison_cost = 2.0 * cpu_operator_cost;
+
+	/* Heap creation cost */
+	startup_cost += comparison_cost * N * logN;
+
+	/* Per-tuple heap maintenance cost */
+	run_cost += path->path.rows * comparison_cost * 2.0 * logN;
+
+	/* small cost for heap management, like cost_merge_append */
+	run_cost += cpu_operator_cost * path->path.rows;
+
+	/*
+	 * Parallel setup and communication cost.  Gather Merge needs to read
+	 * tuples from each worker in wait mode, so consider some extra cost
+	 * for that.
+	 */
+	startup_cost += parallel_setup_cost;
+	run_cost += parallel_tuple_cost * path->path.rows;
+
+	path->path.startup_cost = startup_cost + input_startup_cost;
+	path->path.total_cost = (startup_cost + run_cost + input_total_cost);
+}
+
+/*
  * cost_index
  *	  Determines and returns the cost of scanning a relation using an index.
  *
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index ad49674..5fdc1bd 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -270,6 +270,10 @@ static ModifyTable *make_modifytable(PlannerInfo *root,
 				 List *resultRelations, List *subplans,
 				 List *withCheckOptionLists, List *returningLists,
 				 List *rowMarks, OnConflictExpr *onconflict, int epqParam);
+static GatherMerge *create_gather_merge_plan(PlannerInfo *root,
+											 GatherMergePath *best_path);
+static GatherMerge *make_gather_merge(List *qptlist, List *qpqual,
+									  int nworkers, Plan *subplan);
 
 
 /*
@@ -463,6 +467,10 @@ create_plan_recurse(PlannerInfo *root, Path *best_path, int flags)
 											  (LimitPath *) best_path,
 											  flags);
 			break;
+		case T_GatherMerge:
+			plan = (Plan *) create_gather_merge_plan(root,
+												(GatherMergePath *) best_path);
+			break;
 		default:
 			elog(ERROR, "unrecognized node type: %d",
 				 (int) best_path->pathtype);
@@ -2246,6 +2254,89 @@ create_limit_plan(PlannerInfo *root, LimitPath *best_path, int flags)
 	return plan;
 }
 
+/*
+ * create_gather_merge_plan
+ *
+ *	  Create a Gather merge plan for 'best_path' and (recursively)
+ *	  plans for its subpaths.
+ */
+static GatherMerge *
+create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
+{
+	GatherMerge *gm_plan;
+	Plan	   *subplan;
+	List	   *pathkeys = best_path->path.pathkeys;
+	int			numsortkeys;
+	AttrNumber *sortColIdx;
+	Oid		   *sortOperators;
+	Oid		   *collations;
+	bool	   *nullsFirst;
+
+	subplan = create_plan_recurse(root, best_path->subpath, CP_EXACT_TLIST);
+
+	gm_plan = make_gather_merge(subplan->targetlist,
+								NIL,
+								best_path->num_workers,
+								subplan);
+
+	copy_generic_path_info(&gm_plan->plan, &best_path->path);
+
+	if (pathkeys)
+	{
+		/* Compute sort column info, and adjust GatherMerge tlist as needed */
+		(void) prepare_sort_from_pathkeys(&gm_plan->plan, pathkeys,
+										  best_path->path.parent->relids,
+										  NULL,
+										  true,
+										  &gm_plan->numCols,
+										  &gm_plan->sortColIdx,
+										  &gm_plan->sortOperators,
+										  &gm_plan->collations,
+										  &gm_plan->nullsFirst);
+
+
+		/* Compute sort column info, and adjust subplan's tlist as needed */
+		subplan = prepare_sort_from_pathkeys(subplan, pathkeys,
+											 best_path->subpath->parent->relids,
+											 gm_plan->sortColIdx,
+											 false,
+											 &numsortkeys,
+											 &sortColIdx,
+											 &sortOperators,
+											 &collations,
+											 &nullsFirst);
+
+		/*
+		 * Check that we got the same sort key information.  We just Assert
+		 * that the sortops match, since those depend only on the pathkeys;
+		 * but it seems like a good idea to check the sort column numbers
+		 * explicitly, to ensure the tlists really do match up.
+		 */
+		Assert(numsortkeys == gm_plan->numCols);
+		if (memcmp(sortColIdx, gm_plan->sortColIdx,
+				   numsortkeys * sizeof(AttrNumber)) != 0)
+			elog(ERROR, "GatherMerge child's targetlist doesn't match GatherMerge");
+		Assert(memcmp(sortOperators, gm_plan->sortOperators,
+					  numsortkeys * sizeof(Oid)) == 0);
+		Assert(memcmp(collations, gm_plan->collations,
+					  numsortkeys * sizeof(Oid)) == 0);
+		Assert(memcmp(nullsFirst, gm_plan->nullsFirst,
+					  numsortkeys * sizeof(bool)) == 0);
+
+		/* Now, insert a Sort node if subplan isn't sufficiently ordered */
+		if (!pathkeys_contained_in(pathkeys, best_path->subpath->pathkeys))
+			subplan = (Plan *) make_sort(subplan, numsortkeys,
+										 sortColIdx, sortOperators,
+										 collations, nullsFirst);
+
+		gm_plan->plan.lefttree = subplan;
+	}
+
+	/* use parallel mode for parallel plans. */
+	root->glob->parallelModeNeeded = true;
+
+	return gm_plan;
+}
 
 /*****************************************************************************
  *
@@ -5909,6 +6000,25 @@ make_gather(List *qptlist,
 	return node;
 }
 
+static GatherMerge *
+make_gather_merge(List *qptlist,
+				  List *qpqual,
+				  int nworkers,
+				  Plan *subplan)
+{
+	GatherMerge	*node = makeNode(GatherMerge);
+	Plan		*plan = &node->plan;
+
+	/* cost should be inserted by caller */
+	plan->targetlist = qptlist;
+	plan->qual = qpqual;
+	plan->lefttree = subplan;
+	plan->righttree = NULL;
+	node->num_workers = nworkers;
+
+	return node;
+}
+
 /*
  * distinctList is a list of SortGroupClauses, identifying the targetlist
  * items that should be considered by the SetOp filter.  The input path must
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 644b8b6..ea86c09 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -3725,14 +3725,61 @@ create_grouping_paths(PlannerInfo *root,
 
 		/*
 		 * Now generate a complete GroupAgg Path atop of the cheapest partial
-		 * path. We need only bother with the cheapest path here, as the
-		 * output of Gather is never sorted.
+		 * path. We generate a Gather path based on the cheapest partial path,
+		 * and a GatherMerge path for each partial path that is properly sorted.
 		 */
 		if (grouped_rel->partial_pathlist)
 		{
 			Path	   *path = (Path *) linitial(grouped_rel->partial_pathlist);
 			double		total_groups = path->rows * path->parallel_workers;
 
+			/*
+			 * GatherMerge output is always sorted, so if there is a GROUP BY
+			 * clause, try to generate a GatherMerge path for each partial path.
+			 */
+			if (parse->groupClause)
+			{
+				foreach(lc, grouped_rel->partial_pathlist)
+				{
+					Path	   *gmpath = (Path *) lfirst(lc);
+					double		total_groups = gmpath->rows * gmpath->parallel_workers;
+
+					if (!pathkeys_contained_in(root->group_pathkeys, gmpath->pathkeys))
+						continue;
+
+					/* create gather merge path */
+					gmpath = (Path *) create_gather_merge_path(root,
+															   grouped_rel,
+															   gmpath,
+															   NULL,
+															   root->group_pathkeys,
+															   NULL,
+															   &total_groups);
+
+					if (parse->hasAggs)
+						add_path(grouped_rel, (Path *)
+								 create_agg_path(root,
+												 grouped_rel,
+												 gmpath,
+												 target,
+												 parse->groupClause ? AGG_SORTED : AGG_PLAIN,
+												 AGGSPLIT_FINAL_DESERIAL,
+												 parse->groupClause,
+												 (List *) parse->havingQual,
+												 &agg_final_costs,
+												 dNumGroups));
+					else
+						add_path(grouped_rel, (Path *)
+								create_group_path(root,
+												  grouped_rel,
+												  gmpath,
+												  target,
+												  parse->groupClause,
+												  (List *) parse->havingQual,
+												  dNumGroups));
+				}
+			}
+
 			path = (Path *) create_gather_path(root,
 											   grouped_rel,
 											   path,
@@ -3870,6 +3917,12 @@ create_grouping_paths(PlannerInfo *root,
 	/* Now choose the best path(s) */
 	set_cheapest(grouped_rel);
 
+	/*
+	 * The partial pathlist generated for the grouped relation is of no
+	 * further use, so just reset it to NIL.
+	 */
+	grouped_rel->partial_pathlist = NIL;
+
 	return grouped_rel;
 }
 
@@ -4166,6 +4219,38 @@ create_distinct_paths(PlannerInfo *root,
 			}
 		}
 
+		/*
+		 * Generate GatherMerge path for each partial path.
+		 */
+		foreach(lc, input_rel->partial_pathlist)
+		{
+			Path	   *path = (Path *) lfirst(lc);
+			double		total_groups = path->rows * path->parallel_workers;
+
+			if (!pathkeys_contained_in(needed_pathkeys, path->pathkeys))
+			{
+				path = (Path *) create_sort_path(root, distinct_rel,
+												 path,
+												 needed_pathkeys,
+												 -1.0);
+			}
+
+			/* create gather merge path */
+			path = (Path *) create_gather_merge_path(root,
+													 distinct_rel,
+													 path,
+													 NULL,
+													 needed_pathkeys,
+													 NULL,
+													 &total_groups);
+			add_path(distinct_rel, (Path *)
+					 create_upper_unique_path(root,
+											  distinct_rel,
+											  path,
+											  list_length(root->distinct_pathkeys),
+											  numDistinctRows));
+		}
+
 		/* For explicit-sort case, always use the more rigorous clause */
 		if (list_length(root->distinct_pathkeys) <
 			list_length(root->sort_pathkeys))
@@ -4310,6 +4395,41 @@ create_ordered_paths(PlannerInfo *root,
 	ordered_rel->useridiscurrent = input_rel->useridiscurrent;
 	ordered_rel->fdwroutine = input_rel->fdwroutine;
 
+	foreach(lc, input_rel->partial_pathlist)
+	{
+		Path	   *path = (Path *) lfirst(lc);
+		bool		is_sorted;
+		double		total_groups = path->rows * path->parallel_workers;
+
+		is_sorted = pathkeys_contained_in(root->sort_pathkeys,
+										  path->pathkeys);
+		if (!is_sorted)
+		{
+			/* An explicit sort here can take advantage of LIMIT */
+			path = (Path *) create_sort_path(root,
+											 ordered_rel,
+											 path,
+											 root->sort_pathkeys,
+											 limit_tuples);
+		}
+
+		/* create gather merge path */
+		path = (Path *) create_gather_merge_path(root,
+												 ordered_rel,
+												 path,
+												 target,
+												 root->sort_pathkeys,
+												 NULL,
+												 &total_groups);
+
+		/* Add projection step if needed */
+		if (path->pathtarget != target)
+			path = apply_projection_to_path(root, ordered_rel,
+											path, target);
+
+		add_path(ordered_rel, path);
+	}
+
 	foreach(lc, input_rel->pathlist)
 	{
 		Path	   *path = (Path *) lfirst(lc);
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index d10a983..d14db7d 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -605,6 +605,7 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 			break;
 
 		case T_Gather:
+		case T_GatherMerge:
 			set_upper_references(root, plan, rtoffset);
 			break;
 
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 263ba45..760f519 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2682,6 +2682,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		case T_Sort:
 		case T_Unique:
 		case T_Gather:
+		case T_GatherMerge:
 		case T_SetOp:
 		case T_Group:
 			break;
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index abb7507..07e1532 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1630,6 +1630,66 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
 }
 
 /*
+ * create_gather_merge_path
+ *
+ *	  Creates a path corresponding to a gather merge scan, returning
+ *	  the pathnode.
+ */
+GatherMergePath *
+create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
+						 PathTarget *target, List *pathkeys,
+						 Relids required_outer, double *rows)
+{
+	GatherMergePath *pathnode = makeNode(GatherMergePath);
+	Cost			 input_startup_cost = 0;
+	Cost			 input_total_cost = 0;
+
+	Assert(subpath->parallel_safe);
+	Assert(pathkeys);
+
+	pathnode->path.pathtype = T_GatherMerge;
+	pathnode->path.parent = rel;
+	pathnode->path.param_info = get_baserel_parampathinfo(root, rel,
+														  required_outer);
+	pathnode->path.parallel_aware = false;
+
+	pathnode->subpath = subpath;
+	pathnode->num_workers = subpath->parallel_workers;
+	pathnode->path.pathkeys = pathkeys;
+	pathnode->path.pathtarget = target ? target : rel->reltarget;
+	pathnode->path.rows += subpath->rows;
+
+	if (pathkeys_contained_in(pathkeys, subpath->pathkeys))
+	{
+		/* Subpath is adequately ordered, we won't need to sort it */
+		input_startup_cost += subpath->startup_cost;
+		input_total_cost += subpath->total_cost;
+	}
+	else
+	{
+		/* We'll need to insert a Sort node, so include cost for that */
+		Path		sort_path;		/* dummy for result of cost_sort */
+
+		cost_sort(&sort_path,
+				  root,
+				  pathkeys,
+				  subpath->total_cost,
+				  subpath->rows,
+				  subpath->pathtarget->width,
+				  0.0,
+				  work_mem,
+				  -1);
+		input_startup_cost += sort_path.startup_cost;
+		input_total_cost += sort_path.total_cost;
+	}
+
+	cost_gather_merge(pathnode, root, rel, pathnode->path.param_info,
+					  input_startup_cost, input_total_cost, rows);
+
+	return pathnode;
+}
+
+/*
  * translate_sub_tlist - get subquery column numbers represented by tlist
  *
  * The given targetlist usually contains only Vars referencing the given relid.
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 65660c1..f605284 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -894,6 +894,15 @@ static struct config_bool ConfigureNamesBool[] =
 		true,
 		NULL, NULL, NULL
 	},
+	{
+		{"enable_gathermerge", PGC_USERSET, QUERY_TUNING_METHOD,
+			gettext_noop("Enables the planner's use of gather merge plans."),
+			NULL
+		},
+		&enable_gathermerge,
+		true,
+		NULL, NULL, NULL
+	},
 
 	{
 		{"geqo", PGC_USERSET, QUERY_TUNING_GEQO,
diff --git a/src/include/executor/nodeGatherMerge.h b/src/include/executor/nodeGatherMerge.h
new file mode 100644
index 0000000..58dcebf
--- /dev/null
+++ b/src/include/executor/nodeGatherMerge.h
@@ -0,0 +1,27 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.h
+ *		prototypes for nodeGatherMerge.c
+ *
+ *
+ * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/executor/nodeGatherMerge.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef NODEGATHERMERGE_H
+#define NODEGATHERMERGE_H
+
+#include "nodes/execnodes.h"
+
+extern GatherMergeState *ExecInitGatherMerge(GatherMerge * node,
+					EState *estate,
+					int eflags);
+extern TupleTableSlot *ExecGatherMerge(GatherMergeState * node);
+extern void ExecEndGatherMerge(GatherMergeState * node);
+extern void ExecReScanGatherMerge(GatherMergeState * node);
+extern void ExecShutdownGatherMerge(GatherMergeState * node);
+
+#endif   /* NODEGATHERMERGE_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index f6f73f3..0c12e27 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -1969,6 +1969,33 @@ typedef struct GatherState
 } GatherState;
 
 /* ----------------
+ * GatherMergeState information
+ *
+ *		Gather merge nodes launch 1 or more parallel workers, run a
+ *		subplan in those workers, and merge the sorted results.
+ * ----------------
+ */
+struct GMReaderTuple;
+
+typedef struct GatherMergeState
+{
+	PlanState	ps;				/* its first field is NodeTag */
+	bool		initialized;
+	struct ParallelExecutorInfo *pei;
+	int			nreaders;
+	int			nworkers_launched;
+	struct TupleQueueReader **reader;
+	TupleDesc	tupDesc;
+	TupleTableSlot **gm_slots;
+	struct binaryheap *gm_heap; /* binary heap of slot indices */
+	bool		gm_initialized; /* gather merge initialized? */
+	bool		need_to_scan_locally;
+	int			gm_nkeys;
+	SortSupport gm_sortkeys;	/* array of length gm_nkeys */
+	struct GMReaderTupleBuffer *gm_tuple_buffers;	/* tuple buffer per reader */
+} GatherMergeState;
+
+/* ----------------
  *	 HashState information
  * ----------------
  */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 88297bb..edfb917 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -75,6 +75,7 @@ typedef enum NodeTag
 	T_WindowAgg,
 	T_Unique,
 	T_Gather,
+	T_GatherMerge,
 	T_Hash,
 	T_SetOp,
 	T_LockRows,
@@ -123,6 +124,7 @@ typedef enum NodeTag
 	T_WindowAggState,
 	T_UniqueState,
 	T_GatherState,
+	T_GatherMergeState,
 	T_HashState,
 	T_SetOpState,
 	T_LockRowsState,
@@ -244,6 +246,7 @@ typedef enum NodeTag
 	T_MaterialPath,
 	T_UniquePath,
 	T_GatherPath,
+	T_GatherMergePath,
 	T_ProjectionPath,
 	T_SortPath,
 	T_GroupPath,
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index e2fbc7d..ec319bf 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -765,6 +765,22 @@ typedef struct Gather
 	bool		invisible;		/* suppress EXPLAIN display (for testing)? */
 } Gather;
 
+/* ------------
+ *		gather merge node
+ * ------------
+ */
+typedef struct GatherMerge
+{
+	Plan		plan;
+	int			num_workers;
+	/* remaining fields are just like the sort-key info in struct Sort */
+	int			numCols;		/* number of sort-key columns */
+	AttrNumber *sortColIdx;		/* their indexes in the target list */
+	Oid		   *sortOperators;	/* OIDs of operators to sort them by */
+	Oid		   *collations;		/* OIDs of collations */
+	bool	   *nullsFirst;		/* NULLS FIRST/LAST directions */
+} GatherMerge;
+
 /* ----------------
  *		hash build node
  *
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index 3a1255a..e9795f9 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -1194,6 +1194,19 @@ typedef struct GatherPath
 } GatherPath;
 
 /*
+ * GatherMergePath runs several copies of a plan in parallel and merges
+ * their pre-sorted results.  For gather merge, the parallel leader always
+ * executes the plan as well.
+ */
+typedef struct GatherMergePath
+{
+	Path		path;
+	Path	   *subpath;		/* path for each worker */
+	int			num_workers;	/* number of workers sought to help */
+} GatherMergePath;
+
+
+/*
  * All join-type paths share these fields.
  */
 
diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h
index 2a4df2f..e986896 100644
--- a/src/include/optimizer/cost.h
+++ b/src/include/optimizer/cost.h
@@ -66,6 +66,7 @@ extern bool enable_nestloop;
 extern bool enable_material;
 extern bool enable_mergejoin;
 extern bool enable_hashjoin;
+extern bool enable_gathermerge;
 extern int	constraint_exclusion;
 
 extern double clamp_row_est(double nrows);
@@ -198,5 +199,9 @@ extern Selectivity clause_selectivity(PlannerInfo *root,
 				   int varRelid,
 				   JoinType jointype,
 				   SpecialJoinInfo *sjinfo);
+extern void cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+							  RelOptInfo *rel, ParamPathInfo *param_info,
+							  Cost input_startup_cost, Cost input_total_cost,
+							  double *rows);
 
 #endif   /* COST_H */
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index 71d9154..1df5861 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -267,5 +267,11 @@ extern ParamPathInfo *get_joinrel_parampathinfo(PlannerInfo *root,
 						  List **restrict_clauses);
 extern ParamPathInfo *get_appendrel_parampathinfo(RelOptInfo *appendrel,
 							Relids required_outer);
+extern GatherMergePath *create_gather_merge_path(PlannerInfo *root,
+												 RelOptInfo *rel, Path *subpath,
+												 PathTarget *target,
+												 List *pathkeys,
+												 Relids required_outer,
+												 double *rows);
 
 #endif   /* PATHNODE_H */
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
index f06cfa4..5c547e2 100644
--- a/src/test/regress/expected/rangefuncs.out
+++ b/src/test/regress/expected/rangefuncs.out
@@ -2,6 +2,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
          name         | setting 
 ----------------------+---------
  enable_bitmapscan    | on
+ enable_gathermerge   | on
  enable_hashagg       | on
  enable_hashjoin      | on
  enable_indexonlyscan | on
@@ -12,7 +13,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
  enable_seqscan       | on
  enable_sort          | on
  enable_tidscan       | on
-(11 rows)
+(12 rows)
 
 CREATE TABLE foo2(fooid int, f2 int);
 INSERT INTO foo2 VALUES(1, 11);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 6c6d519..a6c4a5f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -770,6 +770,8 @@ GV
 Gather
 GatherPath
 GatherState
+GatherMerge
+GatherMergeState
 Gene
 GenericCosts
 GenericExprState
#19Thomas Munro
thomas.munro@enterprisedb.com
In reply to: Rushabh Lathia (#17)
Re: Gather Merge

On Sat, Nov 12, 2016 at 1:56 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

On Fri, Nov 4, 2016 at 8:30 AM, Thomas Munro <thomas.munro@enterprisedb.com>
wrote:

+ * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California

Shouldn't this say just "(c) 2016, PostgreSQL Global Development
Group"?

Fixed.

The year also needs updating to 2016 in nodeGatherMerge.h.

+ /* Per-tuple heap maintenance cost */
+ run_cost += path->path.rows * comparison_cost * 2.0 * logN;

Why multiply by two? The comment above this code says "about log2(N)
comparisons to delete the top heap entry and another log2(N)
comparisons to insert its successor". In fact gather_merge_getnext
calls binaryheap_replace_first, which replaces the top element without
any comparisons at all and then performs a sift-down in log2(N)
comparisons to find its new position. There is no per-tuple "delete"
involved. We "replace" the top element with the value it already had,
just to trigger the sift-down, because we know that our comparator
function might have a new opinion of the sort order of this element.
Very clever! The comment and the 2.0 factor in cost_gather_merge seem
to be wrong though -- or am I misreading the code?

See cost_merge_append.

That just got tweaked in commit 34ca0905.
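
To illustrate the point being discussed, here is a minimal, self-contained
sketch of "replace the top, then sift down" (hypothetical toy code over an
int min-heap, not PostgreSQL's binaryheap.c, which stores Datums and uses a
caller-supplied comparator): replacing the root itself costs no comparisons,
and the sift-down that follows descends at most about log2(N) levels.

/*
 * Toy illustration of "replace the top, then sift down" on an int min-heap.
 * Hypothetical code, not PostgreSQL's binaryheap.c.
 */
#include <stdio.h>

static void
sift_down(int *heap, int n, int i)
{
	for (;;)
	{
		int		left = 2 * i + 1;
		int		right = 2 * i + 2;
		int		smallest = i;
		int		tmp;

		if (left < n && heap[left] < heap[smallest])
			smallest = left;
		if (right < n && heap[right] < heap[smallest])
			smallest = right;
		if (smallest == i)
			break;				/* heap property restored */

		/* swap and descend one level; at most ~log2(n) levels in total */
		tmp = heap[i];
		heap[i] = heap[smallest];
		heap[smallest] = tmp;
		i = smallest;
	}
}

/*
 * Replace the root with a new value.  The replacement itself needs no
 * comparisons; only the sift-down that follows does.
 */
static void
replace_first(int *heap, int n, int newval)
{
	heap[0] = newval;
	sift_down(heap, n, 0);
}

int
main(void)
{
	int			heap[] = {1, 3, 2, 7, 5};	/* already a valid min-heap */

	replace_first(heap, 5, 6);	/* next tuple from the same input stream */
	printf("new top: %d\n", heap[0]);	/* prints 2 */
	return 0;
}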

Looking at the plan I realize that this is happening because of wrong costing
for Gather Merge. Here in the plan we can see that the row count estimated by
Gather Merge is wrong. This is because the earlier patch had GM using
rows = subpath->rows, which is not right as the subpath is a partial path. So
we need to multiply it by the number of workers. The attached patch also fixes
this issue. I also ran the TPC-H benchmark with the patch and the results
are the same as earlier.

In create_grouping_paths:
+ double total_groups = gmpath->rows *
gmpath->parallel_workers;

This hides a variable of the same name in the enclosing scope. Maybe confusing?

In some other places like create_ordered_paths:
+ double total_groups = path->rows * path->parallel_workers;

Though it probably made sense to use this variable name in
create_grouping_paths, wouldn't total_rows be better here?

It feels weird to be working back to a total row count estimate from
the partial one by simply multiplying by path->parallel_workers.
Gather Merge will underestimate the total rows when parallel_workers <
4, if using partial row estimates ultimately from cost_seqscan which
assume some leader contribution. I don't have a better idea though.
Reversing cost_seqscan's logic certainly doesn't seem right. I don't
know how to make them agree on the leader's contribution AND give
principled answers, since there seems to be some kind of cyclic
dependency in the costing logic (cost_seqscan really needs to be given
a leader contribution estimate from its superpath which knows whether
it will allow the leader to pull tuples greedily/fairly or not, but
that superpath hasn't been created yet; cost_gather_merge needs the
row count from its subpath). Or maybe I'm just confused.
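
To put rough numbers on that underestimate (assuming the 9.6-era heuristic in
cost_seqscan, where the parallel divisor is parallel_workers plus a leader
contribution of 1.0 - 0.3 * parallel_workers when that is positive): with 2
workers the partial estimate is total_rows / 2.4, so multiplying back by 2
recovers only about 83% of the total; with 3 workers the divisor is 3.1, so
multiplying by 3 recovers about 97%; at 4 or more workers the leader term
drops out and the multiplication is exact.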

--
Thomas Munro
http://www.enterprisedb.com


#20Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Thomas Munro (#19)
2 attachment(s)
Re: Gather Merge

On Mon, Nov 14, 2016 at 3:51 PM, Thomas Munro <thomas.munro@enterprisedb.com>
wrote:

On Sat, Nov 12, 2016 at 1:56 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

On Fri, Nov 4, 2016 at 8:30 AM, Thomas Munro <thomas.munro@enterprisedb.com>
wrote:

+ * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group

+ * Portions Copyright (c) 1994, Regents of the University of California

Shouldn't this say just "(c) 2016, PostgreSQL Global Development
Group"?

Fixed.

The year also needs updating to 2016 in nodeGatherMerge.h.

Oops sorry, fixed now.

+ /* Per-tuple heap maintenance cost */
+ run_cost += path->path.rows * comparison_cost * 2.0 * logN;

Why multiply by two? The comment above this code says "about log2(N)
comparisons to delete the top heap entry and another log2(N)
comparisons to insert its successor". In fact gather_merge_getnext
calls binaryheap_replace_first, which replaces the top element without
any comparisons at all and then performs a sift-down in log2(N)
comparisons to find its new position. There is no per-tuple "delete"
involved. We "replace" the top element with the value it already had,
just to trigger the sift-down, because we know that our comparator
function might have a new opinion of the sort order of this element.
Very clever! The comment and the 2.0 factor in cost_gather_merge seem
to be wrong though -- or am I misreading the code?

See cost_merge_append.

That just got tweaked in commit 34ca0905.

Fixed.

Looking at the plan I realize that this is happening because of wrong costing
for Gather Merge. Here in the plan we can see that the row count estimated by
Gather Merge is wrong. This is because the earlier patch had GM using
rows = subpath->rows, which is not right as the subpath is a partial path. So
we need to multiply it by the number of workers. The attached patch also fixes
this issue. I also ran the TPC-H benchmark with the patch and the results
are the same as earlier.

In create_grouping_paths:
+ double total_groups = gmpath->rows *
gmpath->parallel_workers;

This hides a variable of the same name in the enclosing scope. Maybe
confusing?

In some other places like create_ordered_paths:
+ double total_groups = path->rows * path->parallel_workers;

Though it probably made sense to use this variable name in
create_grouping_paths, wouldn't total_rows be better here?

Initially I just copied that from the other places. I agree with you that
in create_ordered_paths, total_rows makes more sense.

It feels weird to be working back to a total row count estimate from
the partial one by simply multiplying by path->parallel_workers.
Gather Merge will underestimate the total rows when parallel_workers <
4, if using partial row estimates ultimately from cost_seqscan which
assume some leader contribution. I don't have a better idea though.
Reversing cost_seqscan's logic certainly doesn't seem right. I don't
know how to make them agree on the leader's contribution AND give
principled answers, since there seems to be some kind of cyclic
dependency in the costing logic (cost_seqscan really needs to be given
a leader contribution estimate from its superpath which knows whether
it will allow the leader to pull tuples greedily/fairly or not, but
that superpath hasn't been created yet; cost_gather_merge needs the
row count from its subpath). Or maybe I'm just confused.

Yes, I agree with you. But we can't really make changes to cost_seqscan.
Another option I can think of is to just calculate the rows for gather merge
using the reverse of the formula used in cost_seqscan. Then we can completely
remove the rows argument from create_gather_merge_path(), and inside
create_gather_merge_path() calculate the total_rows using the same formula
used in cost_seqscan. This is working fine - but I am not quite sure about
the approach. So I attached that part of the changes as a separate patch.
Any suggestions?
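
For what it's worth, a minimal sketch of that reverse calculation, assuming
the divisor cost_seqscan currently applies (parallel_workers plus a leader
contribution of 1.0 - 0.3 * parallel_workers when positive); the standalone
function gather_merge_total_rows() and the main() driver are hypothetical,
purely for illustration and not part of the attached patch:

/*
 * Hypothetical illustration only: recover a total-row estimate from a
 * partial path's row estimate by re-applying the divisor that cost_seqscan
 * used to produce it.  Not the patch as posted.
 */
#include <stdio.h>

static double
gather_merge_total_rows(double partial_rows, int parallel_workers)
{
	double		parallel_divisor = parallel_workers;
	double		leader_contribution = 1.0 - 0.3 * parallel_workers;

	/* The leader only helps while it is not busy draining tuple queues. */
	if (leader_contribution > 0)
		parallel_divisor += leader_contribution;

	/* cost_seqscan estimated rows / parallel_divisor, so multiply back. */
	return partial_rows * parallel_divisor;
}

int
main(void)
{
	printf("%.0f\n", gather_merge_total_rows(40000.0, 4));	/* 160000 */
	printf("%.0f\n", gather_merge_total_rows(40000.0, 2));	/* 96000 */
	return 0;
}

The obvious downside is the duplication: if cost_seqscan (or any other
partial path's costing) changes how it accounts for the leader, this reverse
formula silently drifts out of sync with it.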

--
Rushabh Lathia
www.EnterpriseDB.com

Attachments:

gather_merge_v4_minor_changes.patch (application/x-download)
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 0a669d9..73cfe28 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -881,6 +881,9 @@ ExplainNode(PlanState *planstate, List *ancestors,
 		case T_Gather:
 			pname = sname = "Gather";
 			break;
+		case T_GatherMerge:
+			pname = sname = "Gather Merge";
+			break;
 		case T_IndexScan:
 			pname = sname = "Index Scan";
 			break;
@@ -1370,6 +1373,26 @@ ExplainNode(PlanState *planstate, List *ancestors,
 					ExplainPropertyBool("Single Copy", gather->single_copy, es);
 			}
 			break;
+		case T_GatherMerge:
+			{
+				GatherMerge *gm = (GatherMerge *) plan;
+
+				show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+				if (plan->qual)
+					show_instrumentation_count("Rows Removed by Filter", 1,
+											   planstate, es);
+				ExplainPropertyInteger("Workers Planned",
+									   gm->num_workers, es);
+				if (es->analyze)
+				{
+					int			nworkers;
+
+					nworkers = ((GatherMergeState *) planstate)->nworkers_launched;
+					ExplainPropertyInteger("Workers Launched",
+										   nworkers, es);
+				}
+			}
+			break;
 		case T_FunctionScan:
 			if (es->verbose)
 			{
diff --git a/src/backend/executor/Makefile b/src/backend/executor/Makefile
index 51edd4c..7e2f4e2 100644
--- a/src/backend/executor/Makefile
+++ b/src/backend/executor/Makefile
@@ -19,7 +19,7 @@ OBJS = execAmi.o execCurrent.o execGrouping.o execIndexing.o execJunk.o \
        nodeBitmapAnd.o nodeBitmapOr.o \
        nodeBitmapHeapscan.o nodeBitmapIndexscan.o nodeCustom.o nodeGather.o \
        nodeHash.o nodeHashjoin.o nodeIndexscan.o nodeIndexonlyscan.o \
-       nodeLimit.o nodeLockRows.o \
+       nodeLimit.o nodeLockRows.o nodeGatherMerge.o \
        nodeMaterial.o nodeMergeAppend.o nodeMergejoin.o nodeModifyTable.o \
        nodeNestloop.o nodeFunctionscan.o nodeRecursiveunion.o nodeResult.o \
        nodeSamplescan.o nodeSeqscan.o nodeSetOp.o nodeSort.o nodeUnique.o \
diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index 554244f..45b36af 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -101,6 +101,7 @@
 #include "executor/nodeModifyTable.h"
 #include "executor/nodeNestloop.h"
 #include "executor/nodeGather.h"
+#include "executor/nodeGatherMerge.h"
 #include "executor/nodeRecursiveunion.h"
 #include "executor/nodeResult.h"
 #include "executor/nodeSamplescan.h"
@@ -314,6 +315,11 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
 												  estate, eflags);
 			break;
 
+		case T_GatherMerge:
+			result = (PlanState *) ExecInitGatherMerge((GatherMerge *) node,
+													   estate, eflags);
+			break;
+
 		case T_Hash:
 			result = (PlanState *) ExecInitHash((Hash *) node,
 												estate, eflags);
@@ -515,6 +521,10 @@ ExecProcNode(PlanState *node)
 			result = ExecGather((GatherState *) node);
 			break;
 
+		case T_GatherMergeState:
+			result = ExecGatherMerge((GatherMergeState *) node);
+			break;
+
 		case T_HashState:
 			result = ExecHash((HashState *) node);
 			break;
@@ -673,6 +683,10 @@ ExecEndNode(PlanState *node)
 			ExecEndGather((GatherState *) node);
 			break;
 
+		case T_GatherMergeState:
+			ExecEndGatherMerge((GatherMergeState *) node);
+			break;
+
 		case T_IndexScanState:
 			ExecEndIndexScan((IndexScanState *) node);
 			break;
@@ -806,6 +820,9 @@ ExecShutdownNode(PlanState *node)
 		case T_GatherState:
 			ExecShutdownGather((GatherState *) node);
 			break;
+		case T_GatherMergeState:
+			ExecShutdownGatherMerge((GatherMergeState *) node);
+			break;
 		default:
 			break;
 	}
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
new file mode 100644
index 0000000..4b6410b
--- /dev/null
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -0,0 +1,723 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.c
+ *	  routines to handle GatherMerge nodes.
+ *
+ * Portions Copyright (c) 1996-2016, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/backend/executor/nodeGatherMerge.c
+ *
+ *-------------------------------------------------------------------------
+ */
+/* INTERFACE ROUTINES
+ *		ExecInitGatherMerge		- initialize the GatherMerge node
+ *		ExecGatherMerge			- retrieve the next tuple from the node
+ *		ExecEndGatherMerge		- shut down the GatherMerge node
+ *		ExecReScanGatherMerge	- rescan the GatherMerge node
+ */
+
+#include "postgres.h"
+
+#include "access/relscan.h"
+#include "access/xact.h"
+#include "executor/execdebug.h"
+#include "executor/execParallel.h"
+#include "executor/nodeGatherMerge.h"
+#include "executor/nodeSubplan.h"
+#include "executor/tqueue.h"
+#include "lib/binaryheap.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+#include "utils/rel.h"
+
+/*
+ * Tuple array for each worker
+ */
+typedef struct GMReaderTupleBuffer
+{
+	HeapTuple  *tuple;
+	int			readCounter;
+	int			nTuples;
+	bool		done;
+}	GMReaderTupleBuffer;
+
+/*
+ * Tuple array size.  Performance testing showed that the benefit of an
+ * array size greater than 10 is not worth the additional memory consumed
+ * by a larger tuple array.
+ */
+#define MAX_TUPLE_STORE 10
+
+static int32 heap_compare_slots(Datum a, Datum b, void *arg);
+static TupleTableSlot *gather_merge_getnext(GatherMergeState *gm_state);
+static HeapTuple gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait, bool *done);
+static void gather_merge_init(GatherMergeState *gm_state);
+static void ExecShutdownGatherMergeWorkers(GatherMergeState *node);
+static bool gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait);
+static void form_tuple_array(GatherMergeState *gm_state, int reader);
+
+/* ----------------------------------------------------------------
+ *		ExecInitGatherMerge
+ * ----------------------------------------------------------------
+ */
+GatherMergeState *
+ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags)
+{
+	GatherMergeState *gm_state;
+	Plan	   *outerNode;
+	bool		hasoid;
+	TupleDesc	tupDesc;
+
+	/* Gather merge node doesn't have innerPlan node. */
+	Assert(innerPlan(node) == NULL);
+
+	/*
+	 * create state structure
+	 */
+	gm_state = makeNode(GatherMergeState);
+	gm_state->ps.plan = (Plan *) node;
+	gm_state->ps.state = estate;
+
+	/*
+	 * Miscellaneous initialization
+	 *
+	 * create expression context for node
+	 */
+	ExecAssignExprContext(estate, &gm_state->ps);
+
+	/*
+	 * initialize child expressions
+	 */
+	gm_state->ps.targetlist = (List *)
+		ExecInitExpr((Expr *) node->plan.targetlist,
+					 (PlanState *) gm_state);
+	gm_state->ps.qual = (List *)
+		ExecInitExpr((Expr *) node->plan.qual,
+					 (PlanState *) gm_state);
+
+	/*
+	 * tuple table initialization
+	 */
+	ExecInitResultTupleSlot(estate, &gm_state->ps);
+
+	/*
+	 * now initialize outer plan
+	 */
+	outerNode = outerPlan(node);
+	outerPlanState(gm_state) = ExecInitNode(outerNode, estate, eflags);
+
+	gm_state->ps.ps_TupFromTlist = false;
+
+	/*
+	 * Initialize result tuple type and projection info.
+	 */
+	ExecAssignResultTypeFromTL(&gm_state->ps);
+	ExecAssignProjectionInfo(&gm_state->ps, NULL);
+
+	gm_state->gm_initialized = false;
+
+	/*
+	 * initialize sort-key information
+	 */
+	if (node->numCols)
+	{
+		int			i;
+
+		gm_state->gm_nkeys = node->numCols;
+		gm_state->gm_sortkeys = palloc0(sizeof(SortSupportData) * node->numCols);
+		for (i = 0; i < node->numCols; i++)
+		{
+			SortSupport sortKey = gm_state->gm_sortkeys + i;
+
+			sortKey->ssup_cxt = CurrentMemoryContext;
+			sortKey->ssup_collation = node->collations[i];
+			sortKey->ssup_nulls_first = node->nullsFirst[i];
+			sortKey->ssup_attno = node->sortColIdx[i];
+
+			/*
+			 * We don't perform abbreviated key conversion here, for the same
+			 * reasons that it isn't used in MergeAppend
+			 */
+			sortKey->abbreviate = false;
+
+			PrepareSortSupportFromOrderingOp(node->sortOperators[i], sortKey);
+		}
+	}
+
+	/*
+	 * store the tuple descriptor into gather merge state, so we can use it
+	 * later while initializing the gather merge slots.
+	 */
+	if (!ExecContextForcesOids(&gm_state->ps, &hasoid))
+		hasoid = false;
+	tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid);
+	gm_state->tupDesc = tupDesc;
+
+	return gm_state;
+}
+
+/* ----------------------------------------------------------------
+ *		ExecGatherMerge(node)
+ *
+ *		Scans the relation via multiple workers and returns
+ *		the next qualifying tuple.
+ * ----------------------------------------------------------------
+ */
+TupleTableSlot *
+ExecGatherMerge(GatherMergeState *node)
+{
+	int			i;
+	TupleTableSlot *slot;
+	TupleTableSlot *resultSlot;
+	ExprDoneCond isDone;
+	ExprContext *econtext;
+
+	/*
+	 * Initialize the parallel context and workers on first execution. We do
+	 * this on first execution rather than during node initialization, as it
+	 * needs to allocate a large dynamic shared memory segment, so it is
+	 * better to do that only if it is really needed.
+	 */
+	if (!node->initialized)
+	{
+		EState	   *estate = node->ps.state;
+		GatherMerge *gm = (GatherMerge *) node->ps.plan;
+
+		/*
+		 * Sometimes we might have to run without parallelism; but if parallel
+		 * mode is active then we can try to fire up some workers.
+		 */
+		if (gm->num_workers > 0 && IsInParallelMode())
+		{
+			ParallelContext *pcxt;
+
+			/* Initialize the workers required to execute Gather node. */
+			if (!node->pei)
+				node->pei = ExecInitParallelPlan(node->ps.lefttree,
+												 estate,
+												 gm->num_workers);
+
+			/*
+			 * Register backend workers. We might not get as many as we
+			 * requested, or indeed any at all.
+			 */
+			pcxt = node->pei->pcxt;
+			LaunchParallelWorkers(pcxt);
+			node->nworkers_launched = pcxt->nworkers_launched;
+
+			/* Set up tuple queue readers to read the results. */
+			if (pcxt->nworkers_launched > 0)
+			{
+				node->nreaders = 0;
+				node->reader =
+					palloc(pcxt->nworkers_launched * sizeof(TupleQueueReader *));
+
+				Assert(gm->numCols);
+
+				for (i = 0; i < pcxt->nworkers_launched; ++i)
+				{
+					shm_mq_set_handle(node->pei->tqueue[i],
+									  pcxt->worker[i].bgwhandle);
+					node->reader[node->nreaders++] =
+						CreateTupleQueueReader(node->pei->tqueue[i],
+											   node->tupDesc);
+				}
+			}
+			else
+			{
+				/* No workers?	Then never mind. */
+				ExecShutdownGatherMergeWorkers(node);
+			}
+		}
+
+		/* always allow the leader to participate in gather merge */
+		node->need_to_scan_locally = true;
+		node->initialized = true;
+	}
+
+	/*
+	 * Check to see if we're still projecting out tuples from a previous scan
+	 * tuple (because there is a function-returning-set in the projection
+	 * expressions).  If so, try to project another one.
+	 */
+	if (node->ps.ps_TupFromTlist)
+	{
+		resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+		if (isDone == ExprMultipleResult)
+			return resultSlot;
+		/* Done with that source tuple... */
+		node->ps.ps_TupFromTlist = false;
+	}
+
+	/*
+	 * Reset per-tuple memory context to free any expression evaluation
+	 * storage allocated in the previous tuple cycle.  Note we can't do this
+	 * until we're done projecting.
+	 */
+	econtext = node->ps.ps_ExprContext;
+	ResetExprContext(econtext);
+
+	/* Get and return the next tuple, projecting if necessary. */
+	for (;;)
+	{
+		/*
+		 * Get next tuple, either from one of our workers, or by running the
+		 * plan ourselves.
+		 */
+		slot = gather_merge_getnext(node);
+		if (TupIsNull(slot))
+			return NULL;
+
+		/*
+		 * form the result tuple using ExecProject(), and return it --- unless
+		 * the projection produces an empty set, in which case we must loop
+		 * back around for another tuple
+		 */
+		econtext->ecxt_outertuple = slot;
+		resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+
+		if (isDone != ExprEndResult)
+		{
+			node->ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
+			return resultSlot;
+		}
+	}
+
+	return slot;
+}
+
+/* ----------------------------------------------------------------
+ *		ExecEndGatherMerge
+ *
+ *		frees any storage allocated through C routines.
+ * ----------------------------------------------------------------
+ */
+void
+ExecEndGatherMerge(GatherMergeState *node)
+{
+	ExecShutdownGatherMerge(node);
+	ExecFreeExprContext(&node->ps);
+	ExecClearTuple(node->ps.ps_ResultTupleSlot);
+	ExecEndNode(outerPlanState(node));
+}
+
+/* ----------------------------------------------------------------
+ *		ExecShutdownGatherMerge
+ *
+ *		Destroy the setup for parallel workers including parallel context.
+ *		Collect all the stats after workers are stopped, else some work
+ *		done by workers won't be accounted.
+ * ----------------------------------------------------------------
+ */
+void
+ExecShutdownGatherMerge(GatherMergeState *node)
+{
+	ExecShutdownGatherMergeWorkers(node);
+
+	/* Now destroy the parallel context. */
+	if (node->pei != NULL)
+	{
+		ExecParallelCleanup(node->pei);
+		node->pei = NULL;
+	}
+}
+
+/* ----------------------------------------------------------------
+ *		ExecReScanGatherMerge
+ *
+ *		Re-initialize the workers and rescan the relation via them.
+ * ----------------------------------------------------------------
+ */
+void
+ExecReScanGatherMerge(GatherMergeState *node)
+{
+	/*
+	 * Re-initialize the parallel workers to perform a rescan of the relation.
+	 * We want to gracefully shut down all the workers so that they are able
+	 * to propagate any error or other information to the master backend
+	 * before dying.  The parallel context will be reused for the rescan.
+	 */
+	ExecShutdownGatherMergeWorkers(node);
+
+	node->initialized = false;
+
+	if (node->pei)
+		ExecParallelReinitialize(node->pei);
+
+	ExecReScan(node->ps.lefttree);
+}
+
+/* ----------------------------------------------------------------
+ *		ExecShutdownGatherMergeWorkers
+ *
+ *		Destroy the parallel workers.  Collect all the stats after
+ *		workers are stopped, else some work done by workers won't be
+ *		accounted.
+ * ----------------------------------------------------------------
+ */
+static void
+ExecShutdownGatherMergeWorkers(GatherMergeState *node)
+{
+	/* Shut down tuple queue readers before shutting down workers. */
+	if (node->reader != NULL)
+	{
+		int			i;
+
+		for (i = 0; i < node->nreaders; ++i)
+			if (node->reader[i])
+				DestroyTupleQueueReader(node->reader[i]);
+
+		pfree(node->reader);
+		node->reader = NULL;
+	}
+
+	/* Now shut down the workers. */
+	if (node->pei != NULL)
+		ExecParallelFinish(node->pei);
+}
+
+/*
+ * Initialize the Gather merge tuple read.
+ *
+ * Pull at least a single tuple from each worker + leader and set up the heap.
+ */
+static void
+gather_merge_init(GatherMergeState *gm_state)
+{
+	int			nreaders = gm_state->nreaders;
+	bool		initialize = true;
+	int			i;
+
+	/*
+	 * Allocate gm_slots for the number of workers plus one extra slot for
+	 * the leader.  The last slot is always for the leader.  The leader
+	 * always calls ExecProcNode() to read a tuple, which returns a
+	 * TupleTableSlot that is later assigned directly to that gm_slot, so
+	 * just initialize the leader's gm_slot to NULL.  For the other slots,
+	 * the code below calls ExecInitExtraTupleSlot(), which performs the
+	 * initialization of the worker slots.
+	 */
+	gm_state->gm_slots =
+		palloc((gm_state->nreaders + 1) * sizeof(TupleTableSlot *));
+	gm_state->gm_slots[gm_state->nreaders] = NULL;
+
+	/* Initialize the tuple slot and tuple array for each worker */
+	gm_state->gm_tuple_buffers =
+		(GMReaderTupleBuffer *) palloc0(sizeof(GMReaderTupleBuffer) * (gm_state->nreaders + 1));
+	for (i = 0; i < gm_state->nreaders; i++)
+	{
+		/* Allocate the tuple array with MAX_TUPLE_STORE size */
+		gm_state->gm_tuple_buffers[i].tuple =
+			(HeapTuple *) palloc0(sizeof(HeapTuple) * MAX_TUPLE_STORE);
+
+		/* Initialize slot for worker */
+		gm_state->gm_slots[i] = ExecInitExtraTupleSlot(gm_state->ps.state);
+		ExecSetSlotDescriptor(gm_state->gm_slots[i],
+							  gm_state->tupDesc);
+	}
+
+	/* Allocate the resources for the sort */
+	gm_state->gm_heap = binaryheap_allocate(gm_state->nreaders + 1, heap_compare_slots, gm_state);
+
+	/*
+	 * First, try to read a tuple for each worker (including the leader) in
+	 * nowait mode, so that we initialize reading from each worker as well as
+	 * the leader.  After this, if all active workers are unable to produce a
+	 * tuple, re-read, this time in wait mode.  For workers that were able to
+	 * produce a tuple in the earlier loop and are still active, just try to
+	 * fill the tuple array if more tuples are available.
+	 */
+reread:
+	for (i = 0; i < nreaders + 1; i++)
+	{
+		if (!gm_state->gm_tuple_buffers[i].done &&
+			(TupIsNull(gm_state->gm_slots[i]) ||
+			 gm_state->gm_slots[i]->tts_isempty))
+		{
+			if (gather_merge_readnext(gm_state, i, initialize))
+			{
+				binaryheap_add_unordered(gm_state->gm_heap,
+										 Int32GetDatum(i));
+			}
+		}
+		else
+			form_tuple_array(gm_state, i);
+	}
+	initialize = false;
+
+	for (i = 0; i < nreaders; i++)
+		if (!gm_state->gm_tuple_buffers[i].done &&
+			(TupIsNull(gm_state->gm_slots[i]) ||
+			 gm_state->gm_slots[i]->tts_isempty))
+			goto reread;
+
+	binaryheap_build(gm_state->gm_heap);
+	gm_state->gm_initialized = true;
+}
+
+/*
+ * Clear out the tuple table slot for each gather merge reader,
+ * and return one of the cleared slots.
+ */
+static TupleTableSlot *
+gather_merge_clear_slots(GatherMergeState *gm_state)
+{
+	int			i;
+
+	for (i = 0; i < gm_state->nreaders; i++)
+	{
+		pfree(gm_state->gm_tuple_buffers[i].tuple);
+		gm_state->gm_slots[i] = ExecClearTuple(gm_state->gm_slots[i]);
+	}
+
+	/* Free tuple array as we don't need it any more */
+	pfree(gm_state->gm_tuple_buffers);
+	/* Free the binaryheap, which was created for sort */
+	binaryheap_free(gm_state->gm_heap);
+
+	/* return any clear slot */
+	return gm_state->gm_slots[0];
+}
+
+/*
+ * Read the next tuple for gather merge.
+ *
+ * Fetch the sorted tuple out of the heap.
+ */
+static TupleTableSlot *
+gather_merge_getnext(GatherMergeState *gm_state)
+{
+	int			i;
+
+	/*
+	 * First time through: pull the first tuple from each participant, and set
+	 * up the heap.
+	 */
+	if (gm_state->gm_initialized == false)
+		gather_merge_init(gm_state);
+	else
+	{
+		/*
+		 * Otherwise, pull the next tuple from whichever participant we
+		 * returned from last time, and reinsert the index into the heap,
+		 * because it might now compare differently against the existing
+		 * elements of the heap.
+		 */
+		i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+
+		if (gather_merge_readnext(gm_state, i, false))
+			binaryheap_replace_first(gm_state->gm_heap, Int32GetDatum(i));
+		else
+			(void) binaryheap_remove_first(gm_state->gm_heap);
+	}
+
+	if (binaryheap_empty(gm_state->gm_heap))
+	{
+		/* All the queues are exhausted, and so is the heap */
+		return gather_merge_clear_slots(gm_state);
+	}
+	else
+	{
+		i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+		return gm_state->gm_slots[i];
+	}
+
+	return gather_merge_clear_slots(gm_state);
+}
+
+/*
+ * Read the tuple for given reader in nowait mode, and form the tuple array.
+ */
+static void
+form_tuple_array(GatherMergeState *gm_state, int reader)
+{
+	GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+	int			i;
+
+	/* Last slot is for leader and we don't build tuple array for leader */
+	if (reader == gm_state->nreaders)
+		return;
+
+	/*
+	 * We are here because we have already read all the tuples from the
+	 * tuple array, so reset the counters to zero.
+	 */
+	if (tuple_buffer->nTuples == tuple_buffer->readCounter)
+		tuple_buffer->nTuples = tuple_buffer->readCounter = 0;
+
+	/* Tuple array is already full? */
+	if (tuple_buffer->nTuples == MAX_TUPLE_STORE)
+		return;
+
+	for (i = tuple_buffer->nTuples; i < MAX_TUPLE_STORE; i++)
+	{
+		tuple_buffer->tuple[i] = heap_copytuple(gm_readnext_tuple(gm_state,
+																  reader,
+																  false,
+													   &tuple_buffer->done));
+		if (!HeapTupleIsValid(tuple_buffer->tuple[i]))
+			break;
+		tuple_buffer->nTuples++;
+	}
+}
+
+/*
+ * Attempt to read tuple for the given reader and store it into reader
+ * tuple slot.
+ *
+ * If the worker's tuple array contains any tuples, just read the next tuple
+ * from the tuple array.  Otherwise read a tuple from the queue and also
+ * attempt to refill the tuple array.
+ *
+ * For gather merge we need to refill the slot from which we returned the
+ * previous tuple, so the tuple must be read in wait mode.  Only during the
+ * initialization phase do we try to read tuples in no-wait mode, since we
+ * want to initialize all the readers.  See gather_merge_init() for details.
+ *
+ * Returns true if a tuple was found for the reader, otherwise false.
+ */
+static bool
+gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait)
+{
+	HeapTuple	tup = NULL;
+
+	/* Are we here for the leader? */
+	if (gm_state->nreaders == reader)
+	{
+		if (gm_state->need_to_scan_locally)
+		{
+			PlanState  *outerPlan = outerPlanState(gm_state);
+			TupleTableSlot *outerTupleSlot;
+
+			outerTupleSlot = ExecProcNode(outerPlan);
+
+			if (!TupIsNull(outerTupleSlot))
+			{
+				gm_state->gm_slots[reader] = outerTupleSlot;
+				return true;
+			}
+			gm_state->gm_tuple_buffers[reader].done = true;
+			gm_state->need_to_scan_locally = false;
+		}
+		return false;
+	}
+	/* Does tuple array have any available tuples? */
+	else if (gm_state->gm_tuple_buffers[reader].nTuples >
+			 gm_state->gm_tuple_buffers[reader].readCounter)
+	{
+		GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+
+		tup = tuple_buffer->tuple[tuple_buffer->readCounter++];
+	}
+	/* reader exhausted? */
+	else if (gm_state->gm_tuple_buffers[reader].done)
+	{
+		DestroyTupleQueueReader(gm_state->reader[reader]);
+		gm_state->reader[reader] = NULL;
+		return false;
+	}
+	else
+	{
+		GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+
+		tup = heap_copytuple(gm_readnext_tuple(gm_state,
+											   reader,
+											   nowait,
+											   &tuple_buffer->done));
+
+		/*
+		 * try to read more tuples in nowait mode and store them into the
+		 * tuple array.
+		 */
+		if (HeapTupleIsValid(tup))
+			form_tuple_array(gm_state, reader);
+		else
+			return false;
+	}
+
+	Assert(HeapTupleIsValid(tup));
+
+	/* Build the TupleTableSlot for the given tuple */
+	ExecStoreTuple(tup,			/* tuple to store */
+				   gm_state->gm_slots[reader],	/* slot in which to store the
+												 * tuple */
+				   InvalidBuffer,		/* buffer associated with this tuple */
+				   true);		/* pfree this pointer if not from heap */
+
+	return true;
+}
+
+/*
+ * Attempt to read a tuple from given reader.
+ */
+static HeapTuple
+gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait, bool *done)
+{
+	TupleQueueReader *reader;
+	HeapTuple	tup = NULL;
+	MemoryContext oldContext;
+	MemoryContext tupleContext = gm_state->ps.ps_ExprContext->ecxt_per_tuple_memory;
+
+	if (done != NULL)
+		*done = false;
+
+	/* Check for async events, particularly messages from workers. */
+	CHECK_FOR_INTERRUPTS();
+
+	/* Attempt to read a tuple. */
+	reader = gm_state->reader[nreader];
+	/* Run TupleQueueReaders in per-tuple context */
+	oldContext = MemoryContextSwitchTo(tupleContext);
+	tup = TupleQueueReaderNext(reader, nowait, done);
+	MemoryContextSwitchTo(oldContext);
+
+	return tup;
+}
+
+/*
+ * We have one slot for each item in the heap array.  We use SlotNumber
+ * to store slot indexes.  This doesn't actually provide any formal
+ * type-safety, but it makes the code more self-documenting.
+ */
+typedef int32 SlotNumber;
+
+/*
+ * Compare the tuples in the two given slots.
+ */
+static int32
+heap_compare_slots(Datum a, Datum b, void *arg)
+{
+	GatherMergeState *node = (GatherMergeState *) arg;
+	SlotNumber	slot1 = DatumGetInt32(a);
+	SlotNumber	slot2 = DatumGetInt32(b);
+
+	TupleTableSlot *s1 = node->gm_slots[slot1];
+	TupleTableSlot *s2 = node->gm_slots[slot2];
+	int			nkey;
+
+	Assert(!TupIsNull(s1));
+	Assert(!TupIsNull(s2));
+
+	for (nkey = 0; nkey < node->gm_nkeys; nkey++)
+	{
+		SortSupport sortKey = node->gm_sortkeys + nkey;
+		AttrNumber	attno = sortKey->ssup_attno;
+		Datum		datum1,
+					datum2;
+		bool		isNull1,
+					isNull2;
+		int			compare;
+
+		datum1 = slot_getattr(s1, attno, &isNull1);
+		datum2 = slot_getattr(s2, attno, &isNull2);
+
+		compare = ApplySortComparator(datum1, isNull1,
+									  datum2, isNull2,
+									  sortKey);
+		if (compare != 0)
+			return -compare;
+	}
+	return 0;
+}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 04e49b7..2f52833 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -341,6 +341,31 @@ _copyGather(const Gather *from)
 	return newnode;
 }
 
+/*
+ * _copyGatherMerge
+ */
+static GatherMerge *
+_copyGatherMerge(const GatherMerge *from)
+{
+	GatherMerge	   *newnode = makeNode(GatherMerge);
+
+	/*
+	 * copy node superclass fields
+	 */
+	CopyPlanFields((const Plan *) from, (Plan *) newnode);
+
+	/*
+	 * copy remainder of node
+	 */
+	COPY_SCALAR_FIELD(num_workers);
+	COPY_SCALAR_FIELD(numCols);
+	COPY_POINTER_FIELD(sortColIdx, from->numCols * sizeof(AttrNumber));
+	COPY_POINTER_FIELD(sortOperators, from->numCols * sizeof(Oid));
+	COPY_POINTER_FIELD(collations, from->numCols * sizeof(Oid));
+	COPY_POINTER_FIELD(nullsFirst, from->numCols * sizeof(bool));
+
+	return newnode;
+}
 
 /*
  * CopyScanFields
@@ -4356,6 +4381,9 @@ copyObject(const void *from)
 		case T_Gather:
 			retval = _copyGather(from);
 			break;
+		case T_GatherMerge:
+			retval = _copyGatherMerge(from);
+			break;
 		case T_SeqScan:
 			retval = _copySeqScan(from);
 			break;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 748b687..ac36e48 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -447,6 +447,35 @@ _outGather(StringInfo str, const Gather *node)
 }
 
 static void
+_outGatherMerge(StringInfo str, const GatherMerge *node)
+{
+	int		i;
+
+	WRITE_NODE_TYPE("GATHERMERGE");
+
+	_outPlanInfo(str, (const Plan *) node);
+
+	WRITE_INT_FIELD(num_workers);
+	WRITE_INT_FIELD(numCols);
+
+	appendStringInfoString(str, " :sortColIdx");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %d", node->sortColIdx[i]);
+
+	appendStringInfoString(str, " :sortOperators");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %u", node->sortOperators[i]);
+
+	appendStringInfoString(str, " :collations");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %u", node->collations[i]);
+
+	appendStringInfoString(str, " :nullsFirst");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %s", booltostr(node->nullsFirst[i]));
+}
+
+static void
 _outScan(StringInfo str, const Scan *node)
 {
 	WRITE_NODE_TYPE("SCAN");
@@ -1964,6 +1993,17 @@ _outLimitPath(StringInfo str, const LimitPath *node)
 }
 
 static void
+_outGatherMergePath(StringInfo str, const GatherMergePath *node)
+{
+	WRITE_NODE_TYPE("GATHERMERGEPATH");
+
+	_outPathInfo(str, (const Path *) node);
+
+	WRITE_NODE_FIELD(subpath);
+	WRITE_INT_FIELD(num_workers);
+}
+
+static void
 _outNestPath(StringInfo str, const NestPath *node)
 {
 	WRITE_NODE_TYPE("NESTPATH");
@@ -3332,6 +3372,9 @@ outNode(StringInfo str, const void *obj)
 			case T_Gather:
 				_outGather(str, obj);
 				break;
+			case T_GatherMerge:
+				_outGatherMerge(str, obj);
+				break;
 			case T_Scan:
 				_outScan(str, obj);
 				break;
@@ -3659,6 +3702,9 @@ outNode(StringInfo str, const void *obj)
 			case T_LimitPath:
 				_outLimitPath(str, obj);
 				break;
+			case T_GatherMergePath:
+				_outGatherMergePath(str, obj);
+				break;
 			case T_NestPath:
 				_outNestPath(str, obj);
 				break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 917e6c8..77a452e 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -2075,6 +2075,26 @@ _readGather(void)
 }
 
 /*
+ * _readGatherMerge
+ */
+static GatherMerge *
+_readGatherMerge(void)
+{
+	READ_LOCALS(GatherMerge);
+
+	ReadCommonPlan(&local_node->plan);
+
+	READ_INT_FIELD(num_workers);
+	READ_INT_FIELD(numCols);
+	READ_ATTRNUMBER_ARRAY(sortColIdx, local_node->numCols);
+	READ_OID_ARRAY(sortOperators, local_node->numCols);
+	READ_OID_ARRAY(collations, local_node->numCols);
+	READ_BOOL_ARRAY(nullsFirst, local_node->numCols);
+
+	READ_DONE();
+}
+
+/*
  * _readHash
  */
 static Hash *
@@ -2477,6 +2497,8 @@ parseNodeString(void)
 		return_value = _readUnique();
 	else if (MATCH("GATHER", 6))
 		return_value = _readGather();
+	else if (MATCH("GATHERMERGE", 11))
+		return_value = _readGatherMerge();
 	else if (MATCH("HASH", 4))
 		return_value = _readHash();
 	else if (MATCH("SETOP", 5))
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index e42895d..e1bb6e2 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -126,6 +126,7 @@ bool		enable_nestloop = true;
 bool		enable_material = true;
 bool		enable_mergejoin = true;
 bool		enable_hashjoin = true;
+bool		enable_gathermerge = true;
 
 typedef struct
 {
@@ -391,6 +392,75 @@ cost_gather(GatherPath *path, PlannerInfo *root,
 }
 
 /*
+ * cost_gather_merge
+ *	  Determines and returns the cost of gather merge path.
+ *
+ * GatherMerge merges several pre-sorted input streams, using a heap that at
+ * any given instant holds the next tuple from each stream. If there are N
+ * streams, we need about N*log2(N) tuple comparisons to construct the heap at
+ * startup, and then for each output tuple, about log2(N) comparisons to
+ * replace the top heap entry with its successor from the same stream
+ * (binaryheap_replace_first just triggers a sift-down).
+ *
+ * The heap is never spilled to disk, since we assume N is not very large. So
+ * this is much simpler than cost_sort.
+ */
+void
+cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+				  RelOptInfo *rel, ParamPathInfo *param_info,
+				  Cost input_startup_cost, Cost input_total_cost,
+				  double *rows)
+{
+	Cost		startup_cost = 0;
+	Cost		run_cost = 0;
+	Cost		comparison_cost;
+	double		N;
+	double		logN;
+
+
+	/* Mark the path with the correct row estimate */
+	if (rows)
+		path->path.rows = *rows;
+	else if (param_info)
+		path->path.rows = param_info->ppi_rows;
+	else
+		path->path.rows = rel->rows;
+
+	if (!enable_gathermerge)
+		startup_cost += disable_cost;
+
+	/*
+	 * Consider the leader, as it always participates in the gather merge scan.
+	 * Avoid log(0)...
+	 */
+	N = (path->num_workers < 2) ? 2.0 : (double) path->num_workers + 1;
+	logN = LOG2(N);
+
+	/* Assumed cost per tuple comparison */
+	comparison_cost = 2.0 * cpu_operator_cost;
+
+	/* Heap creation cost */
+	startup_cost += comparison_cost * N * logN;
+
+	/* Per-tuple heap maintenance cost */
+	run_cost += path->path.rows * comparison_cost * logN;
+
+	/* small cost for heap management, like cost_merge_append */
+	run_cost += cpu_operator_cost * path->path.rows;
+
+	/*
+	 * Parallel setup and communication cost.  For Gather Merge, tuples need
+	 * to be read from each worker in wait mode, so consider some extra cost
+	 * for that.
+	 */
+	startup_cost += parallel_setup_cost;
+	run_cost += parallel_tuple_cost * path->path.rows;
+
+	path->path.startup_cost = startup_cost + input_startup_cost;
+	path->path.total_cost = (startup_cost + run_cost + input_total_cost);
+}
+
+/*
  * cost_index
  *	  Determines and returns the cost of scanning a relation using an index.
  *
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index ad49674..5fdc1bd 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -270,6 +270,10 @@ static ModifyTable *make_modifytable(PlannerInfo *root,
 				 List *resultRelations, List *subplans,
 				 List *withCheckOptionLists, List *returningLists,
 				 List *rowMarks, OnConflictExpr *onconflict, int epqParam);
+static GatherMerge *create_gather_merge_plan(PlannerInfo *root,
+											 GatherMergePath *best_path);
+static GatherMerge *make_gather_merge(List *qptlist, List *qpqual,
+									  int nworkers, Plan *subplan);
 
 
 /*
@@ -463,6 +467,10 @@ create_plan_recurse(PlannerInfo *root, Path *best_path, int flags)
 											  (LimitPath *) best_path,
 											  flags);
 			break;
+		case T_GatherMerge:
+			plan = (Plan *) create_gather_merge_plan(root,
+												(GatherMergePath *) best_path);
+			break;
 		default:
 			elog(ERROR, "unrecognized node type: %d",
 				 (int) best_path->pathtype);
@@ -2246,6 +2254,89 @@ create_limit_plan(PlannerInfo *root, LimitPath *best_path, int flags)
 	return plan;
 }
 
+/*
+ * create_gather_merge_plan
+ *
+ *	  Create a Gather merge plan for 'best_path' and (recursively)
+ *	  plans for its subpaths.
+ */
+static GatherMerge *
+create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
+{
+	GatherMerge *gm_plan;
+	Plan	   *subplan;
+	List	   *pathkeys = best_path->path.pathkeys;
+	int			numsortkeys;
+	AttrNumber *sortColIdx;
+	Oid		   *sortOperators;
+	Oid		   *collations;
+	bool	   *nullsFirst;
+
+	subplan = create_plan_recurse(root, best_path->subpath, CP_EXACT_TLIST);
+
+	gm_plan = make_gather_merge(subplan->targetlist,
+								NIL,
+								best_path->num_workers,
+								subplan);
+
+	copy_generic_path_info(&gm_plan->plan, &best_path->path);
+
+	if (pathkeys)
+	{
+		/* Compute sort column info, and adjust GatherMerge tlist as needed */
+		(void) prepare_sort_from_pathkeys(&gm_plan->plan, pathkeys,
+										  best_path->path.parent->relids,
+										  NULL,
+										  true,
+										  &gm_plan->numCols,
+										  &gm_plan->sortColIdx,
+										  &gm_plan->sortOperators,
+										  &gm_plan->collations,
+										  &gm_plan->nullsFirst);
+
+
+		/* Compute sort column info, and adjust subplan's tlist as needed */
+		subplan = prepare_sort_from_pathkeys(subplan, pathkeys,
+											 best_path->subpath->parent->relids,
+											 gm_plan->sortColIdx,
+											 false,
+											 &numsortkeys,
+											 &sortColIdx,
+											 &sortOperators,
+											 &collations,
+											 &nullsFirst);
+
+		/*
+		 * Check that we got the same sort key information.  We just Assert
+		 * that the sortops match, since those depend only on the pathkeys;
+		 * but it seems like a good idea to check the sort column numbers
+		 * explicitly, to ensure the tlists really do match up.
+		 */
+		Assert(numsortkeys == gm_plan->numCols);
+		if (memcmp(sortColIdx, gm_plan->sortColIdx,
+				   numsortkeys * sizeof(AttrNumber)) != 0)
+			elog(ERROR, "GatherMerge child's targetlist doesn't match GatherMerge");
+		Assert(memcmp(sortOperators, gm_plan->sortOperators,
+					  numsortkeys * sizeof(Oid)) == 0);
+		Assert(memcmp(collations, gm_plan->collations,
+					  numsortkeys * sizeof(Oid)) == 0);
+		Assert(memcmp(nullsFirst, gm_plan->nullsFirst,
+					  numsortkeys * sizeof(bool)) == 0);
+
+		/* Now, insert a Sort node if subplan isn't sufficiently ordered */
+		if (!pathkeys_contained_in(pathkeys, best_path->subpath->pathkeys))
+			subplan = (Plan *) make_sort(subplan, numsortkeys,
+										 sortColIdx, sortOperators,
+										 collations, nullsFirst);
+
+		gm_plan->plan.lefttree = subplan;
+	}
+
+	/* use parallel mode for parallel plans. */
+	root->glob->parallelModeNeeded = true;
+
+	return gm_plan;
+}
 
 /*****************************************************************************
  *
@@ -5909,6 +6000,25 @@ make_gather(List *qptlist,
 	return node;
 }
 
+static GatherMerge *
+make_gather_merge(List *qptlist,
+				  List *qpqual,
+				  int nworkers,
+				  Plan *subplan)
+{
+	GatherMerge	*node = makeNode(GatherMerge);
+	Plan		*plan = &node->plan;
+
+	/* cost should be inserted by caller */
+	plan->targetlist = qptlist;
+	plan->qual = qpqual;
+	plan->lefttree = subplan;
+	plan->righttree = NULL;
+	node->num_workers = nworkers;
+
+	return node;
+}
+
 /*
  * distinctList is a list of SortGroupClauses, identifying the targetlist
  * items that should be considered by the SetOp filter.  The input path must
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index d8c5dd3..9628479 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -3725,14 +3725,60 @@ create_grouping_paths(PlannerInfo *root,
 
 		/*
 		 * Now generate a complete GroupAgg Path atop of the cheapest partial
-		 * path. We need only bother with the cheapest path here, as the
-		 * output of Gather is never sorted.
+		 * path. We generate a Gather path based on the cheapest partial path,
+		 * and a GatherMerge path for each partial path that is properly sorted.
 		 */
 		if (grouped_rel->partial_pathlist)
 		{
 			Path	   *path = (Path *) linitial(grouped_rel->partial_pathlist);
 			double		total_groups = path->rows * path->parallel_workers;
 
+			/*
+			 * GatherMerge output is always sorted, so if there is a GROUP BY
+			 * clause, try to generate a GatherMerge path for each partial path.
+			 */
+			if (parse->groupClause)
+			{
+				foreach(lc, grouped_rel->partial_pathlist)
+				{
+					Path	   *gmpath = (Path *) lfirst(lc);
+
+					if (!pathkeys_contained_in(root->group_pathkeys, gmpath->pathkeys))
+						continue;
+
+					/* create gather merge path */
+					gmpath = (Path *) create_gather_merge_path(root,
+															   grouped_rel,
+															   gmpath,
+															   NULL,
+															   root->group_pathkeys,
+															   NULL,
+															   &total_groups);
+
+					if (parse->hasAggs)
+						add_path(grouped_rel, (Path *)
+								 create_agg_path(root,
+												 grouped_rel,
+												 gmpath,
+												 target,
+												 parse->groupClause ? AGG_SORTED : AGG_PLAIN,
+												 AGGSPLIT_FINAL_DESERIAL,
+												 parse->groupClause,
+												 (List *) parse->havingQual,
+												 &agg_final_costs,
+												 dNumGroups));
+					else
+						add_path(grouped_rel, (Path *)
+								create_group_path(root,
+												  grouped_rel,
+												  gmpath,
+												  target,
+												  parse->groupClause,
+												  (List *) parse->havingQual,
+												  dNumGroups));
+				}
+			}
+
 			path = (Path *) create_gather_path(root,
 											   grouped_rel,
 											   path,
@@ -3870,6 +3916,12 @@ create_grouping_paths(PlannerInfo *root,
 	/* Now choose the best path(s) */
 	set_cheapest(grouped_rel);
 
+	/*
+	 * The partial pathlist generated for the grouped relation is of no
+	 * further use, so just reset it to NIL.
+	 */
+	grouped_rel->partial_pathlist = NIL;
+
 	return grouped_rel;
 }
 
@@ -4166,6 +4218,38 @@ create_distinct_paths(PlannerInfo *root,
 			}
 		}
 
+		/*
+		 * Generate GatherMerge path for each partial path.
+		 */
+		foreach(lc, input_rel->partial_pathlist)
+		{
+			Path	   *path = (Path *) lfirst(lc);
+			double		total_groups = path->rows * path->parallel_workers;
+
+			if (!pathkeys_contained_in(needed_pathkeys, path->pathkeys))
+			{
+				path = (Path *) create_sort_path(root, distinct_rel,
+												 path,
+												 needed_pathkeys,
+												 -1.0);
+			}
+
+			/* create gather merge path */
+			path = (Path *) create_gather_merge_path(root,
+													 distinct_rel,
+													 path,
+													 NULL,
+													 needed_pathkeys,
+													 NULL,
+													 &total_groups);
+			add_path(distinct_rel, (Path *)
+					 create_upper_unique_path(root,
+											  distinct_rel,
+											  path,
+											  list_length(root->distinct_pathkeys),
+											  numDistinctRows));
+		}
+
 		/* For explicit-sort case, always use the more rigorous clause */
 		if (list_length(root->distinct_pathkeys) <
 			list_length(root->sort_pathkeys))
@@ -4310,6 +4394,41 @@ create_ordered_paths(PlannerInfo *root,
 	ordered_rel->useridiscurrent = input_rel->useridiscurrent;
 	ordered_rel->fdwroutine = input_rel->fdwroutine;
 
+	foreach(lc, input_rel->partial_pathlist)
+	{
+		Path	   *path = (Path *) lfirst(lc);
+		bool		is_sorted;
+		double		total_rows = path->rows * path->parallel_workers;
+
+		is_sorted = pathkeys_contained_in(root->sort_pathkeys,
+										  path->pathkeys);
+		if (!is_sorted)
+		{
+			/* An explicit sort here can take advantage of LIMIT */
+			path = (Path *) create_sort_path(root,
+											 ordered_rel,
+											 path,
+											 root->sort_pathkeys,
+											 limit_tuples);
+		}
+
+		/* create gather merge path */
+		path = (Path *) create_gather_merge_path(root,
+												 ordered_rel,
+												 path,
+												 target,
+												 root->sort_pathkeys,
+												 NULL,
+												 &total_rows);
+
+		/* Add projection step if needed */
+		if (path->pathtarget != target)
+			path = apply_projection_to_path(root, ordered_rel,
+											path, target);
+
+		add_path(ordered_rel, path);
+	}
+
 	foreach(lc, input_rel->pathlist)
 	{
 		Path	   *path = (Path *) lfirst(lc);
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index d91bc3b..e9d6279 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -605,6 +605,7 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 			break;
 
 		case T_Gather:
+		case T_GatherMerge:
 			set_upper_references(root, plan, rtoffset);
 			break;
 
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 263ba45..760f519 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2682,6 +2682,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		case T_Sort:
 		case T_Unique:
 		case T_Gather:
+		case T_GatherMerge:
 		case T_SetOp:
 		case T_Group:
 			break;
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index abb7507..07e1532 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1630,6 +1630,66 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
 }
 
 /*
+ * create_gather_merge_path
+ *
+ *	  Creates a path corresponding to a gather merge scan, returning
+ *	  the pathnode.
+ */
+GatherMergePath *
+create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
+						 PathTarget *target, List *pathkeys,
+						 Relids required_outer, double *rows)
+{
+	GatherMergePath *pathnode = makeNode(GatherMergePath);
+	Cost			 input_startup_cost = 0;
+	Cost			 input_total_cost = 0;
+
+	Assert(subpath->parallel_safe);
+	Assert(pathkeys);
+
+	pathnode->path.pathtype = T_GatherMerge;
+	pathnode->path.parent = rel;
+	pathnode->path.param_info = get_baserel_parampathinfo(root, rel,
+														  required_outer);
+	pathnode->path.parallel_aware = false;
+
+	pathnode->subpath = subpath;
+	pathnode->num_workers = subpath->parallel_workers;
+	pathnode->path.pathkeys = pathkeys;
+	pathnode->path.pathtarget = target ? target : rel->reltarget;
+	pathnode->path.rows += subpath->rows;
+
+	if (pathkeys_contained_in(pathkeys, subpath->pathkeys))
+	{
+		/* Subpath is adequately ordered, we won't need to sort it */
+		input_startup_cost += subpath->startup_cost;
+		input_total_cost += subpath->total_cost;
+	}
+	else
+	{
+		/* We'll need to insert a Sort node, so include cost for that */
+		Path		sort_path;		/* dummy for result of cost_sort */
+
+		cost_sort(&sort_path,
+				  root,
+				  pathkeys,
+				  subpath->total_cost,
+				  subpath->rows,
+				  subpath->pathtarget->width,
+				  0.0,
+				  work_mem,
+				  -1);
+		input_startup_cost += sort_path.startup_cost;
+		input_total_cost += sort_path.total_cost;
+	}
+
+	cost_gather_merge(pathnode, root, rel, pathnode->path.param_info,
+					  input_startup_cost, input_total_cost, rows);
+
+	return pathnode;
+}
+
+/*
  * translate_sub_tlist - get subquery column numbers represented by tlist
  *
  * The given targetlist usually contains only Vars referencing the given relid.
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 3c695c1..4e9390e 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -894,6 +894,15 @@ static struct config_bool ConfigureNamesBool[] =
 		true,
 		NULL, NULL, NULL
 	},
+	{
+		{"enable_gathermerge", PGC_USERSET, QUERY_TUNING_METHOD,
+			gettext_noop("Enables the planner's use of gather merge plans."),
+			NULL
+		},
+		&enable_gathermerge,
+		true,
+		NULL, NULL, NULL
+	},
 
 	{
 		{"geqo", PGC_USERSET, QUERY_TUNING_GEQO,
diff --git a/src/include/executor/nodeGatherMerge.h b/src/include/executor/nodeGatherMerge.h
new file mode 100644
index 0000000..bf992cd
--- /dev/null
+++ b/src/include/executor/nodeGatherMerge.h
@@ -0,0 +1,27 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.h
+ *		prototypes for nodeGatherMerge.c
+ *
+ *
+ * Portions Copyright (c) 1996-2016, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/executor/nodeGatherMerge.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef NODEGATHERMERGE_H
+#define NODEGATHERMERGE_H
+
+#include "nodes/execnodes.h"
+
+extern GatherMergeState *ExecInitGatherMerge(GatherMerge * node,
+					EState *estate,
+					int eflags);
+extern TupleTableSlot *ExecGatherMerge(GatherMergeState * node);
+extern void ExecEndGatherMerge(GatherMergeState * node);
+extern void ExecReScanGatherMerge(GatherMergeState * node);
+extern void ExecShutdownGatherMerge(GatherMergeState * node);
+
+#endif   /* NODEGATHERMERGE_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index f6f73f3..0c12e27 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -1969,6 +1969,33 @@ typedef struct GatherState
 } GatherState;
 
 /* ----------------
+ * GatherMergeState information
+ *
+ *		Gather merge nodes launch one or more parallel workers, run a
+ *		subplan in those workers, and merge the pre-sorted results.
+ * ----------------
+ */
+struct GMReaderTuple;
+
+typedef struct GatherMergeState
+{
+	PlanState	ps;				/* its first field is NodeTag */
+	bool		initialized;
+	struct ParallelExecutorInfo *pei;
+	int			nreaders;
+	int			nworkers_launched;
+	struct TupleQueueReader **reader;
+	TupleDesc	tupDesc;
+	TupleTableSlot **gm_slots;
+	struct binaryheap *gm_heap; /* binary heap of slot indices */
+	bool		gm_initialized; /* gather merge initialized? */
+	bool		need_to_scan_locally;
+	int			gm_nkeys;
+	SortSupport gm_sortkeys;	/* array of length gm_nkeys */
+	struct GMReaderTupleBuffer *gm_tuple_buffers;	/* tuple buffer per reader */
+} GatherMergeState;
+
+/* ----------------
  *	 HashState information
  * ----------------
  */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index cb9307c..7edb114 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -75,6 +75,7 @@ typedef enum NodeTag
 	T_WindowAgg,
 	T_Unique,
 	T_Gather,
+	T_GatherMerge,
 	T_Hash,
 	T_SetOp,
 	T_LockRows,
@@ -123,6 +124,7 @@ typedef enum NodeTag
 	T_WindowAggState,
 	T_UniqueState,
 	T_GatherState,
+	T_GatherMergeState,
 	T_HashState,
 	T_SetOpState,
 	T_LockRowsState,
@@ -244,6 +246,7 @@ typedef enum NodeTag
 	T_MaterialPath,
 	T_UniquePath,
 	T_GatherPath,
+	T_GatherMergePath,
 	T_ProjectionPath,
 	T_SortPath,
 	T_GroupPath,
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index e2fbc7d..ec319bf 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -765,6 +765,22 @@ typedef struct Gather
 	bool		invisible;		/* suppress EXPLAIN display (for testing)? */
 } Gather;
 
+/* ------------
+ *		gather merge node
+ * ------------
+ */
+typedef struct GatherMerge
+{
+	Plan		plan;
+	int			num_workers;
+	/* remaining fields are just like the sort-key info in struct Sort */
+	int			numCols;		/* number of sort-key columns */
+	AttrNumber *sortColIdx;		/* their indexes in the target list */
+	Oid		   *sortOperators;	/* OIDs of operators to sort them by */
+	Oid		   *collations;		/* OIDs of collations */
+	bool	   *nullsFirst;		/* NULLS FIRST/LAST directions */
+} GatherMerge;
+
 /* ----------------
  *		hash build node
  *
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index 3a1255a..e9795f9 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -1194,6 +1194,19 @@ typedef struct GatherPath
 } GatherPath;
 
 /*
+ * GatherMergePath runs several copies of a plan in parallel and
+ * collects the results.  For Gather Merge, the parallel leader also always
+ * executes the plan itself.
+ */
+typedef struct GatherMergePath
+{
+	Path		path;
+	Path	   *subpath;		/* path for each worker */
+	int			num_workers;	/* number of workers sought to help */
+} GatherMergePath;
+
+
+/*
  * All join-type paths share these fields.
  */
 
diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h
index 2a4df2f..e986896 100644
--- a/src/include/optimizer/cost.h
+++ b/src/include/optimizer/cost.h
@@ -66,6 +66,7 @@ extern bool enable_nestloop;
 extern bool enable_material;
 extern bool enable_mergejoin;
 extern bool enable_hashjoin;
+extern bool enable_gathermerge;
 extern int	constraint_exclusion;
 
 extern double clamp_row_est(double nrows);
@@ -198,5 +199,9 @@ extern Selectivity clause_selectivity(PlannerInfo *root,
 				   int varRelid,
 				   JoinType jointype,
 				   SpecialJoinInfo *sjinfo);
+extern void cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+							  RelOptInfo *rel, ParamPathInfo *param_info,
+							  Cost input_startup_cost, Cost input_total_cost,
+							  double *rows);
 
 #endif   /* COST_H */
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index 71d9154..1df5861 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -267,5 +267,11 @@ extern ParamPathInfo *get_joinrel_parampathinfo(PlannerInfo *root,
 						  List **restrict_clauses);
 extern ParamPathInfo *get_appendrel_parampathinfo(RelOptInfo *appendrel,
 							Relids required_outer);
+extern GatherMergePath *create_gather_merge_path(PlannerInfo *root,
+												 RelOptInfo *rel, Path *subpath,
+												 PathTarget *target,
+												 List *pathkeys,
+												 Relids required_outer,
+												 double *rows);
 
 #endif   /* PATHNODE_H */
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
index f06cfa4..5c547e2 100644
--- a/src/test/regress/expected/rangefuncs.out
+++ b/src/test/regress/expected/rangefuncs.out
@@ -2,6 +2,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
          name         | setting 
 ----------------------+---------
  enable_bitmapscan    | on
+ enable_gathermerge   | on
  enable_hashagg       | on
  enable_hashjoin      | on
  enable_indexonlyscan | on
@@ -12,7 +13,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
  enable_seqscan       | on
  enable_sort          | on
  enable_tidscan       | on
-(11 rows)
+(12 rows)
 
 CREATE TABLE foo2(fooid int, f2 int);
 INSERT INTO foo2 VALUES(1, 11);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 6c6d519..a6c4a5f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -770,6 +770,8 @@ GV
 Gather
 GatherPath
 GatherState
+GatherMerge
+GatherMergeState
 Gene
 GenericCosts
 GenericExprState
Attachment: gm_v4_plus_rows_estimate.patch (application/x-download)
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 9628479..93d9ed2 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -3752,8 +3752,7 @@ create_grouping_paths(PlannerInfo *root,
 															   gmpath,
 															   NULL,
 															   root->group_pathkeys,
-															   NULL,
-															   &total_groups);
+															   NULL);
 
 					if (parse->hasAggs)
 						add_path(grouped_rel, (Path *)
@@ -4224,7 +4223,6 @@ create_distinct_paths(PlannerInfo *root,
 		foreach(lc, input_rel->partial_pathlist)
 		{
 			Path	   *path = (Path *) lfirst(lc);
-			double		total_groups = path->rows * path->parallel_workers;
 
 			if (!pathkeys_contained_in(needed_pathkeys, path->pathkeys))
 			{
@@ -4240,8 +4238,7 @@ create_distinct_paths(PlannerInfo *root,
 													 path,
 													 NULL,
 													 needed_pathkeys,
-													 NULL,
-													 &total_groups);
+													 NULL);
 			add_path(distinct_rel, (Path *)
 					 create_upper_unique_path(root,
 											  distinct_rel,
@@ -4398,7 +4395,6 @@ create_ordered_paths(PlannerInfo *root,
 	{
 		Path	   *path = (Path *) lfirst(lc);
 		bool		is_sorted;
-		double		total_rows = path->rows * path->parallel_workers;
 
 		is_sorted = pathkeys_contained_in(root->sort_pathkeys,
 										  path->pathkeys);
@@ -4418,8 +4414,7 @@ create_ordered_paths(PlannerInfo *root,
 												 path,
 												 target,
 												 root->sort_pathkeys,
-												 NULL,
-												 &total_rows);
+												 NULL);
 
 		/* Add projection step if needed */
 		if (path->pathtarget != target)
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index 07e1532..eb40e24 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1638,11 +1638,14 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
 GatherMergePath *
 create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
 						 PathTarget *target, List *pathkeys,
-						 Relids required_outer, double *rows)
+						 Relids required_outer)
 {
 	GatherMergePath *pathnode = makeNode(GatherMergePath);
 	Cost			 input_startup_cost = 0;
 	Cost			 input_total_cost = 0;
+	double			 total_rows;
+	double			 parallel_divisor = subpath->parallel_workers;
+	double			 leader_contribution;
 
 	Assert(subpath->parallel_safe);
 	Assert(pathkeys);
@@ -1657,7 +1660,16 @@ create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
 	pathnode->num_workers = subpath->parallel_workers;
 	pathnode->path.pathkeys = pathkeys;
 	pathnode->path.pathtarget = target ? target : rel->reltarget;
-	pathnode->path.rows += subpath->rows;
+
+	/*
+	 * Calculate total_rows for gather merge by taking the leader's
+	 * contribution to the execution into account.  This is similar to how
+	 * cost_seqscan estimates the rows for a partial path.
+	 */
+	leader_contribution = 1.0 - (0.3 * subpath->parallel_workers);
+	if (leader_contribution > 0)
+		parallel_divisor += leader_contribution;
+	total_rows = clamp_row_est(subpath->rows * parallel_divisor);
 
 	if (pathkeys_contained_in(pathkeys, subpath->pathkeys))
 	{
@@ -1684,7 +1696,7 @@ create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
 	}
 
 	cost_gather_merge(pathnode, root, rel, pathnode->path.param_info,
-					  input_startup_cost, input_total_cost, rows);
+					  input_startup_cost, input_total_cost, &total_rows);
 
 	return pathnode;
 }
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index 1df5861..3dbe9fc 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -271,7 +271,6 @@ extern GatherMergePath *create_gather_merge_path(PlannerInfo *root,
 												 RelOptInfo *rel, Path *subpath,
 												 PathTarget *target,
 												 List *pathkeys,
-												 Relids required_outer,
-												 double *rows);
+												 Relids required_outer);
 
 #endif   /* PATHNODE_H */
#21Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Rushabh Lathia (#20)
1 attachment(s)
Re: Gather Merge

On Wed, Nov 16, 2016 at 3:10 PM, Rushabh Lathia <rushabh.lathia@gmail.com>
wrote:

On Mon, Nov 14, 2016 at 3:51 PM, Thomas Munro <
thomas.munro@enterprisedb.com> wrote:

On Sat, Nov 12, 2016 at 1:56 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

On Fri, Nov 4, 2016 at 8:30 AM, Thomas Munro <thomas.munro@enterprisedb.com> wrote:

+ * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California

Shouldn't this say just "(c) 2016, PostgreSQL Global Development
Group"?

Fixed.

The year also needs updating to 2016 in nodeGatherMerge.h.

Oops sorry, fixed now.

+ /* Per-tuple heap maintenance cost */
+ run_cost += path->path.rows * comparison_cost * 2.0 * logN;

Why multiply by two? The comment above this code says "about log2(N)
comparisons to delete the top heap entry and another log2(N)
comparisons to insert its successor". In fact gather_merge_getnext
calls binaryheap_replace_first, which replaces the top element without
any comparisons at all and then performs a sift-down in log2(N)
comparisons to find its new position. There is no per-tuple "delete"
involved. We "replace" the top element with the value it already had,
just to trigger the sift-down, because we know that our comparator
function might have a new opinion of the sort order of this element.
Very clever! The comment and the 2.0 factor in cost_gather_merge seem
to be wrong though -- or am I misreading the code?

See cost_merge_append.

That just got tweaked in commit 34ca0905.

Fixed.

Looking at the plan I realize that this is happening because of wrong costing
for Gather Merge. Here in the plan we can see that the row count estimated by
Gather Merge is wrong. This is because the earlier patch had GM using
rows = subpath->rows, which is not correct, as the subpath is a partial path.
So we need to multiply it by the number of workers. The attached patch also
fixes this issue. I also ran the TPC-H benchmark with the patch and the
results are the same as earlier.

In create_grouping_paths:
+ double total_groups = gmpath->rows *
gmpath->parallel_workers;

This hides a variable of the same name in the enclosing scope. Maybe
confusing?

In some other places like create_ordered_paths:
+ double total_groups = path->rows * path->parallel_workers;

Though it probably made sense to use this variable name in
create_grouping_paths, wouldn't total_rows be better here?

Initially I just copied it from the other places. I agree with you that in
create_ordered_paths, total_rows makes more sense.

It feels weird to be working back to a total row count estimate from
the partial one by simply multiplying by path->parallel_workers.
Gather Merge will underestimate the total rows when parallel_workers <
4, if using partial row estimates ultimately from cost_seqscan which
assume some leader contribution. I don't have a better idea though.
Reversing cost_seqscan's logic certainly doesn't seem right. I don't
know how to make them agree on the leader's contribution AND give
principled answers, since there seems to be some kind of cyclic
dependency in the costing logic (cost_seqscan really needs to be given
a leader contribution estimate from its superpath which knows whether
it will allow the leader to pull tuples greedily/fairly or not, but
that superpath hasn't been created yet; cost_gather_merge needs the
row count from its subpath). Or maybe I'm just confused.

Yes, I agree with you. But we can't really make changes to cost_seqscan.
Another option I can think of is to calculate the rows for gather merge by
reversing the formula used in cost_seqscan. That way we can completely remove
the rows argument from create_gather_merge_path(), and then inside
create_gather_merge_path() calculate total_rows using the same formula that
cost_seqscan uses. This is working fine, but I'm not quite sure about the
approach, so I attached that part of the changes as a separate patch. Any
suggestions?

After an offline discussion with Thomas, I realized that this won't work. It
works only if the subplan is a seqscan, so this logic is not enough to
estimate the rows in general. I guess, as Thomas said earlier, this is not a
problem with the GatherMerge implementation as such, so we will keep it
separate.
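
Just to illustrate the reverse formula from that separate patch (the numbers
here are made up, purely for illustration): with 2 planned workers and a
partial-path estimate of 100000 rows,

    leader_contribution = 1.0 - (0.3 * 2)             = 0.4
    parallel_divisor    = 2 + leader_contribution     = 2.4
    total_rows          = clamp_row_est(100000 * 2.4) = 240000

whereas simply multiplying the partial estimate by parallel_workers would
give 200000.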

Apart from this, my colleague Neha Sharma reported a server crash with the
patch. It was hitting the below Assert in create_gather_merge_path():

Assert(pathkeys);

Basically, when the query is something like "select * from foo where a = 1
order by a;", the query has a sort clause, but the planner won't generate any
sort pathkeys because the WHERE equality clause is on the same column. The
fix is to make sure pathkeys are present before calling
create_gather_merge_path().
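
To make the scenario concrete, a reproducer of roughly this shape (the table
and data here are hypothetical, just to exercise the case Neha reported):

    create table foo (a int, b int);
    insert into foo select i % 100, i from generate_series(1, 1000000) i;
    analyze foo;
    set max_parallel_workers_per_gather = 4;
    -- The ORDER BY column is pinned by the equality qual, so the planner
    -- generates no sort pathkeys for this query:
    explain select * from foo where a = 1 order by a;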

PFA the latest patch with the fix as well as a few cosmetic changes.

--
Rushabh Lathia
www.EnterpriseDB.com

--
Rushabh Lathia

Attachments:

gather_merge_v5.patch (binary/octet-stream)
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 0a669d9..73cfe28 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -881,6 +881,9 @@ ExplainNode(PlanState *planstate, List *ancestors,
 		case T_Gather:
 			pname = sname = "Gather";
 			break;
+		case T_GatherMerge:
+			pname = sname = "Gather Merge";
+			break;
 		case T_IndexScan:
 			pname = sname = "Index Scan";
 			break;
@@ -1370,6 +1373,26 @@ ExplainNode(PlanState *planstate, List *ancestors,
 					ExplainPropertyBool("Single Copy", gather->single_copy, es);
 			}
 			break;
+		case T_GatherMerge:
+			{
+				GatherMerge *gm = (GatherMerge *) plan;
+
+				show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+				if (plan->qual)
+					show_instrumentation_count("Rows Removed by Filter", 1,
+											   planstate, es);
+				ExplainPropertyInteger("Workers Planned",
+									   gm->num_workers, es);
+				if (es->analyze)
+				{
+					int			nworkers;
+
+					nworkers = ((GatherMergeState *) planstate)->nworkers_launched;
+					ExplainPropertyInteger("Workers Launched",
+										   nworkers, es);
+				}
+			}
+			break;
 		case T_FunctionScan:
 			if (es->verbose)
 			{
diff --git a/src/backend/executor/Makefile b/src/backend/executor/Makefile
index 51edd4c..7e2f4e2 100644
--- a/src/backend/executor/Makefile
+++ b/src/backend/executor/Makefile
@@ -19,7 +19,7 @@ OBJS = execAmi.o execCurrent.o execGrouping.o execIndexing.o execJunk.o \
        nodeBitmapAnd.o nodeBitmapOr.o \
        nodeBitmapHeapscan.o nodeBitmapIndexscan.o nodeCustom.o nodeGather.o \
        nodeHash.o nodeHashjoin.o nodeIndexscan.o nodeIndexonlyscan.o \
-       nodeLimit.o nodeLockRows.o \
+       nodeLimit.o nodeLockRows.o nodeGatherMerge.o \
        nodeMaterial.o nodeMergeAppend.o nodeMergejoin.o nodeModifyTable.o \
        nodeNestloop.o nodeFunctionscan.o nodeRecursiveunion.o nodeResult.o \
        nodeSamplescan.o nodeSeqscan.o nodeSetOp.o nodeSort.o nodeUnique.o \
diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index 554244f..45b36af 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -101,6 +101,7 @@
 #include "executor/nodeModifyTable.h"
 #include "executor/nodeNestloop.h"
 #include "executor/nodeGather.h"
+#include "executor/nodeGatherMerge.h"
 #include "executor/nodeRecursiveunion.h"
 #include "executor/nodeResult.h"
 #include "executor/nodeSamplescan.h"
@@ -314,6 +315,11 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
 												  estate, eflags);
 			break;
 
+		case T_GatherMerge:
+			result = (PlanState *) ExecInitGatherMerge((GatherMerge *) node,
+													   estate, eflags);
+			break;
+
 		case T_Hash:
 			result = (PlanState *) ExecInitHash((Hash *) node,
 												estate, eflags);
@@ -515,6 +521,10 @@ ExecProcNode(PlanState *node)
 			result = ExecGather((GatherState *) node);
 			break;
 
+		case T_GatherMergeState:
+			result = ExecGatherMerge((GatherMergeState *) node);
+			break;
+
 		case T_HashState:
 			result = ExecHash((HashState *) node);
 			break;
@@ -673,6 +683,10 @@ ExecEndNode(PlanState *node)
 			ExecEndGather((GatherState *) node);
 			break;
 
+		case T_GatherMergeState:
+			ExecEndGatherMerge((GatherMergeState *) node);
+			break;
+
 		case T_IndexScanState:
 			ExecEndIndexScan((IndexScanState *) node);
 			break;
@@ -806,6 +820,9 @@ ExecShutdownNode(PlanState *node)
 		case T_GatherState:
 			ExecShutdownGather((GatherState *) node);
 			break;
+		case T_GatherMergeState:
+			ExecShutdownGatherMerge((GatherMergeState *) node);
+			break;
 		default:
 			break;
 	}
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
new file mode 100644
index 0000000..7e77fc2
--- /dev/null
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -0,0 +1,723 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.c
+ *	  routines to handle GatherMerge nodes.
+ *
+ * Portions Copyright (c) 1996-2016, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/backend/executor/nodeGatherMerge.c
+ *
+ *-------------------------------------------------------------------------
+ */
+/* INTERFACE ROUTINES
+ *		ExecInitGatherMerge		- initialize the GatherMerge node
+ *		ExecGatherMerge			- retrieve the next tuple from the node
+ *		ExecEndGatherMerge		- shut down the GatherMerge node
+ *		ExecReScanGatherMerge	- rescan the GatherMerge node
+ */
+
+#include "postgres.h"
+
+#include "access/relscan.h"
+#include "access/xact.h"
+#include "executor/execdebug.h"
+#include "executor/execParallel.h"
+#include "executor/nodeGatherMerge.h"
+#include "executor/nodeSubplan.h"
+#include "executor/tqueue.h"
+#include "lib/binaryheap.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+#include "utils/rel.h"
+
+/*
+ * Tuple array for each worker
+ */
+typedef struct GMReaderTupleBuffer
+{
+	HeapTuple  *tuple;
+	int			readCounter;
+	int			nTuples;
+	bool		done;
+}	GMReaderTupleBuffer;
+
+/*
+ * Tuple array size.  Performance testing showed that the benefit of an
+ * array size larger than 10 is not worth the additional memory consumed
+ * by the tuple array.
+ */
+#define MAX_TUPLE_STORE 10
+
+static int32 heap_compare_slots(Datum a, Datum b, void *arg);
+static TupleTableSlot *gather_merge_getnext(GatherMergeState *gm_state);
+static HeapTuple gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait, bool *done);
+static void gather_merge_init(GatherMergeState *gm_state);
+static void ExecShutdownGatherMergeWorkers(GatherMergeState *node);
+static bool gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait);
+static void form_tuple_array(GatherMergeState *gm_state, int reader);
+
+/* ----------------------------------------------------------------
+ *		ExecInitGatherMerge
+ * ----------------------------------------------------------------
+ */
+GatherMergeState *
+ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags)
+{
+	GatherMergeState *gm_state;
+	Plan	   *outerNode;
+	bool		hasoid;
+	TupleDesc	tupDesc;
+
+	/* Gather merge node doesn't have innerPlan node. */
+	Assert(innerPlan(node) == NULL);
+
+	/*
+	 * create state structure
+	 */
+	gm_state = makeNode(GatherMergeState);
+	gm_state->ps.plan = (Plan *) node;
+	gm_state->ps.state = estate;
+
+	/*
+	 * Miscellaneous initialization
+	 *
+	 * create expression context for node
+	 */
+	ExecAssignExprContext(estate, &gm_state->ps);
+
+	/*
+	 * initialize child expressions
+	 */
+	gm_state->ps.targetlist = (List *)
+		ExecInitExpr((Expr *) node->plan.targetlist,
+					 (PlanState *) gm_state);
+	gm_state->ps.qual = (List *)
+		ExecInitExpr((Expr *) node->plan.qual,
+					 (PlanState *) gm_state);
+
+	/*
+	 * tuple table initialization
+	 */
+	ExecInitResultTupleSlot(estate, &gm_state->ps);
+
+	/*
+	 * now initialize outer plan
+	 */
+	outerNode = outerPlan(node);
+	outerPlanState(gm_state) = ExecInitNode(outerNode, estate, eflags);
+
+	gm_state->ps.ps_TupFromTlist = false;
+
+	/*
+	 * Initialize result tuple type and projection info.
+	 */
+	ExecAssignResultTypeFromTL(&gm_state->ps);
+	ExecAssignProjectionInfo(&gm_state->ps, NULL);
+
+	gm_state->gm_initialized = false;
+
+	/*
+	 * initialize sort-key information
+	 */
+	if (node->numCols)
+	{
+		int			i;
+
+		gm_state->gm_nkeys = node->numCols;
+		gm_state->gm_sortkeys = palloc0(sizeof(SortSupportData) * node->numCols);
+		for (i = 0; i < node->numCols; i++)
+		{
+			SortSupport sortKey = gm_state->gm_sortkeys + i;
+
+			sortKey->ssup_cxt = CurrentMemoryContext;
+			sortKey->ssup_collation = node->collations[i];
+			sortKey->ssup_nulls_first = node->nullsFirst[i];
+			sortKey->ssup_attno = node->sortColIdx[i];
+
+			/*
+			 * We don't perform abbreviated key conversion here, for the same
+			 * reasons that it isn't used in MergeAppend
+			 */
+			sortKey->abbreviate = false;
+
+			PrepareSortSupportFromOrderingOp(node->sortOperators[i], sortKey);
+		}
+	}
+
+	/*
+	 * store the tuple descriptor into gather merge state, so we can use it
+	 * later while initializing the gather merge slots.
+	 */
+	if (!ExecContextForcesOids(&gm_state->ps, &hasoid))
+		hasoid = false;
+	tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid);
+	gm_state->tupDesc = tupDesc;
+
+	return gm_state;
+}
+
+/* ----------------------------------------------------------------
+ *		ExecGatherMerge(node)
+ *
+ *		Scans the relation via multiple workers and returns
+ *		the next qualifying tuple.
+ * ----------------------------------------------------------------
+ */
+TupleTableSlot *
+ExecGatherMerge(GatherMergeState *node)
+{
+	TupleTableSlot *slot;
+	TupleTableSlot *resultSlot;
+	ExprDoneCond	isDone;
+	ExprContext	   *econtext;
+	int				i;
+
+	/*
+	 * Initialize the parallel context and workers on first execution. We do
+	 * this on first execution rather than during node initialization, as it
+	 * needs to allocate a large dynamic shared memory segment, so it is
+	 * better to do so only if it is really needed.
+	 */
+	if (!node->initialized)
+	{
+		EState	   *estate = node->ps.state;
+		GatherMerge *gm = (GatherMerge *) node->ps.plan;
+
+		/*
+		 * Sometimes we might have to run without parallelism; but if parallel
+		 * mode is active then we can try to fire up some workers.
+		 */
+		if (gm->num_workers > 0 && IsInParallelMode())
+		{
+			ParallelContext *pcxt;
+
+			/* Initialize the workers required to execute the Gather Merge node. */
+			if (!node->pei)
+				node->pei = ExecInitParallelPlan(node->ps.lefttree,
+												 estate,
+												 gm->num_workers);
+
+			/*
+			 * Register backend workers. We might not get as many as we
+			 * requested, or indeed any at all.
+			 */
+			pcxt = node->pei->pcxt;
+			LaunchParallelWorkers(pcxt);
+			node->nworkers_launched = pcxt->nworkers_launched;
+
+			/* Set up tuple queue readers to read the results. */
+			if (pcxt->nworkers_launched > 0)
+			{
+				node->nreaders = 0;
+				node->reader =
+					palloc(pcxt->nworkers_launched * sizeof(TupleQueueReader *));
+
+				Assert(gm->numCols);
+
+				for (i = 0; i < pcxt->nworkers_launched; ++i)
+				{
+					shm_mq_set_handle(node->pei->tqueue[i],
+									  pcxt->worker[i].bgwhandle);
+					node->reader[node->nreaders++] =
+						CreateTupleQueueReader(node->pei->tqueue[i],
+											   node->tupDesc);
+				}
+			}
+			else
+			{
+				/* No workers?	Then never mind. */
+				ExecShutdownGatherMergeWorkers(node);
+			}
+		}
+
+		/* always allow the leader to participate in gather merge */
+		node->need_to_scan_locally = true;
+		node->initialized = true;
+	}
+
+	/*
+	 * Check to see if we're still projecting out tuples from a previous scan
+	 * tuple (because there is a function-returning-set in the projection
+	 * expressions).  If so, try to project another one.
+	 */
+	if (node->ps.ps_TupFromTlist)
+	{
+		resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+		if (isDone == ExprMultipleResult)
+			return resultSlot;
+		/* Done with that source tuple... */
+		node->ps.ps_TupFromTlist = false;
+	}
+
+	/*
+	 * Reset per-tuple memory context to free any expression evaluation
+	 * storage allocated in the previous tuple cycle.  Note we can't do this
+	 * until we're done projecting.
+	 */
+	econtext = node->ps.ps_ExprContext;
+	ResetExprContext(econtext);
+
+	/* Get and return the next tuple, projecting if necessary. */
+	for (;;)
+	{
+		/*
+		 * Get next tuple, either from one of our workers, or by running the
+		 * plan ourselves.
+		 */
+		slot = gather_merge_getnext(node);
+		if (TupIsNull(slot))
+			return NULL;
+
+		/*
+		 * form the result tuple using ExecProject(), and return it --- unless
+		 * the projection produces an empty set, in which case we must loop
+		 * back around for another tuple
+		 */
+		econtext->ecxt_outertuple = slot;
+		resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+
+		if (isDone != ExprEndResult)
+		{
+			node->ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
+			return resultSlot;
+		}
+	}
+
+	return slot;
+}
+
+/* ----------------------------------------------------------------
+ *		ExecEndGatherMerge
+ *
+ *		frees any storage allocated through C routines.
+ * ----------------------------------------------------------------
+ */
+void
+ExecEndGatherMerge(GatherMergeState *node)
+{
+	ExecShutdownGatherMerge(node);
+	ExecFreeExprContext(&node->ps);
+	ExecClearTuple(node->ps.ps_ResultTupleSlot);
+	ExecEndNode(outerPlanState(node));
+}
+
+/* ----------------------------------------------------------------
+ *		ExecShutdownGatherMerge
+ *
+ *		Destroy the setup for parallel workers including parallel context.
+ *		Collect all the stats after workers are stopped, else some work
+ *		done by workers won't be accounted.
+ * ----------------------------------------------------------------
+ */
+void
+ExecShutdownGatherMerge(GatherMergeState *node)
+{
+	ExecShutdownGatherMergeWorkers(node);
+
+	/* Now destroy the parallel context. */
+	if (node->pei != NULL)
+	{
+		ExecParallelCleanup(node->pei);
+		node->pei = NULL;
+	}
+}
+
+/* ----------------------------------------------------------------
+ *		ExecReScanGatherMerge
+ *
+ *		Re-initialize the workers and rescan the relation via them.
+ * ----------------------------------------------------------------
+ */
+void
+ExecReScanGatherMerge(GatherMergeState *node)
+{
+	/*
+	 * Re-initialize the parallel workers to perform a rescan of the relation.
+	 * We want to gracefully shut down all the workers so that they can
+	 * propagate any error or other information to the master backend before
+	 * dying.  The parallel context will be reused for the rescan.
+	 */
+	ExecShutdownGatherMergeWorkers(node);
+
+	node->initialized = false;
+
+	if (node->pei)
+		ExecParallelReinitialize(node->pei);
+
+	ExecReScan(node->ps.lefttree);
+}
+
+/* ----------------------------------------------------------------
+ *		ExecShutdownGatherMergeWorkers
+ *
+ *		Destroy the parallel workers.  Collect all the stats after
+ *		workers are stopped, else some work done by workers won't be
+ *		accounted.
+ * ----------------------------------------------------------------
+ */
+static void
+ExecShutdownGatherMergeWorkers(GatherMergeState *node)
+{
+	/* Shut down tuple queue readers before shutting down workers. */
+	if (node->reader != NULL)
+	{
+		int			i;
+
+		for (i = 0; i < node->nreaders; ++i)
+			if (node->reader[i])
+				DestroyTupleQueueReader(node->reader[i]);
+
+		pfree(node->reader);
+		node->reader = NULL;
+	}
+
+	/* Now shut down the workers. */
+	if (node->pei != NULL)
+		ExecParallelFinish(node->pei);
+}
+
+/*
+ * Initialize the Gather merge tuple read.
+ *
+ * Pull at least a single tuple from each worker + leader and set up the heap.
+ */
+static void
+gather_merge_init(GatherMergeState *gm_state)
+{
+	int		nreaders = gm_state->nreaders;
+	bool	initialize = true;
+	int		i;
+
+	/*
+	 * Allocate gm_slots: one slot per worker, plus one more slot for the
+	 * leader.  The last slot is always the leader's.  The leader reads
+	 * tuples by calling ExecProcNode(), which returns a TupleTableSlot
+	 * that later gets assigned directly to its gm_slot, so just
+	 * initialize the leader's gm_slot to NULL.  For the other slots, the
+	 * code below calls ExecInitExtraTupleSlot(), which initializes the
+	 * per-worker slots.
+	 */
+	gm_state->gm_slots =
+		palloc((gm_state->nreaders + 1) * sizeof(TupleTableSlot *));
+	gm_state->gm_slots[gm_state->nreaders] = NULL;
+
+	/* Initialize the tuple slot and tuple array for each worker */
+	gm_state->gm_tuple_buffers =
+		(GMReaderTupleBuffer *) palloc0(sizeof(GMReaderTupleBuffer) * (gm_state->nreaders + 1));
+	for (i = 0; i < gm_state->nreaders; i++)
+	{
+		/* Allocate the tuple array with MAX_TUPLE_STORE size */
+		gm_state->gm_tuple_buffers[i].tuple =
+			(HeapTuple *) palloc0(sizeof(HeapTuple) * MAX_TUPLE_STORE);
+
+		/* Initialize slot for worker */
+		gm_state->gm_slots[i] = ExecInitExtraTupleSlot(gm_state->ps.state);
+		ExecSetSlotDescriptor(gm_state->gm_slots[i],
+							  gm_state->tupDesc);
+	}
+
+	/* Allocate the resources for the sort */
+	gm_state->gm_heap = binaryheap_allocate(gm_state->nreaders + 1, heap_compare_slots, gm_state);
+
+	/*
+	 * First try to read a tuple from each worker (including the leader) in
+	 * nowait mode, so that we initialize the read from each worker as well
+	 * as the leader.  After this, if any active workers were unable to
+	 * produce a tuple, re-read from them, this time in wait mode.  For
+	 * workers that produced a tuple in the earlier loop and are still
+	 * active, just try to fill the tuple array if more tuples are available.
+	 */
+reread:
+	for (i = 0; i < nreaders + 1; i++)
+	{
+		if (!gm_state->gm_tuple_buffers[i].done &&
+			(TupIsNull(gm_state->gm_slots[i]) ||
+			 gm_state->gm_slots[i]->tts_isempty))
+		{
+			if (gather_merge_readnext(gm_state, i, initialize))
+			{
+				binaryheap_add_unordered(gm_state->gm_heap,
+										 Int32GetDatum(i));
+			}
+		}
+		else
+			form_tuple_array(gm_state, i);
+	}
+	initialize = false;
+
+	for (i = 0; i < nreaders; i++)
+		if (!gm_state->gm_tuple_buffers[i].done &&
+			(TupIsNull(gm_state->gm_slots[i]) ||
+			 gm_state->gm_slots[i]->tts_isempty))
+			goto reread;
+
+	binaryheap_build(gm_state->gm_heap);
+	gm_state->gm_initialized = true;
+}
+
+/*
+ * Clear out the tuple table slot for each reader and
+ * return one of the cleared slots.
+ */
+static TupleTableSlot *
+gather_merge_clear_slots(GatherMergeState *gm_state)
+{
+	int		i;
+
+	for (i = 0; i < gm_state->nreaders; i++)
+	{
+		pfree(gm_state->gm_tuple_buffers[i].tuple);
+		gm_state->gm_slots[i] = ExecClearTuple(gm_state->gm_slots[i]);
+	}
+
+	/* Free tuple array as we don't need it any more */
+	pfree(gm_state->gm_tuple_buffers);
+	/* Free the binaryheap, which was created for sort */
+	binaryheap_free(gm_state->gm_heap);
+
+	/* return any clear slot */
+	return gm_state->gm_slots[0];
+}
+
+/*
+ * Read the next tuple for gather merge.
+ *
+ * Fetch the sorted tuple out of the heap.
+ */
+static TupleTableSlot *
+gather_merge_getnext(GatherMergeState *gm_state)
+{
+	int		i;
+
+	/*
+	 * First time through: pull the first tuple from each participant, and set
+	 * up the heap.
+	 */
+	if (gm_state->gm_initialized == false)
+		gather_merge_init(gm_state);
+	else
+	{
+		/*
+		 * Otherwise, pull the next tuple from whichever participant we
+		 * returned from last time, and reinsert the index into the heap,
+		 * because it might now compare differently against the existing
+		 * elements of the heap.
+		 */
+		i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+
+		if (gather_merge_readnext(gm_state, i, false))
+			binaryheap_replace_first(gm_state->gm_heap, Int32GetDatum(i));
+		else
+			(void) binaryheap_remove_first(gm_state->gm_heap);
+	}
+
+	if (binaryheap_empty(gm_state->gm_heap))
+	{
+		/* All the queues are exhausted, and so is the heap */
+		return gather_merge_clear_slots(gm_state);
+	}
+	else
+	{
+		i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+		return gm_state->gm_slots[i];
+	}
+
+	return gather_merge_clear_slots(gm_state);
+}
+
+/*
+ * Read tuples for the given reader in nowait mode, and fill the tuple array.
+ */
+static void
+form_tuple_array(GatherMergeState *gm_state, int reader)
+{
+	GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+	int			i;
+
+	/* Last slot is for leader and we don't build tuple array for leader */
+	if (reader == gm_state->nreaders)
+		return;
+
+	/*
+	 * If we have already read all the tuples from the tuple array, reset
+	 * the counters to zero.
+	 */
+	if (tuple_buffer->nTuples == tuple_buffer->readCounter)
+		tuple_buffer->nTuples = tuple_buffer->readCounter = 0;
+
+	/* Tuple array is already full? */
+	if (tuple_buffer->nTuples == MAX_TUPLE_STORE)
+		return;
+
+	for (i = tuple_buffer->nTuples; i < MAX_TUPLE_STORE; i++)
+	{
+		tuple_buffer->tuple[i] = heap_copytuple(gm_readnext_tuple(gm_state,
+																  reader,
+																  false,
+													   &tuple_buffer->done));
+		if (!HeapTupleIsValid(tuple_buffer->tuple[i]))
+			break;
+		tuple_buffer->nTuples++;
+	}
+}
+
+/*
+ * Attempt to read a tuple for the given reader and store it into the
+ * reader's tuple slot.
+ *
+ * If the worker's tuple array contains any tuples, just take the next tuple
+ * from the array.  Otherwise read a tuple from the queue and also attempt
+ * to refill the tuple array.
+ *
+ * For gather merge we need to refill the slot from which we returned the
+ * previous tuple, so that tuple must be read in wait mode.  During the
+ * initialization phase we instead read tuples in no-wait mode, since we
+ * want to initialize all the readers.  See gather_merge_init() for details.
+ *
+ * Returns true if a tuple was found for the reader, otherwise false.
+ */
+static bool
+gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait)
+{
+	HeapTuple tup = NULL;
+
+	/* Is this the leader's slot? */
+	if (gm_state->nreaders == reader)
+	{
+		if (gm_state->need_to_scan_locally)
+		{
+			PlanState  *outerPlan = outerPlanState(gm_state);
+			TupleTableSlot *outerTupleSlot;
+
+			outerTupleSlot = ExecProcNode(outerPlan);
+
+			if (!TupIsNull(outerTupleSlot))
+			{
+				gm_state->gm_slots[reader] = outerTupleSlot;
+				return true;
+			}
+			gm_state->gm_tuple_buffers[reader].done = true;
+			gm_state->need_to_scan_locally = false;
+		}
+		return false;
+	}
+	/* Does tuple array have any available tuples? */
+	else if (gm_state->gm_tuple_buffers[reader].nTuples >
+			 gm_state->gm_tuple_buffers[reader].readCounter)
+	{
+		GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+
+		tup = tuple_buffer->tuple[tuple_buffer->readCounter++];
+	}
+	/* reader exhausted? */
+	else if (gm_state->gm_tuple_buffers[reader].done)
+	{
+		DestroyTupleQueueReader(gm_state->reader[reader]);
+		gm_state->reader[reader] = NULL;
+		return false;
+	}
+	else
+	{
+		GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+
+		tup = heap_copytuple(gm_readnext_tuple(gm_state,
+											   reader,
+											   nowait,
+											   &tuple_buffer->done));
+
+		/*
+		 * Try to read more tuples in nowait mode and store them into the
+		 * tuple array.
+		 */
+		if (HeapTupleIsValid(tup))
+			form_tuple_array(gm_state, reader);
+		else
+			return false;
+	}
+
+	Assert(HeapTupleIsValid(tup));
+
+	/* Build the TupleTableSlot for the given tuple */
+	ExecStoreTuple(tup,			/* tuple to store */
+				   gm_state->gm_slots[reader],	/* slot in which to store the
+												 * tuple */
+				   InvalidBuffer,		/* buffer associated with this tuple */
+				   true);		/* pfree this pointer if not from heap */
+
+	return true;
+}
+
+/*
+ * Attempt to read a tuple from given reader.
+ */
+static HeapTuple
+gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait, bool *done)
+{
+	TupleQueueReader   *reader;
+	HeapTuple			tup = NULL;
+	MemoryContext		oldContext;
+	MemoryContext		tupleContext = gm_state->ps.ps_ExprContext->ecxt_per_tuple_memory;
+
+	if (done != NULL)
+		*done = false;
+
+	/* Check for async events, particularly messages from workers. */
+	CHECK_FOR_INTERRUPTS();
+
+	/* Attempt to read a tuple. */
+	reader = gm_state->reader[nreader];
+	/* Run TupleQueueReaders in per-tuple context */
+	oldContext = MemoryContextSwitchTo(tupleContext);
+	tup = TupleQueueReaderNext(reader, nowait, done);
+	MemoryContextSwitchTo(oldContext);
+
+	return tup;
+}
+
+/*
+ * We have one slot for each item in the heap array.  We use SlotNumber
+ * to store slot indexes.  This doesn't actually provide any formal
+ * type-safety, but it makes the code more self-documenting.
+ */
+typedef int32 SlotNumber;
+
+/*
+ * Compare the tuples in the two given slots.
+ */
+static int32
+heap_compare_slots(Datum a, Datum b, void *arg)
+{
+	GatherMergeState *node = (GatherMergeState *) arg;
+	SlotNumber	slot1 = DatumGetInt32(a);
+	SlotNumber	slot2 = DatumGetInt32(b);
+
+	TupleTableSlot *s1 = node->gm_slots[slot1];
+	TupleTableSlot *s2 = node->gm_slots[slot2];
+	int			nkey;
+
+	Assert(!TupIsNull(s1));
+	Assert(!TupIsNull(s2));
+
+	for (nkey = 0; nkey < node->gm_nkeys; nkey++)
+	{
+		SortSupport sortKey = node->gm_sortkeys + nkey;
+		AttrNumber	attno = sortKey->ssup_attno;
+		Datum		datum1,
+					datum2;
+		bool		isNull1,
+					isNull2;
+		int			compare;
+
+		datum1 = slot_getattr(s1, attno, &isNull1);
+		datum2 = slot_getattr(s2, attno, &isNull2);
+
+		compare = ApplySortComparator(datum1, isNull1,
+									  datum2, isNull2,
+									  sortKey);
+		if (compare != 0)
+			return -compare;
+	}
+	return 0;
+}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 04e49b7..2f52833 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -341,6 +341,31 @@ _copyGather(const Gather *from)
 	return newnode;
 }
 
+/*
+ * _copyGatherMerge
+ */
+static GatherMerge *
+_copyGatherMerge(const GatherMerge *from)
+{
+	GatherMerge	   *newnode = makeNode(GatherMerge);
+
+	/*
+	 * copy node superclass fields
+	 */
+	CopyPlanFields((const Plan *) from, (Plan *) newnode);
+
+	/*
+	 * copy remainder of node
+	 */
+	COPY_SCALAR_FIELD(num_workers);
+	COPY_SCALAR_FIELD(numCols);
+	COPY_POINTER_FIELD(sortColIdx, from->numCols * sizeof(AttrNumber));
+	COPY_POINTER_FIELD(sortOperators, from->numCols * sizeof(Oid));
+	COPY_POINTER_FIELD(collations, from->numCols * sizeof(Oid));
+	COPY_POINTER_FIELD(nullsFirst, from->numCols * sizeof(bool));
+
+	return newnode;
+}
 
 /*
  * CopyScanFields
@@ -4356,6 +4381,9 @@ copyObject(const void *from)
 		case T_Gather:
 			retval = _copyGather(from);
 			break;
+		case T_GatherMerge:
+			retval = _copyGatherMerge(from);
+			break;
 		case T_SeqScan:
 			retval = _copySeqScan(from);
 			break;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 748b687..ac36e48 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -447,6 +447,35 @@ _outGather(StringInfo str, const Gather *node)
 }
 
 static void
+_outGatherMerge(StringInfo str, const GatherMerge *node)
+{
+	int		i;
+
+	WRITE_NODE_TYPE("GATHERMERGE");
+
+	_outPlanInfo(str, (const Plan *) node);
+
+	WRITE_INT_FIELD(num_workers);
+	WRITE_INT_FIELD(numCols);
+
+	appendStringInfoString(str, " :sortColIdx");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %d", node->sortColIdx[i]);
+
+	appendStringInfoString(str, " :sortOperators");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %u", node->sortOperators[i]);
+
+	appendStringInfoString(str, " :collations");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %u", node->collations[i]);
+
+	appendStringInfoString(str, " :nullsFirst");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %s", booltostr(node->nullsFirst[i]));
+}
+
+static void
 _outScan(StringInfo str, const Scan *node)
 {
 	WRITE_NODE_TYPE("SCAN");
@@ -1964,6 +1993,17 @@ _outLimitPath(StringInfo str, const LimitPath *node)
 }
 
 static void
+_outGatherMergePath(StringInfo str, const GatherMergePath *node)
+{
+	WRITE_NODE_TYPE("GATHERMERGEPATH");
+
+	_outPathInfo(str, (const Path *) node);
+
+	WRITE_NODE_FIELD(subpath);
+	WRITE_INT_FIELD(num_workers);
+}
+
+static void
 _outNestPath(StringInfo str, const NestPath *node)
 {
 	WRITE_NODE_TYPE("NESTPATH");
@@ -3332,6 +3372,9 @@ outNode(StringInfo str, const void *obj)
 			case T_Gather:
 				_outGather(str, obj);
 				break;
+			case T_GatherMerge:
+				_outGatherMerge(str, obj);
+				break;
 			case T_Scan:
 				_outScan(str, obj);
 				break;
@@ -3659,6 +3702,9 @@ outNode(StringInfo str, const void *obj)
 			case T_LimitPath:
 				_outLimitPath(str, obj);
 				break;
+			case T_GatherMergePath:
+				_outGatherMergePath(str, obj);
+				break;
 			case T_NestPath:
 				_outNestPath(str, obj);
 				break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 917e6c8..77a452e 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -2075,6 +2075,26 @@ _readGather(void)
 }
 
 /*
+ * _readGatherMerge
+ */
+static GatherMerge *
+_readGatherMerge(void)
+{
+	READ_LOCALS(GatherMerge);
+
+	ReadCommonPlan(&local_node->plan);
+
+	READ_INT_FIELD(num_workers);
+	READ_INT_FIELD(numCols);
+	READ_ATTRNUMBER_ARRAY(sortColIdx, local_node->numCols);
+	READ_OID_ARRAY(sortOperators, local_node->numCols);
+	READ_OID_ARRAY(collations, local_node->numCols);
+	READ_BOOL_ARRAY(nullsFirst, local_node->numCols);
+
+	READ_DONE();
+}
+
+/*
  * _readHash
  */
 static Hash *
@@ -2477,6 +2497,8 @@ parseNodeString(void)
 		return_value = _readUnique();
 	else if (MATCH("GATHER", 6))
 		return_value = _readGather();
+	else if (MATCH("GATHERMERGE", 11))
+		return_value = _readGatherMerge();
 	else if (MATCH("HASH", 4))
 		return_value = _readHash();
 	else if (MATCH("SETOP", 5))
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index e42895d..9c1e578 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -126,6 +126,7 @@ bool		enable_nestloop = true;
 bool		enable_material = true;
 bool		enable_mergejoin = true;
 bool		enable_hashjoin = true;
+bool		enable_gathermerge = true;
 
 typedef struct
 {
@@ -391,6 +392,74 @@ cost_gather(GatherPath *path, PlannerInfo *root,
 }
 
 /*
+ * cost_gather_merge
+ *	  Determines and returns the cost of gather merge path.
+ *
+ * GatherMerge merges several pre-sorted input streams, using a heap that at
+ * any given instant holds the next tuple from each stream. If there are N
+ * streams, we need about N*log2(N) tuple comparisons to construct the heap at
+ * startup, and then for each output tuple, about log2(N) comparisons to delete
+ * the top heap entry and another log2(N) comparisons to insert its successor
+ * from the same stream.
+ *
+ * The heap is never spilled to disk, since we assume N is not very large. So
+ * this is much simpler than cost_sort.
+ */
+void
+cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+				  RelOptInfo *rel, ParamPathInfo *param_info,
+				  Cost input_startup_cost, Cost input_total_cost,
+				  double *rows)
+{
+	Cost	startup_cost = 0;
+	Cost	run_cost = 0;
+	Cost	comparison_cost;
+	double	N;
+	double	logN;
+
+	/* Mark the path with the correct row estimate */
+	if (rows)
+		path->path.rows = *rows;
+	else if (param_info)
+		path->path.rows = param_info->ppi_rows;
+	else
+		path->path.rows = rel->rows;
+
+	if (!enable_gathermerge)
+		startup_cost += disable_cost;
+
+	/*
+	 * Count the leader as well, since it always participates in the gather
+	 * merge scan.  Also avoid log(0)...
+	 */
+	N = (path->num_workers < 2) ? 2.0 : (double) path->num_workers + 1;
+	logN = LOG2(N);
+
+	/* Assumed cost per tuple comparison */
+	comparison_cost = 2.0 * cpu_operator_cost;
+
+	/* Heap creation cost */
+	startup_cost += comparison_cost * N * logN;
+
+	/* Per-tuple heap maintenance cost */
+	run_cost += path->path.rows * comparison_cost * 2.0 * logN;
+
+	/* small cost for heap management, like cost_merge_append */
+	run_cost += cpu_operator_cost * path->path.rows;
+
+	/*
+	 * Parallel setup and communication cost.  Gather Merge has to read
+	 * tuples from each worker in wait mode, so charge some extra cost
+	 * for that.
+	 */
+	startup_cost += parallel_setup_cost;
+	run_cost += parallel_tuple_cost * path->path.rows;
+
+	path->path.startup_cost = startup_cost + input_startup_cost;
+	path->path.total_cost = (startup_cost + run_cost + input_total_cost);
+}
+
+/*
  * cost_index
  *	  Determines and returns the cost of scanning a relation using an index.
  *
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index ad49674..5fdc1bd 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -270,6 +270,10 @@ static ModifyTable *make_modifytable(PlannerInfo *root,
 				 List *resultRelations, List *subplans,
 				 List *withCheckOptionLists, List *returningLists,
 				 List *rowMarks, OnConflictExpr *onconflict, int epqParam);
+static GatherMerge *create_gather_merge_plan(PlannerInfo *root,
+											 GatherMergePath *best_path);
+static GatherMerge *make_gather_merge(List *qptlist, List *qpqual,
+									  int nworkers, Plan *subplan);
 
 
 /*
@@ -463,6 +467,10 @@ create_plan_recurse(PlannerInfo *root, Path *best_path, int flags)
 											  (LimitPath *) best_path,
 											  flags);
 			break;
+		case T_GatherMerge:
+			plan = (Plan *) create_gather_merge_plan(root,
+												(GatherMergePath *) best_path);
+			break;
 		default:
 			elog(ERROR, "unrecognized node type: %d",
 				 (int) best_path->pathtype);
@@ -2246,6 +2254,89 @@ create_limit_plan(PlannerInfo *root, LimitPath *best_path, int flags)
 	return plan;
 }
 
+/*
+ * create_gather_merge_plan
+ *
+ *	  Create a Gather merge plan for 'best_path' and (recursively)
+ *	  plans for its subpaths.
+ */
+static GatherMerge *
+create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
+{
+	GatherMerge *gm_plan;
+	Plan	   *subplan;
+	List	   *pathkeys = best_path->path.pathkeys;
+	int			numsortkeys;
+	AttrNumber *sortColIdx;
+	Oid		   *sortOperators;
+	Oid		   *collations;
+	bool	   *nullsFirst;
+
+	subplan = create_plan_recurse(root, best_path->subpath, CP_EXACT_TLIST);
+
+	gm_plan = make_gather_merge(subplan->targetlist,
+								NIL,
+								best_path->num_workers,
+								subplan);
+
+	copy_generic_path_info(&gm_plan->plan, &best_path->path);
+
+	if (pathkeys)
+	{
+		/* Compute sort column info, and adjust GatherMerge tlist as needed */
+		(void) prepare_sort_from_pathkeys(&gm_plan->plan, pathkeys,
+										  best_path->path.parent->relids,
+										  NULL,
+										  true,
+										  &gm_plan->numCols,
+										  &gm_plan->sortColIdx,
+										  &gm_plan->sortOperators,
+										  &gm_plan->collations,
+										  &gm_plan->nullsFirst);
+
+
+		/* Compute sort column info, and adjust subplan's tlist as needed */
+		subplan = prepare_sort_from_pathkeys(subplan, pathkeys,
+											 best_path->subpath->parent->relids,
+											 gm_plan->sortColIdx,
+											 false,
+											 &numsortkeys,
+											 &sortColIdx,
+											 &sortOperators,
+											 &collations,
+											 &nullsFirst);
+
+		/*
+		 * Check that we got the same sort key information.  We just Assert
+		 * that the sortops match, since those depend only on the pathkeys;
+		 * but it seems like a good idea to check the sort column numbers
+		 * explicitly, to ensure the tlists really do match up.
+		 */
+		Assert(numsortkeys == gm_plan->numCols);
+		if (memcmp(sortColIdx, gm_plan->sortColIdx,
+				   numsortkeys * sizeof(AttrNumber)) != 0)
+			elog(ERROR, "GatherMerge child's targetlist doesn't match GatherMerge");
+		Assert(memcmp(sortOperators, gm_plan->sortOperators,
+					  numsortkeys * sizeof(Oid)) == 0);
+		Assert(memcmp(collations, gm_plan->collations,
+					  numsortkeys * sizeof(Oid)) == 0);
+		Assert(memcmp(nullsFirst, gm_plan->nullsFirst,
+					  numsortkeys * sizeof(bool)) == 0);
+
+		/* Now, insert a Sort node if subplan isn't sufficiently ordered */
+		if (!pathkeys_contained_in(pathkeys, best_path->subpath->pathkeys))
+			subplan = (Plan *) make_sort(subplan, numsortkeys,
+										 sortColIdx, sortOperators,
+										 collations, nullsFirst);
+
+		gm_plan->plan.lefttree = subplan;
+	}
+
+	/* use parallel mode for parallel plans. */
+	root->glob->parallelModeNeeded = true;
+
+	return gm_plan;
+}
 
 /*****************************************************************************
  *
@@ -5909,6 +6000,25 @@ make_gather(List *qptlist,
 	return node;
 }
 
+static GatherMerge *
+make_gather_merge(List *qptlist,
+				  List *qpqual,
+				  int nworkers,
+				  Plan *subplan)
+{
+	GatherMerge	*node = makeNode(GatherMerge);
+	Plan		*plan = &node->plan;
+
+	/* cost should be inserted by caller */
+	plan->targetlist = qptlist;
+	plan->qual = qpqual;
+	plan->lefttree = subplan;
+	plan->righttree = NULL;
+	node->num_workers = nworkers;
+
+	return node;
+}
+
 /*
  * distinctList is a list of SortGroupClauses, identifying the targetlist
  * items that should be considered by the SetOp filter.  The input path must
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index a8847de..3c5ca3b 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -3725,14 +3725,61 @@ create_grouping_paths(PlannerInfo *root,
 
 		/*
 		 * Now generate a complete GroupAgg Path atop of the cheapest partial
-		 * path. We need only bother with the cheapest path here, as the
-		 * output of Gather is never sorted.
+		 * path. We generate a Gather path based on the cheapest partial path,
+		 * and a GatherMerge path for each partial path that is properly sorted.
 		 */
 		if (grouped_rel->partial_pathlist)
 		{
 			Path	   *path = (Path *) linitial(grouped_rel->partial_pathlist);
 			double		total_groups = path->rows * path->parallel_workers;
 
+			/*
+			 * GatherMerge output is always sorted, so if there is a GROUP BY
+			 * clause, try to generate a GatherMerge path for each partial path.
+			 */
+			if (parse->groupClause && root->group_pathkeys)
+			{
+				foreach(lc, grouped_rel->partial_pathlist)
+				{
+					Path	   *gmpath = (Path *) lfirst(lc);
+					double		total_groups = gmpath->rows * gmpath->parallel_workers;
+
+					if (!pathkeys_contained_in(root->group_pathkeys, gmpath->pathkeys))
+						continue;
+
+					/* create gather merge path */
+					gmpath = (Path *) create_gather_merge_path(root,
+															   grouped_rel,
+															   gmpath,
+															   NULL,
+															   root->group_pathkeys,
+															   NULL,
+															   &total_groups);
+
+					if (parse->hasAggs)
+						add_path(grouped_rel, (Path *)
+								 create_agg_path(root,
+												 grouped_rel,
+												 gmpath,
+												 target,
+												 parse->groupClause ? AGG_SORTED : AGG_PLAIN,
+												 AGGSPLIT_FINAL_DESERIAL,
+												 parse->groupClause,
+												 (List *) parse->havingQual,
+												 &agg_final_costs,
+												 dNumGroups));
+					else
+						add_path(grouped_rel, (Path *)
+								create_group_path(root,
+												  grouped_rel,
+												  gmpath,
+												  target,
+												  parse->groupClause,
+												  (List *) parse->havingQual,
+												  dNumGroups));
+				}
+			}
+
 			path = (Path *) create_gather_path(root,
 											   grouped_rel,
 											   path,
@@ -3870,6 +3917,12 @@ create_grouping_paths(PlannerInfo *root,
 	/* Now choose the best path(s) */
 	set_cheapest(grouped_rel);
 
+	/*
+	 * The partial pathlist generated for the grouped relation is of no
+	 * further use, so just reset it to NIL.
+	 */
+	grouped_rel->partial_pathlist = NIL;
+
 	return grouped_rel;
 }
 
@@ -4166,6 +4219,42 @@ create_distinct_paths(PlannerInfo *root,
 			}
 		}
 
+		/*
+		 * Generate GatherMerge path for each partial path.
+		 */
+		if (needed_pathkeys)
+		{
+			foreach(lc, input_rel->partial_pathlist)
+			{
+				Path	   *path = (Path *) lfirst(lc);
+				double		total_groups = path->rows * path->parallel_workers;
+
+				if (!pathkeys_contained_in(needed_pathkeys, path->pathkeys))
+				{
+					path = (Path *) create_sort_path(root,
+													 distinct_rel,
+													 path,
+													 needed_pathkeys,
+													 -1.0);
+				}
+
+				/* create gather merge path */
+				path = (Path *) create_gather_merge_path(root,
+														 distinct_rel,
+														 path,
+														 NULL,
+														 needed_pathkeys,
+														 NULL,
+														 &total_groups);
+				add_path(distinct_rel, (Path *)
+						 create_upper_unique_path(root,
+												  distinct_rel,
+												  path,
+												  list_length(root->distinct_pathkeys),
+												  numDistinctRows));
+			}
+		}
+
 		/* For explicit-sort case, always use the more rigorous clause */
 		if (list_length(root->distinct_pathkeys) <
 			list_length(root->sort_pathkeys))
@@ -4180,15 +4269,17 @@ create_distinct_paths(PlannerInfo *root,
 
 		path = cheapest_input_path;
 		if (!pathkeys_contained_in(needed_pathkeys, path->pathkeys))
-			path = (Path *) create_sort_path(root, distinct_rel,
+			path = (Path *) create_sort_path(root,
+											 distinct_rel,
 											 path,
 											 needed_pathkeys,
 											 -1.0);
 
 		add_path(distinct_rel, (Path *)
-				 create_upper_unique_path(root, distinct_rel,
+				 create_upper_unique_path(root,
+										  distinct_rel,
 										  path,
-										list_length(root->distinct_pathkeys),
+										  list_length(root->distinct_pathkeys),
 										  numDistinctRows));
 	}
 
@@ -4310,6 +4401,45 @@ create_ordered_paths(PlannerInfo *root,
 	ordered_rel->useridiscurrent = input_rel->useridiscurrent;
 	ordered_rel->fdwroutine = input_rel->fdwroutine;
 
+	/* sort_pathkeys present? - try to generate the gather merge path */
+	if (root->sort_pathkeys)
+	{
+		foreach(lc, input_rel->partial_pathlist)
+		{
+			Path	   *path = (Path *) lfirst(lc);
+			bool		is_sorted;
+			double		total_groups = path->rows * path->parallel_workers;
+
+			is_sorted = pathkeys_contained_in(root->sort_pathkeys,
+											  path->pathkeys);
+			if (!is_sorted)
+			{
+				/* An explicit sort here can take advantage of LIMIT */
+				path = (Path *) create_sort_path(root,
+												 ordered_rel,
+												 path,
+												 root->sort_pathkeys,
+												 limit_tuples);
+			}
+
+			/* create gather merge path */
+			path = (Path *) create_gather_merge_path(root,
+													 ordered_rel,
+													 path,
+													 target,
+													 root->sort_pathkeys,
+													 NULL,
+													 &total_groups);
+
+			/* Add projection step if needed */
+			if (path->pathtarget != target)
+				path = apply_projection_to_path(root, ordered_rel,
+												path, target);
+
+			add_path(ordered_rel, path);
+		}
+	}
+
 	foreach(lc, input_rel->pathlist)
 	{
 		Path	   *path = (Path *) lfirst(lc);
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index d91bc3b..e9d6279 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -605,6 +605,7 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 			break;
 
 		case T_Gather:
+		case T_GatherMerge:
 			set_upper_references(root, plan, rtoffset);
 			break;
 
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 263ba45..760f519 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2682,6 +2682,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		case T_Sort:
 		case T_Unique:
 		case T_Gather:
+		case T_GatherMerge:
 		case T_SetOp:
 		case T_Group:
 			break;
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index 6d3ccfd..b4a49d8 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1630,6 +1630,66 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
 }
 
 /*
+ * create_gather_merge_path
+ *
+ *	  Creates a path corresponding to a gather merge scan, returning
+ *	  the pathnode.
+ */
+GatherMergePath *
+create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
+						 PathTarget *target, List *pathkeys,
+						 Relids required_outer, double *rows)
+{
+	GatherMergePath *pathnode = makeNode(GatherMergePath);
+	Cost			 input_startup_cost = 0;
+	Cost			 input_total_cost = 0;
+
+	Assert(subpath->parallel_safe);
+	Assert(pathkeys);
+
+	pathnode->path.pathtype = T_GatherMerge;
+	pathnode->path.parent = rel;
+	pathnode->path.param_info = get_baserel_parampathinfo(root, rel,
+														  required_outer);
+	pathnode->path.parallel_aware = false;
+
+	pathnode->subpath = subpath;
+	pathnode->num_workers = subpath->parallel_workers;
+	pathnode->path.pathkeys = pathkeys;
+	pathnode->path.pathtarget = target ? target : rel->reltarget;
+	pathnode->path.rows += subpath->rows;
+
+	if (pathkeys_contained_in(pathkeys, subpath->pathkeys))
+	{
+		/* Subpath is adequately ordered, we won't need to sort it */
+		input_startup_cost += subpath->startup_cost;
+		input_total_cost += subpath->total_cost;
+	}
+	else
+	{
+		/* We'll need to insert a Sort node, so include cost for that */
+		Path		sort_path;		/* dummy for result of cost_sort */
+
+		cost_sort(&sort_path,
+				  root,
+				  pathkeys,
+				  subpath->total_cost,
+				  subpath->rows,
+				  subpath->pathtarget->width,
+				  0.0,
+				  work_mem,
+				  -1);
+		input_startup_cost += sort_path.startup_cost;
+		input_total_cost += sort_path.total_cost;
+	}
+
+	cost_gather_merge(pathnode, root, rel, pathnode->path.param_info,
+					  input_startup_cost, input_total_cost, rows);
+
+	return pathnode;
+}
+
+/*
  * translate_sub_tlist - get subquery column numbers represented by tlist
  *
  * The given targetlist usually contains only Vars referencing the given relid.
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index da74f00..8937032 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -894,6 +894,15 @@ static struct config_bool ConfigureNamesBool[] =
 		true,
 		NULL, NULL, NULL
 	},
+	{
+		{"enable_gathermerge", PGC_USERSET, QUERY_TUNING_METHOD,
+			gettext_noop("Enables the planner's use of gather merge plans."),
+			NULL
+		},
+		&enable_gathermerge,
+		true,
+		NULL, NULL, NULL
+	},
 
 	{
 		{"geqo", PGC_USERSET, QUERY_TUNING_GEQO,
diff --git a/src/include/executor/nodeGatherMerge.h b/src/include/executor/nodeGatherMerge.h
new file mode 100644
index 0000000..58dcebf
--- /dev/null
+++ b/src/include/executor/nodeGatherMerge.h
@@ -0,0 +1,27 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.h
+ *		prototypes for nodeGatherMerge.c
+ *
+ *
+ * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/executor/nodeGatherMerge.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef NODEGATHERMERGE_H
+#define NODEGATHERMERGE_H
+
+#include "nodes/execnodes.h"
+
+extern GatherMergeState *ExecInitGatherMerge(GatherMerge * node,
+					EState *estate,
+					int eflags);
+extern TupleTableSlot *ExecGatherMerge(GatherMergeState * node);
+extern void ExecEndGatherMerge(GatherMergeState * node);
+extern void ExecReScanGatherMerge(GatherMergeState * node);
+extern void ExecShutdownGatherMerge(GatherMergeState * node);
+
+#endif   /* NODEGATHERMERGE_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index f6f73f3..0c12e27 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -1969,6 +1969,33 @@ typedef struct GatherState
 } GatherState;
 
 /* ----------------
+ * GatherMergeState information
+ *
+ *		Gather Merge nodes launch one or more parallel workers, run a sorted
+ *		subplan in each of them, and merge their results into one sorted stream.
+ * ----------------
+ */
+struct GMReaderTuple;
+
+typedef struct GatherMergeState
+{
+	PlanState	ps;				/* its first field is NodeTag */
+	bool		initialized;
+	struct ParallelExecutorInfo *pei;
+	int			nreaders;
+	int			nworkers_launched;
+	struct TupleQueueReader **reader;
+	TupleDesc	tupDesc;
+	TupleTableSlot **gm_slots;
+	struct binaryheap *gm_heap; /* binary heap of slot indices */
+	bool		gm_initialized; /* gather merge initialized? */
+	bool		need_to_scan_locally;
+	int			gm_nkeys;
+	SortSupport gm_sortkeys;	/* array of length gm_nkeys */
+	struct GMReaderTupleBuffer *gm_tuple_buffers;	/* tuple buffer per reader */
+} GatherMergeState;
+
+/* ----------------
  *	 HashState information
  * ----------------
  */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index cb9307c..7edb114 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -75,6 +75,7 @@ typedef enum NodeTag
 	T_WindowAgg,
 	T_Unique,
 	T_Gather,
+	T_GatherMerge,
 	T_Hash,
 	T_SetOp,
 	T_LockRows,
@@ -123,6 +124,7 @@ typedef enum NodeTag
 	T_WindowAggState,
 	T_UniqueState,
 	T_GatherState,
+	T_GatherMergeState,
 	T_HashState,
 	T_SetOpState,
 	T_LockRowsState,
@@ -244,6 +246,7 @@ typedef enum NodeTag
 	T_MaterialPath,
 	T_UniquePath,
 	T_GatherPath,
+	T_GatherMergePath,
 	T_ProjectionPath,
 	T_SortPath,
 	T_GroupPath,
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index e2fbc7d..ec319bf 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -765,6 +765,22 @@ typedef struct Gather
 	bool		invisible;		/* suppress EXPLAIN display (for testing)? */
 } Gather;
 
+/* ------------
+ *		gather merge node
+ * ------------
+ */
+typedef struct GatherMerge
+{
+	Plan		plan;
+	int			num_workers;
+	/* remaining fields are just like the sort-key info in struct Sort */
+	int			numCols;		/* number of sort-key columns */
+	AttrNumber *sortColIdx;		/* their indexes in the target list */
+	Oid		   *sortOperators;	/* OIDs of operators to sort them by */
+	Oid		   *collations;		/* OIDs of collations */
+	bool	   *nullsFirst;		/* NULLS FIRST/LAST directions */
+} GatherMerge;
+
 /* ----------------
  *		hash build node
  *
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index 3a1255a..e9795f9 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -1194,6 +1194,19 @@ typedef struct GatherPath
 } GatherPath;
 
 /*
+ * GatherMergePath runs several copies of a plan in parallel and collects
+ * the results, preserving their common sort order.  For Gather Merge, the
+ * parallel leader always executes the plan itself.
+ */
+typedef struct GatherMergePath
+{
+	Path		path;
+	Path	   *subpath;		/* path for each worker */
+	int			num_workers;	/* number of workers sought to help */
+} GatherMergePath;
+
+
+/*
  * All join-type paths share these fields.
  */
 
diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h
index 2a4df2f..e986896 100644
--- a/src/include/optimizer/cost.h
+++ b/src/include/optimizer/cost.h
@@ -66,6 +66,7 @@ extern bool enable_nestloop;
 extern bool enable_material;
 extern bool enable_mergejoin;
 extern bool enable_hashjoin;
+extern bool enable_gathermerge;
 extern int	constraint_exclusion;
 
 extern double clamp_row_est(double nrows);
@@ -198,5 +199,9 @@ extern Selectivity clause_selectivity(PlannerInfo *root,
 				   int varRelid,
 				   JoinType jointype,
 				   SpecialJoinInfo *sjinfo);
+extern void cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+							  RelOptInfo *rel, ParamPathInfo *param_info,
+							  Cost input_startup_cost, Cost input_total_cost,
+							  double *rows);
 
 #endif   /* COST_H */
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index 71d9154..1df5861 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -267,5 +267,11 @@ extern ParamPathInfo *get_joinrel_parampathinfo(PlannerInfo *root,
 						  List **restrict_clauses);
 extern ParamPathInfo *get_appendrel_parampathinfo(RelOptInfo *appendrel,
 							Relids required_outer);
+extern GatherMergePath *create_gather_merge_path(PlannerInfo *root,
+												 RelOptInfo *rel, Path *subpath,
+												 PathTarget *target,
+												 List *pathkeys,
+												 Relids required_outer,
+												 double *rows);
 
 #endif   /* PATHNODE_H */
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
index f06cfa4..5c547e2 100644
--- a/src/test/regress/expected/rangefuncs.out
+++ b/src/test/regress/expected/rangefuncs.out
@@ -2,6 +2,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
          name         | setting 
 ----------------------+---------
  enable_bitmapscan    | on
+ enable_gathermerge   | on
  enable_hashagg       | on
  enable_hashjoin      | on
  enable_indexonlyscan | on
@@ -12,7 +13,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
  enable_seqscan       | on
  enable_sort          | on
  enable_tidscan       | on
-(11 rows)
+(12 rows)
 
 CREATE TABLE foo2(fooid int, f2 int);
 INSERT INTO foo2 VALUES(1, 11);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 6c6d519..a6c4a5f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -770,6 +770,8 @@ GV
 Gather
 GatherPath
 GatherState
+GatherMerge
+GatherMergeState
 Gene
 GenericCosts
 GenericExprState
#22Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Rushabh Lathia (#21)
Re: Gather Merge

On Thu, Nov 24, 2016 at 11:12 PM, Rushabh Lathia <rushabh.lathia@gmail.com>
wrote:

PFA latest patch with fix as well as few cosmetic changes.

Moved to next CF with "needs review" status.

Regards,
Hari Babu
Fujitsu Australia

#23Robert Haas
robertmhaas@gmail.com
In reply to: Haribabu Kommi (#22)
1 attachment(s)
Re: Gather Merge

On Sun, Dec 4, 2016 at 7:36 PM, Haribabu Kommi <kommi.haribabu@gmail.com> wrote:

On Thu, Nov 24, 2016 at 11:12 PM, Rushabh Lathia <rushabh.lathia@gmail.com>
wrote:

PFA latest patch with fix as well as few cosmetic changes.

Moved to next CF with "needs review" status.

I spent quite a bit of time on this patch over the last couple of
days. I was hoping to commit it, but I think it's not quite ready for
that yet and I hit a few other issues along the way. Meanwhile,
here's an updated version with the following changes:

* Adjusted cost_gather_merge because we don't need to worry about less
than 1 worker.
* Don't charge double maintenance cost of the heap per 34ca0905. This
was pointed out previously and Rushabh said it was fixed, but it wasn't
fixed in v5.
* cost_gather_merge claimed to charge a slightly higher IPC cost
because we have to block, but didn't. Fix it so it does.
* Move several hunks to more appropriate places in the file, near
related code or in a more logical position relative to surrounding
code.
* Fixed copyright dates for the new files. One said 2015, one said 2016.
* Removed unnecessary code from create_gather_merge_plan that tried to
handle an empty list of pathkeys (shouldn't happen).
* Make create_gather_merge_plan more consistent with
create_merge_append_plan. Remove make_gather_merge for the same
reason.
* Changed generate_gather_paths to generate gather merge paths. In
the previous coding, only the upper planner nodes ever tried to
generate gather merge nodes, but that seems unnecessarily limiting,
since it could be useful to generate a gathered path with pathkeys at
any point in the tree where we'd generate a gathered path with no
pathkeys.
* Rewrote generate_ordered_paths() logic to consider only the one
potentially-useful path not now covered by the new code in
generate_gather_paths().
* Reverted changes in generate_distinct_paths(). I think we should
add something here but the existing logic definitely isn't right
considering the change to generate_gather_paths().
* Assorted cosmetic cleanup in nodeGatherMerge.c.
* Documented the new GUC enable_gathermerge.
* Improved comments. Dropped one that seemed unnecessary.
* Fixed parts of the patch to be more pgindent-clean.

Testing this against the TPC-H queries at 10GB with
max_parallel_workers_per_gather = 4, seq_page_cost = 0.1,
random_page_cost = 0.1, work_mem = 64MB initially produced somewhat
demoralizing results. Only Q17, Q4, and Q8 picked Gather Merge, and
of those only Q17 got faster. Investigating this led to me realizing
that join costing for parallel joins is all messed up: see
/messages/by-id/CA+TgmoYt2pyk2CTyvYCtFySXN=jsorGh8_MJTTLoWU5qkJOkYQ@mail.gmail.com
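
For anyone trying to reproduce this, the settings above can be applied per
session; this is just a sketch restating the parameters already listed (all
of them are ordinary GUCs):

SET max_parallel_workers_per_gather = 4;
SET seq_page_cost = 0.1;
SET random_page_cost = 0.1;
SET work_mem = '64MB';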

With that patch applied, in my testing, Gather Merge got picked for
Q3, Q4, Q5, Q6, Q7, Q8, Q10, and Q17, but a lot of those queries get a
little slower instead of a little faster. Here are the timings --
these are with EXPLAIN ANALYZE, so take them with a grain of salt --
first number is without Gather Merge, second is with Gather Merge:

Q3 16943.938 ms -> 18645.957 ms
Q4 3155.350 ms -> 4179.431 ms
Q5 13611.484 ms -> 13831.946 ms
Q6 9264.942 ms -> 8734.899 ms
Q7 9759.026 ms -> 10007.307 ms
Q8 2473.899 ms -> 2459.225 ms
Q10 13814.950 ms -> 12255.618 ms
Q17 49552.298 ms -> 46633.632 ms

I haven't really had time to dig into these results yet, so I'm not
sure how "real" these numbers are and how much is run-to-run jitter,
EXPLAIN ANALYZE distortion, or whatever. I think this overall concept
is good, because there should be cases where it's substantially
cheaper to preserve the order while gathering tuples from workers than
to re-sort afterwards. But this particular set of results is a bit
lackluster.
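
A quick way to compare the two strategies on any ordered query is to toggle
the new GUC and look at the plan shape; the table and column below are just
placeholders, not from the benchmark:

SET enable_gathermerge = off;
EXPLAIN SELECT * FROM some_table ORDER BY some_col;  -- Sort over Gather
SET enable_gathermerge = on;
EXPLAIN SELECT * FROM some_table ORDER BY some_col;  -- may pick Gather Merge,
                                                     -- with the Sort pushed below it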

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachments:

gather-merge-v6.patch (application/x-download)
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 30dd54c..48d95cd 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -3455,6 +3455,20 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-enable-gathermerge" xreflabel="enable_gathermerge">
+      <term><varname>enable_gathermerge</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>enable_gathermerge</> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        Enables or disables the query planner's use of gather
+        merge plan types. The default is <literal>on</>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-enable-hashagg" xreflabel="enable_hashagg">
       <term><varname>enable_hashagg</varname> (<type>boolean</type>)
       <indexterm>
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index c762fb0..3beb79e 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -881,6 +881,9 @@ ExplainNode(PlanState *planstate, List *ancestors,
 		case T_Gather:
 			pname = sname = "Gather";
 			break;
+		case T_GatherMerge:
+			pname = sname = "Gather Merge";
+			break;
 		case T_IndexScan:
 			pname = sname = "Index Scan";
 			break;
@@ -1370,6 +1373,26 @@ ExplainNode(PlanState *planstate, List *ancestors,
 					ExplainPropertyBool("Single Copy", gather->single_copy, es);
 			}
 			break;
+		case T_GatherMerge:
+			{
+				GatherMerge *gm = (GatherMerge *) plan;
+
+				show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+				if (plan->qual)
+					show_instrumentation_count("Rows Removed by Filter", 1,
+											   planstate, es);
+				ExplainPropertyInteger("Workers Planned",
+									   gm->num_workers, es);
+				if (es->analyze)
+				{
+					int			nworkers;
+
+					nworkers = ((GatherMergeState *) planstate)->nworkers_launched;
+					ExplainPropertyInteger("Workers Launched",
+										   nworkers, es);
+				}
+			}
+			break;
 		case T_FunctionScan:
 			if (es->verbose)
 			{
diff --git a/src/backend/executor/Makefile b/src/backend/executor/Makefile
index 51edd4c..7e2f4e2 100644
--- a/src/backend/executor/Makefile
+++ b/src/backend/executor/Makefile
@@ -19,7 +19,7 @@ OBJS = execAmi.o execCurrent.o execGrouping.o execIndexing.o execJunk.o \
        nodeBitmapAnd.o nodeBitmapOr.o \
        nodeBitmapHeapscan.o nodeBitmapIndexscan.o nodeCustom.o nodeGather.o \
        nodeHash.o nodeHashjoin.o nodeIndexscan.o nodeIndexonlyscan.o \
-       nodeLimit.o nodeLockRows.o \
+       nodeLimit.o nodeLockRows.o nodeGatherMerge.o \
        nodeMaterial.o nodeMergeAppend.o nodeMergejoin.o nodeModifyTable.o \
        nodeNestloop.o nodeFunctionscan.o nodeRecursiveunion.o nodeResult.o \
        nodeSamplescan.o nodeSeqscan.o nodeSetOp.o nodeSort.o nodeUnique.o \
diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index b8edd36..98baaf3 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -101,6 +101,7 @@
 #include "executor/nodeModifyTable.h"
 #include "executor/nodeNestloop.h"
 #include "executor/nodeGather.h"
+#include "executor/nodeGatherMerge.h"
 #include "executor/nodeRecursiveunion.h"
 #include "executor/nodeResult.h"
 #include "executor/nodeSamplescan.h"
@@ -314,6 +315,11 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
 												  estate, eflags);
 			break;
 
+		case T_GatherMerge:
+			result = (PlanState *) ExecInitGatherMerge((GatherMerge *) node,
+													   estate, eflags);
+			break;
+
 		case T_Hash:
 			result = (PlanState *) ExecInitHash((Hash *) node,
 												estate, eflags);
@@ -515,6 +521,10 @@ ExecProcNode(PlanState *node)
 			result = ExecGather((GatherState *) node);
 			break;
 
+		case T_GatherMergeState:
+			result = ExecGatherMerge((GatherMergeState *) node);
+			break;
+
 		case T_HashState:
 			result = ExecHash((HashState *) node);
 			break;
@@ -673,6 +683,10 @@ ExecEndNode(PlanState *node)
 			ExecEndGather((GatherState *) node);
 			break;
 
+		case T_GatherMergeState:
+			ExecEndGatherMerge((GatherMergeState *) node);
+			break;
+
 		case T_IndexScanState:
 			ExecEndIndexScan((IndexScanState *) node);
 			break;
@@ -806,6 +820,9 @@ ExecShutdownNode(PlanState *node)
 		case T_GatherState:
 			ExecShutdownGather((GatherState *) node);
 			break;
+		case T_GatherMergeState:
+			ExecShutdownGatherMerge((GatherMergeState *) node);
+			break;
 		default:
 			break;
 	}
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
new file mode 100644
index 0000000..00b74b9
--- /dev/null
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -0,0 +1,718 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.c
+ *		Scan a plan in multiple workers, and do order-preserving merge.
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/backend/executor/nodeGatherMerge.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/relscan.h"
+#include "access/xact.h"
+#include "executor/execdebug.h"
+#include "executor/execParallel.h"
+#include "executor/nodeGatherMerge.h"
+#include "executor/nodeSubplan.h"
+#include "executor/tqueue.h"
+#include "lib/binaryheap.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+#include "utils/rel.h"
+
+/*
+ * Tuple array for each worker
+ */
+typedef struct GMReaderTupleBuffer
+{
+	HeapTuple  *tuple;
+	int			readCounter;
+	int			nTuples;
+	bool		done;
+}	GMReaderTupleBuffer;
+
+/*
+ * When we read tuples from workers, it's a good idea to read several at once
+ * for efficiency when possible: this minimizes context-switching overhead.
+ * But reading too many at a time wastes memory without improving performance.
+ */
+#define MAX_TUPLE_STORE 10
+
+static int32 heap_compare_slots(Datum a, Datum b, void *arg);
+static TupleTableSlot *gather_merge_getnext(GatherMergeState *gm_state);
+static HeapTuple gm_readnext_tuple(GatherMergeState *gm_state, int nreader,
+				  bool nowait, bool *done);
+static void gather_merge_init(GatherMergeState *gm_state);
+static void ExecShutdownGatherMergeWorkers(GatherMergeState *node);
+static bool gather_merge_readnext(GatherMergeState *gm_state, int reader,
+					  bool nowait);
+static void form_tuple_array(GatherMergeState *gm_state, int reader);
+
+/* ----------------------------------------------------------------
+ *		ExecInitGatherMerge
+ * ----------------------------------------------------------------
+ */
+GatherMergeState *
+ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags)
+{
+	GatherMergeState *gm_state;
+	Plan	   *outerNode;
+	bool		hasoid;
+	TupleDesc	tupDesc;
+
+	/* Gather merge node doesn't have innerPlan node. */
+	Assert(innerPlan(node) == NULL);
+
+	/*
+	 * create state structure
+	 */
+	gm_state = makeNode(GatherMergeState);
+	gm_state->ps.plan = (Plan *) node;
+	gm_state->ps.state = estate;
+
+	/*
+	 * Miscellaneous initialization
+	 *
+	 * create expression context for node
+	 */
+	ExecAssignExprContext(estate, &gm_state->ps);
+
+	/*
+	 * initialize child expressions
+	 */
+	gm_state->ps.targetlist = (List *)
+		ExecInitExpr((Expr *) node->plan.targetlist,
+					 (PlanState *) gm_state);
+	gm_state->ps.qual = (List *)
+		ExecInitExpr((Expr *) node->plan.qual,
+					 (PlanState *) gm_state);
+
+	/*
+	 * tuple table initialization
+	 */
+	ExecInitResultTupleSlot(estate, &gm_state->ps);
+
+	/*
+	 * now initialize outer plan
+	 */
+	outerNode = outerPlan(node);
+	outerPlanState(gm_state) = ExecInitNode(outerNode, estate, eflags);
+
+	gm_state->ps.ps_TupFromTlist = false;
+
+	/*
+	 * Initialize result tuple type and projection info.
+	 */
+	ExecAssignResultTypeFromTL(&gm_state->ps);
+	ExecAssignProjectionInfo(&gm_state->ps, NULL);
+
+	gm_state->gm_initialized = false;
+
+	/*
+	 * initialize sort-key information
+	 */
+	if (node->numCols)
+	{
+		int			i;
+
+		gm_state->gm_nkeys = node->numCols;
+		gm_state->gm_sortkeys =
+			palloc0(sizeof(SortSupportData) * node->numCols);
+
+		for (i = 0; i < node->numCols; i++)
+		{
+			SortSupport sortKey = gm_state->gm_sortkeys + i;
+
+			sortKey->ssup_cxt = CurrentMemoryContext;
+			sortKey->ssup_collation = node->collations[i];
+			sortKey->ssup_nulls_first = node->nullsFirst[i];
+			sortKey->ssup_attno = node->sortColIdx[i];
+
+			/*
+			 * We don't perform abbreviated key conversion here, for the same
+			 * reasons that it isn't used in MergeAppend
+			 */
+			sortKey->abbreviate = false;
+
+			PrepareSortSupportFromOrderingOp(node->sortOperators[i], sortKey);
+		}
+	}
+
+	/*
+	 * store the tuple descriptor into gather merge state, so we can use it
+	 * later while initializing the gather merge slots.
+	 */
+	if (!ExecContextForcesOids(&gm_state->ps, &hasoid))
+		hasoid = false;
+	tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid);
+	gm_state->tupDesc = tupDesc;
+
+	return gm_state;
+}
+
+/* ----------------------------------------------------------------
+ *		ExecGatherMerge(node)
+ *
+ *		Scans the relation via multiple workers and returns
+ *		the next qualifying tuple.
+ * ----------------------------------------------------------------
+ */
+TupleTableSlot *
+ExecGatherMerge(GatherMergeState *node)
+{
+	TupleTableSlot *slot;
+	TupleTableSlot *resultSlot;
+	ExprDoneCond isDone;
+	ExprContext *econtext;
+	int			i;
+
+	/*
+	 * As with Gather, we don't launch workers until this node is actually
+	 * executed.
+	 */
+	if (!node->initialized)
+	{
+		EState	   *estate = node->ps.state;
+		GatherMerge *gm = (GatherMerge *) node->ps.plan;
+
+		/*
+		 * Sometimes we might have to run without parallelism; but if parallel
+		 * mode is active then we can try to fire up some workers.
+		 */
+		if (gm->num_workers > 0 && IsInParallelMode())
+		{
+			ParallelContext *pcxt;
+
+			/* Initialize data structures for workers. */
+			if (!node->pei)
+				node->pei = ExecInitParallelPlan(node->ps.lefttree,
+												 estate,
+												 gm->num_workers);
+
+			/* Try to launch workers. */
+			pcxt = node->pei->pcxt;
+			LaunchParallelWorkers(pcxt);
+			node->nworkers_launched = pcxt->nworkers_launched;
+
+			/* Set up tuple queue readers to read the results. */
+			if (pcxt->nworkers_launched > 0)
+			{
+				node->nreaders = 0;
+				node->reader = palloc(pcxt->nworkers_launched *
+									  sizeof(TupleQueueReader *));
+
+				Assert(gm->numCols);
+
+				for (i = 0; i < pcxt->nworkers_launched; ++i)
+				{
+					shm_mq_set_handle(node->pei->tqueue[i],
+									  pcxt->worker[i].bgwhandle);
+					node->reader[node->nreaders++] =
+						CreateTupleQueueReader(node->pei->tqueue[i],
+											   node->tupDesc);
+				}
+			}
+			else
+			{
+				/* No workers?	Then never mind. */
+				ExecShutdownGatherMergeWorkers(node);
+			}
+		}
+
+		/* always allow leader to participate */
+		node->need_to_scan_locally = true;
+		node->initialized = true;
+	}
+
+	/*
+	 * Check to see if we're still projecting out tuples from a previous scan
+	 * tuple (because there is a function-returning-set in the projection
+	 * expressions).  If so, try to project another one.
+	 */
+	if (node->ps.ps_TupFromTlist)
+	{
+		resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+		if (isDone == ExprMultipleResult)
+			return resultSlot;
+		/* Done with that source tuple... */
+		node->ps.ps_TupFromTlist = false;
+	}
+
+	/*
+	 * Reset per-tuple memory context to free any expression evaluation
+	 * storage allocated in the previous tuple cycle.  Note we can't do this
+	 * until we're done projecting.
+	 */
+	econtext = node->ps.ps_ExprContext;
+	ResetExprContext(econtext);
+
+	/* Get and return the next tuple, projecting if necessary. */
+	for (;;)
+	{
+		/*
+		 * Get next tuple, either from one of our workers, or by running the
+		 * plan ourselves.
+		 */
+		slot = gather_merge_getnext(node);
+		if (TupIsNull(slot))
+			return NULL;
+
+		/*
+		 * form the result tuple using ExecProject(), and return it --- unless
+		 * the projection produces an empty set, in which case we must loop
+		 * back around for another tuple
+		 */
+		econtext->ecxt_outertuple = slot;
+		resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
+
+		if (isDone != ExprEndResult)
+		{
+			node->ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
+			return resultSlot;
+		}
+	}
+
+	return slot;
+}
+
+/* ----------------------------------------------------------------
+ *		ExecEndGatherMerge
+ *
+ *		frees any storage allocated through C routines.
+ * ----------------------------------------------------------------
+ */
+void
+ExecEndGatherMerge(GatherMergeState *node)
+{
+	ExecShutdownGatherMerge(node);
+	ExecFreeExprContext(&node->ps);
+	ExecClearTuple(node->ps.ps_ResultTupleSlot);
+	ExecEndNode(outerPlanState(node));
+}
+
+/* ----------------------------------------------------------------
+ *		ExecShutdownGatherMerge
+ *
+ *		Destroy the setup for parallel workers including parallel context.
+ *		Collect all the stats after workers are stopped, else some work
+ *		done by workers won't be accounted.
+ * ----------------------------------------------------------------
+ */
+void
+ExecShutdownGatherMerge(GatherMergeState *node)
+{
+	ExecShutdownGatherMergeWorkers(node);
+
+	/* Now destroy the parallel context. */
+	if (node->pei != NULL)
+	{
+		ExecParallelCleanup(node->pei);
+		node->pei = NULL;
+	}
+}
+
+/* ----------------------------------------------------------------
+ *		ExecShutdownGatherMergeWorkers
+ *
+ *		Destroy the parallel workers.  Collect all the stats after
+ *		workers are stopped, else some work done by workers won't be
+ *		accounted.
+ * ----------------------------------------------------------------
+ */
+static void
+ExecShutdownGatherMergeWorkers(GatherMergeState *node)
+{
+	/* Shut down tuple queue readers before shutting down workers. */
+	if (node->reader != NULL)
+	{
+		int			i;
+
+		for (i = 0; i < node->nreaders; ++i)
+			if (node->reader[i])
+				DestroyTupleQueueReader(node->reader[i]);
+
+		pfree(node->reader);
+		node->reader = NULL;
+	}
+
+	/* Now shut down the workers. */
+	if (node->pei != NULL)
+		ExecParallelFinish(node->pei);
+}
+
+/* ----------------------------------------------------------------
+ *		ExecReScanGatherMerge
+ *
+ *		Re-initialize the workers and rescan the relation via them.
+ * ----------------------------------------------------------------
+ */
+void
+ExecReScanGatherMerge(GatherMergeState *node)
+{
+	/*
+	 * Re-initialize the parallel workers to perform the rescan of the
+	 * relation.  We want to gracefully shut down all the workers so that they
+	 * can propagate any error or other information to the master backend
+	 * before dying.  The parallel context will be reused for the rescan.
+	 */
+	ExecShutdownGatherMergeWorkers(node);
+
+	node->initialized = false;
+
+	if (node->pei)
+		ExecParallelReinitialize(node->pei);
+
+	ExecReScan(node->ps.lefttree);
+}
+
+/*
+ * Initialize the Gather merge tuple read.
+ *
+ * Pull at least a single tuple from each worker + leader and set up the heap.
+ */
+static void
+gather_merge_init(GatherMergeState *gm_state)
+{
+	int			nreaders = gm_state->nreaders;
+	bool		initialize = true;
+	int			i;
+
+	/*
+	 * Allocate gm_slots: one slot per worker plus one extra slot for the
+	 * leader.  The last slot is always the leader's.  The leader reads
+	 * tuples by calling ExecProcNode(), which returns a TupleTableSlot that
+	 * is assigned directly into gm_slots, so just initialize the leader's
+	 * slot to NULL.  For the worker slots, the code below calls
+	 * ExecInitExtraTupleSlot() to perform the necessary initialization of
+	 * each worker slot.
+	 */
+	gm_state->gm_slots =
+		palloc((gm_state->nreaders + 1) * sizeof(TupleTableSlot *));
+	gm_state->gm_slots[gm_state->nreaders] = NULL;
+
+	/* Initialize the tuple slot and tuple array for each worker */
+	gm_state->gm_tuple_buffers =
+		(GMReaderTupleBuffer *) palloc0(sizeof(GMReaderTupleBuffer) *
+										(gm_state->nreaders + 1));
+	for (i = 0; i < gm_state->nreaders; i++)
+	{
+		/* Allocate the tuple array with MAX_TUPLE_STORE size */
+		gm_state->gm_tuple_buffers[i].tuple =
+			(HeapTuple *) palloc0(sizeof(HeapTuple) * MAX_TUPLE_STORE);
+
+		/* Initialize slot for worker */
+		gm_state->gm_slots[i] = ExecInitExtraTupleSlot(gm_state->ps.state);
+		ExecSetSlotDescriptor(gm_state->gm_slots[i],
+							  gm_state->tupDesc);
+	}
+
+	/* Allocate the resources for the merge */
+	gm_state->gm_heap = binaryheap_allocate(gm_state->nreaders + 1,
+											heap_compare_slots,
+											gm_state);
+
+	/*
+	 * First, try to read a tuple from each worker (including the leader) in
+	 * nowait mode, so that we start the read from every participant.  After
+	 * that, if some active worker has still not produced a tuple, loop back
+	 * and read from it again, this time in wait mode.  For workers that were
+	 * able to produce a tuple earlier and are still active, just try to fill
+	 * the tuple array if more tuples are available.
+	 */
+reread:
+	for (i = 0; i < nreaders + 1; i++)
+	{
+		if (!gm_state->gm_tuple_buffers[i].done &&
+			(TupIsNull(gm_state->gm_slots[i]) ||
+			 gm_state->gm_slots[i]->tts_isempty))
+		{
+			if (gather_merge_readnext(gm_state, i, initialize))
+			{
+				binaryheap_add_unordered(gm_state->gm_heap,
+										 Int32GetDatum(i));
+			}
+		}
+		else
+			form_tuple_array(gm_state, i);
+	}
+	initialize = false;
+
+	for (i = 0; i < nreaders; i++)
+		if (!gm_state->gm_tuple_buffers[i].done &&
+			(TupIsNull(gm_state->gm_slots[i]) ||
+			 gm_state->gm_slots[i]->tts_isempty))
+			goto reread;
+
+	binaryheap_build(gm_state->gm_heap);
+	gm_state->gm_initialized = true;
+}
+
+/*
+ * Clear out the tuple table slot for each gather merge input,
+ * and return a cleared slot.
+ */
+static TupleTableSlot *
+gather_merge_clear_slots(GatherMergeState *gm_state)
+{
+	int			i;
+
+	for (i = 0; i < gm_state->nreaders; i++)
+	{
+		pfree(gm_state->gm_tuple_buffers[i].tuple);
+		gm_state->gm_slots[i] = ExecClearTuple(gm_state->gm_slots[i]);
+	}
+
+	/* Free tuple array as we don't need it any more */
+	pfree(gm_state->gm_tuple_buffers);
+	/* Free the binaryheap, which was created for sort */
+	binaryheap_free(gm_state->gm_heap);
+
+	/* return any clear slot */
+	return gm_state->gm_slots[0];
+}
+
+/*
+ * Read the next tuple for gather merge.
+ *
+ * Fetch the sorted tuple out of the heap.
+ */
+static TupleTableSlot *
+gather_merge_getnext(GatherMergeState *gm_state)
+{
+	int			i;
+
+	/*
+	 * First time through: pull the first tuple from each participant, and set
+	 * up the heap.
+	 */
+	if (gm_state->gm_initialized == false)
+		gather_merge_init(gm_state);
+	else
+	{
+		/*
+		 * Otherwise, pull the next tuple from whichever participant we
+		 * returned from last time, and reinsert the index into the heap,
+		 * because it might now compare differently against the existing
+		 * elements of the heap.
+		 */
+		i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+
+		if (gather_merge_readnext(gm_state, i, false))
+			binaryheap_replace_first(gm_state->gm_heap, Int32GetDatum(i));
+		else
+			(void) binaryheap_remove_first(gm_state->gm_heap);
+	}
+
+	if (binaryheap_empty(gm_state->gm_heap))
+	{
+		/* All the queues are exhausted, and so is the heap */
+		return gather_merge_clear_slots(gm_state);
+	}
+	else
+	{
+		i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+		return gm_state->gm_slots[i];
+	}
+
+	return gather_merge_clear_slots(gm_state);
+}
+
+/*
+ * Read tuples for the given reader in nowait mode, and form the tuple array.
+ */
+static void
+form_tuple_array(GatherMergeState *gm_state, int reader)
+{
+	GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+	int			i;
+
+	/* Last slot is for leader and we don't build tuple array for leader */
+	if (reader == gm_state->nreaders)
+		return;
+
+	/*
+	 * If we have already read all the tuples from the tuple array, reset the
+	 * counters so that the array can be refilled.
+	 */
+	if (tuple_buffer->nTuples == tuple_buffer->readCounter)
+		tuple_buffer->nTuples = tuple_buffer->readCounter = 0;
+
+	/* Tuple array is already full? */
+	if (tuple_buffer->nTuples == MAX_TUPLE_STORE)
+		return;
+
+	for (i = tuple_buffer->nTuples; i < MAX_TUPLE_STORE; i++)
+	{
+		tuple_buffer->tuple[i] = heap_copytuple(gm_readnext_tuple(gm_state,
+																  reader,
+																  false,
+													   &tuple_buffer->done));
+		if (!HeapTupleIsValid(tuple_buffer->tuple[i]))
+			break;
+		tuple_buffer->nTuples++;
+	}
+}
+
+/*
+ * Store the next tuple for a given reader into the appropriate slot.
+ *
+ * Returns false if the reader is exhausted, and true otherwise.
+ */
+static bool
+gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait)
+{
+	GMReaderTupleBuffer *tuple_buffer;
+	HeapTuple	tup = NULL;
+
+	/*
+	 * If we're being asked to generate a tuple from the leader, then we
+	 * just call ExecProcNode as normal to produce one.
+	 */
+	if (gm_state->nreaders == reader)
+	{
+		if (gm_state->need_to_scan_locally)
+		{
+			PlanState  *outerPlan = outerPlanState(gm_state);
+			TupleTableSlot *outerTupleSlot;
+
+			outerTupleSlot = ExecProcNode(outerPlan);
+
+			if (!TupIsNull(outerTupleSlot))
+			{
+				gm_state->gm_slots[reader] = outerTupleSlot;
+				return true;
+			}
+			gm_state->gm_tuple_buffers[reader].done = true;
+			gm_state->need_to_scan_locally = false;
+		}
+		return false;
+	}
+
+	/* Otherwise, check the state of the relevant tuple buffer. */
+	tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+
+	if (tuple_buffer->nTuples > tuple_buffer->readCounter)
+	{
+		/* Return any tuple previously read that is still buffered. */
+		tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+		tup = tuple_buffer->tuple[tuple_buffer->readCounter++];
+	}
+	else if (tuple_buffer->done)
+	{
+		/* Reader is known to be exhausted. */
+		DestroyTupleQueueReader(gm_state->reader[reader]);
+		gm_state->reader[reader] = NULL;
+		return false;
+	}
+	else
+	{
+		/* Read and buffer next tuple. */
+		tup = heap_copytuple(gm_readnext_tuple(gm_state,
+											   reader,
+											   nowait,
+											   &tuple_buffer->done));
+
+		/*
+		 * Attempt to read more tuples in nowait mode and store them in
+		 * the tuple array.
+		 */
+		if (HeapTupleIsValid(tup))
+			form_tuple_array(gm_state, reader);
+		else
+			return false;
+	}
+
+	Assert(HeapTupleIsValid(tup));
+
+	/* Build the TupleTableSlot for the given tuple */
+	ExecStoreTuple(tup,			/* tuple to store */
+				   gm_state->gm_slots[reader],	/* slot in which to store the
+												 * tuple */
+				   InvalidBuffer,		/* buffer associated with this tuple */
+				   true);		/* pfree this pointer if not from heap */
+
+	return true;
+}
+
+/*
+ * Attempt to read a tuple from given reader.
+ */
+static HeapTuple
+gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait,
+				  bool *done)
+{
+	TupleQueueReader *reader;
+	HeapTuple	tup = NULL;
+	MemoryContext oldContext;
+	MemoryContext tupleContext;
+
+	tupleContext = gm_state->ps.ps_ExprContext->ecxt_per_tuple_memory;
+
+	if (done != NULL)
+		*done = false;
+
+	/* Check for async events, particularly messages from workers. */
+	CHECK_FOR_INTERRUPTS();
+
+	/* Attempt to read a tuple. */
+	reader = gm_state->reader[nreader];
+
+	/* Run TupleQueueReaders in per-tuple context */
+	oldContext = MemoryContextSwitchTo(tupleContext);
+	tup = TupleQueueReaderNext(reader, nowait, done);
+	MemoryContextSwitchTo(oldContext);
+
+	return tup;
+}
+
+/*
+ * We have one slot for each item in the heap array.  We use SlotNumber
+ * to store slot indexes.  This doesn't actually provide any formal
+ * type-safety, but it makes the code more self-documenting.
+ */
+typedef int32 SlotNumber;
+
+/*
+ * Compare the tuples in the two given slots.
+ */
+static int32
+heap_compare_slots(Datum a, Datum b, void *arg)
+{
+	GatherMergeState *node = (GatherMergeState *) arg;
+	SlotNumber	slot1 = DatumGetInt32(a);
+	SlotNumber	slot2 = DatumGetInt32(b);
+
+	TupleTableSlot *s1 = node->gm_slots[slot1];
+	TupleTableSlot *s2 = node->gm_slots[slot2];
+	int			nkey;
+
+	Assert(!TupIsNull(s1));
+	Assert(!TupIsNull(s2));
+
+	for (nkey = 0; nkey < node->gm_nkeys; nkey++)
+	{
+		SortSupport sortKey = node->gm_sortkeys + nkey;
+		AttrNumber	attno = sortKey->ssup_attno;
+		Datum		datum1,
+					datum2;
+		bool		isNull1,
+					isNull2;
+		int			compare;
+
+		datum1 = slot_getattr(s1, attno, &isNull1);
+		datum2 = slot_getattr(s2, attno, &isNull2);
+
+		compare = ApplySortComparator(datum1, isNull1,
+									  datum2, isNull2,
+									  sortKey);
+		if (compare != 0)
+			return -compare;
+	}
+	return 0;
+}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 930f2f1..bd009c4 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -341,6 +341,31 @@ _copyGather(const Gather *from)
 	return newnode;
 }
 
+/*
+ * _copyGatherMerge
+ */
+static GatherMerge *
+_copyGatherMerge(const GatherMerge *from)
+{
+	GatherMerge	   *newnode = makeNode(GatherMerge);
+
+	/*
+	 * copy node superclass fields
+	 */
+	CopyPlanFields((const Plan *) from, (Plan *) newnode);
+
+	/*
+	 * copy remainder of node
+	 */
+	COPY_SCALAR_FIELD(num_workers);
+	COPY_SCALAR_FIELD(numCols);
+	COPY_POINTER_FIELD(sortColIdx, from->numCols * sizeof(AttrNumber));
+	COPY_POINTER_FIELD(sortOperators, from->numCols * sizeof(Oid));
+	COPY_POINTER_FIELD(collations, from->numCols * sizeof(Oid));
+	COPY_POINTER_FIELD(nullsFirst, from->numCols * sizeof(bool));
+
+	return newnode;
+}
 
 /*
  * CopyScanFields
@@ -4421,6 +4446,9 @@ copyObject(const void *from)
 		case T_Gather:
 			retval = _copyGather(from);
 			break;
+		case T_GatherMerge:
+			retval = _copyGatherMerge(from);
+			break;
 		case T_SeqScan:
 			retval = _copySeqScan(from);
 			break;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 806d0a9..c648bed 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -447,6 +447,35 @@ _outGather(StringInfo str, const Gather *node)
 }
 
 static void
+_outGatherMerge(StringInfo str, const GatherMerge *node)
+{
+	int		i;
+
+	WRITE_NODE_TYPE("GATHERMERGE");
+
+	_outPlanInfo(str, (const Plan *) node);
+
+	WRITE_INT_FIELD(num_workers);
+	WRITE_INT_FIELD(numCols);
+
+	appendStringInfoString(str, " :sortColIdx");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %d", node->sortColIdx[i]);
+
+	appendStringInfoString(str, " :sortOperators");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %u", node->sortOperators[i]);
+
+	appendStringInfoString(str, " :collations");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %u", node->collations[i]);
+
+	appendStringInfoString(str, " :nullsFirst");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %s", booltostr(node->nullsFirst[i]));
+}
+
+static void
 _outScan(StringInfo str, const Scan *node)
 {
 	WRITE_NODE_TYPE("SCAN");
@@ -1964,6 +1993,17 @@ _outLimitPath(StringInfo str, const LimitPath *node)
 }
 
 static void
+_outGatherMergePath(StringInfo str, const GatherMergePath *node)
+{
+	WRITE_NODE_TYPE("GATHERMERGEPATH");
+
+	_outPathInfo(str, (const Path *) node);
+
+	WRITE_NODE_FIELD(subpath);
+	WRITE_INT_FIELD(num_workers);
+}
+
+static void
 _outNestPath(StringInfo str, const NestPath *node)
 {
 	WRITE_NODE_TYPE("NESTPATH");
@@ -3377,6 +3417,9 @@ outNode(StringInfo str, const void *obj)
 			case T_Gather:
 				_outGather(str, obj);
 				break;
+			case T_GatherMerge:
+				_outGatherMerge(str, obj);
+				break;
 			case T_Scan:
 				_outScan(str, obj);
 				break;
@@ -3704,6 +3747,9 @@ outNode(StringInfo str, const void *obj)
 			case T_LimitPath:
 				_outLimitPath(str, obj);
 				break;
+			case T_GatherMergePath:
+				_outGatherMergePath(str, obj);
+				break;
 			case T_NestPath:
 				_outNestPath(str, obj);
 				break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index dc40d01..20797f0 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -2077,6 +2077,26 @@ _readGather(void)
 }
 
 /*
+ * _readGatherMerge
+ */
+static GatherMerge *
+_readGatherMerge(void)
+{
+	READ_LOCALS(GatherMerge);
+
+	ReadCommonPlan(&local_node->plan);
+
+	READ_INT_FIELD(num_workers);
+	READ_INT_FIELD(numCols);
+	READ_ATTRNUMBER_ARRAY(sortColIdx, local_node->numCols);
+	READ_OID_ARRAY(sortOperators, local_node->numCols);
+	READ_OID_ARRAY(collations, local_node->numCols);
+	READ_BOOL_ARRAY(nullsFirst, local_node->numCols);
+
+	READ_DONE();
+}
+
+/*
  * _readHash
  */
 static Hash *
@@ -2509,6 +2529,8 @@ parseNodeString(void)
 		return_value = _readUnique();
 	else if (MATCH("GATHER", 6))
 		return_value = _readGather();
+	else if (MATCH("GATHERMERGE", 11))
+		return_value = _readGatherMerge();
 	else if (MATCH("HASH", 4))
 		return_value = _readHash();
 	else if (MATCH("SETOP", 5))
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index 46d7d06..d909ba4 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -1999,39 +1999,51 @@ set_worktable_pathlist(PlannerInfo *root, RelOptInfo *rel, RangeTblEntry *rte)
 
 /*
  * generate_gather_paths
- *		Generate parallel access paths for a relation by pushing a Gather on
- *		top of a partial path.
+ *		Generate parallel access paths for a relation by pushing a Gather or
+ *		Gather Merge on top of a partial path.
  *
  * This must not be called until after we're done creating all partial paths
  * for the specified relation.  (Otherwise, add_partial_path might delete a
- * path that some GatherPath has a reference to.)
+ * path that some GatherPath or GatherMergePath has a reference to.)
  */
 void
 generate_gather_paths(PlannerInfo *root, RelOptInfo *rel)
 {
 	Path	   *cheapest_partial_path;
 	Path	   *simple_gather_path;
+	ListCell   *lc;
 
 	/* If there are no partial paths, there's nothing to do here. */
 	if (rel->partial_pathlist == NIL)
 		return;
 
 	/*
-	 * The output of Gather is currently always unsorted, so there's only one
-	 * partial path of interest: the cheapest one.  That will be the one at
-	 * the front of partial_pathlist because of the way add_partial_path
-	 * works.
-	 *
-	 * Eventually, we should have a Gather Merge operation that can merge
-	 * multiple tuple streams together while preserving their ordering.  We
-	 * could usefully generate such a path from each partial path that has
-	 * non-NIL pathkeys.
+	 * The output of Gather is always unsorted, so there's only one partial
+	 * path of interest: the cheapest one.  That will be the one at the front
+	 * of partial_pathlist because of the way add_partial_path works.
 	 */
 	cheapest_partial_path = linitial(rel->partial_pathlist);
 	simple_gather_path = (Path *)
 		create_gather_path(root, rel, cheapest_partial_path, rel->reltarget,
 						   NULL, NULL);
 	add_path(rel, simple_gather_path);
+
+	/*
+	 * For each useful ordering, we can consider an order-preserving Gather
+	 * Merge.
+	 */
+	foreach (lc, rel->partial_pathlist)
+	{
+		Path   *subpath = (Path *) lfirst(lc);
+		GatherMergePath   *path;
+
+		if (subpath->pathkeys == NIL)
+			continue;
+
+		path = create_gather_merge_path(root, rel, subpath, rel->reltarget,
+										subpath->pathkeys, NULL, NULL);
+		add_path(rel, &path->path);
+	}
 }
 
 /*
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index a52eb7e..dfc3b78 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -126,6 +126,7 @@ bool		enable_nestloop = true;
 bool		enable_material = true;
 bool		enable_mergejoin = true;
 bool		enable_hashjoin = true;
+bool		enable_gathermerge = true;
 
 typedef struct
 {
@@ -391,6 +392,73 @@ cost_gather(GatherPath *path, PlannerInfo *root,
 }
 
 /*
+ * cost_gather_merge
+ *	  Determines and returns the cost of a gather merge path.
+ *
+ * GatherMerge merges several pre-sorted input streams, using a heap that at
+ * any given instant holds the next tuple from each stream. If there are N
+ * streams, we need about N*log2(N) tuple comparisons to construct the heap at
+ * startup, and then for each output tuple, about log2(N) comparisons to
+ * replace the top heap entry with the next tuple from the same stream.
+ */
+void
+cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+				  RelOptInfo *rel, ParamPathInfo *param_info,
+				  Cost input_startup_cost, Cost input_total_cost,
+				  double *rows)
+{
+	Cost		startup_cost = 0;
+	Cost		run_cost = 0;
+	Cost		comparison_cost;
+	double		N;
+	double		logN;
+
+	/* Mark the path with the correct row estimate */
+	if (rows)
+		path->path.rows = *rows;
+	else if (param_info)
+		path->path.rows = param_info->ppi_rows;
+	else
+		path->path.rows = rel->rows;
+
+	if (!enable_gathermerge)
+		startup_cost += disable_cost;
+
+	/*
+	 * Add one to the number of workers to account for the leader.  This might
+	 * be overgenerous since the leader will do less work than other workers
+	 * in typical cases, but we'll go with it for now.
+	 */
+	Assert(path->num_workers > 0);
+	N = (double) path->num_workers + 1;
+	logN = LOG2(N);
+
+	/* Assumed cost per tuple comparison */
+	comparison_cost = 2.0 * cpu_operator_cost;
+
+	/* Heap creation cost */
+	startup_cost += comparison_cost * N * logN;
+
+	/* Per-tuple heap maintenance cost */
+	run_cost += path->path.rows * comparison_cost * logN;
+
+	/* small cost for heap management, like cost_merge_append */
+	run_cost += cpu_operator_cost * path->path.rows;
+
+	/*
+	 * Parallel setup and communication cost.  Since Gather Merge, unlike
+	 * Gather, requires us to block until a tuple is available from every
+	 * worker, we bump the IPC cost up a little bit as compared with Gather.
+	 * For lack of a better idea, charge an extra 5%.
+	 */
+	startup_cost += parallel_setup_cost;
+	run_cost += parallel_tuple_cost * path->path.rows * 1.05;
+
+	path->path.startup_cost = startup_cost + input_startup_cost;
+	path->path.total_cost = (startup_cost + run_cost + input_total_cost);
+}
+
+/*
  * cost_index
  *	  Determines and returns the cost of scanning a relation using an index.
  *
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index c7bcd9b..5dec091 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -270,6 +270,8 @@ static ModifyTable *make_modifytable(PlannerInfo *root,
 				 List *resultRelations, List *subplans,
 				 List *withCheckOptionLists, List *returningLists,
 				 List *rowMarks, OnConflictExpr *onconflict, int epqParam);
+static GatherMerge *create_gather_merge_plan(PlannerInfo *root,
+						 GatherMergePath *best_path);
 
 
 /*
@@ -463,6 +465,10 @@ create_plan_recurse(PlannerInfo *root, Path *best_path, int flags)
 											  (LimitPath *) best_path,
 											  flags);
 			break;
+		case T_GatherMerge:
+			plan = (Plan *) create_gather_merge_plan(root,
+											  (GatherMergePath *) best_path);
+			break;
 		default:
 			elog(ERROR, "unrecognized node type: %d",
 				 (int) best_path->pathtype);
@@ -1408,6 +1414,86 @@ create_gather_plan(PlannerInfo *root, GatherPath *best_path)
 }
 
 /*
+ * create_gather_merge_plan
+ *
+ *	  Create a Gather Merge plan for 'best_path' and (recursively)
+ *	  plans for its subpaths.
+ */
+static GatherMerge *
+create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
+{
+	GatherMerge *gm_plan;
+	Plan	   *subplan;
+	List	   *pathkeys = best_path->path.pathkeys;
+	int			numsortkeys;
+	AttrNumber *sortColIdx;
+	Oid		   *sortOperators;
+	Oid		   *collations;
+	bool	   *nullsFirst;
+
+	/* As with Gather, it's best to project away columns in the workers. */
+	subplan = create_plan_recurse(root, best_path->subpath, CP_EXACT_TLIST);
+
+	/* See create_merge_append_plan for why there's no make_xxx function */
+	gm_plan = makeNode(GatherMerge);
+	gm_plan->plan.targetlist = subplan->targetlist;
+	gm_plan->num_workers = best_path->num_workers;
+	copy_generic_path_info(&gm_plan->plan, &best_path->path);
+
+	/* Gather Merge is pointless with no pathkeys; use Gather instead. */
+	Assert(pathkeys != NIL);
+
+	/* Compute sort column info, and adjust GatherMerge tlist as needed */
+	(void) prepare_sort_from_pathkeys(&gm_plan->plan, pathkeys,
+									  best_path->path.parent->relids,
+									  NULL,
+									  true,
+									  &gm_plan->numCols,
+									  &gm_plan->sortColIdx,
+									  &gm_plan->sortOperators,
+									  &gm_plan->collations,
+									  &gm_plan->nullsFirst);
+
+
+	/* Compute sort column info, and adjust subplan's tlist as needed */
+	subplan = prepare_sort_from_pathkeys(subplan, pathkeys,
+										 best_path->subpath->parent->relids,
+										 gm_plan->sortColIdx,
+										 false,
+										 &numsortkeys,
+										 &sortColIdx,
+										 &sortOperators,
+										 &collations,
+										 &nullsFirst);
+
+	/* As for MergeAppend, check that we got the same sort key information. */
+	Assert(numsortkeys == gm_plan->numCols);
+	if (memcmp(sortColIdx, gm_plan->sortColIdx,
+			   numsortkeys * sizeof(AttrNumber)) != 0)
+		elog(ERROR, "GatherMerge child's targetlist doesn't match GatherMerge");
+	Assert(memcmp(sortOperators, gm_plan->sortOperators,
+				  numsortkeys * sizeof(Oid)) == 0);
+	Assert(memcmp(collations, gm_plan->collations,
+				  numsortkeys * sizeof(Oid)) == 0);
+	Assert(memcmp(nullsFirst, gm_plan->nullsFirst,
+				  numsortkeys * sizeof(bool)) == 0);
+
+	/* Now, insert a Sort node if subplan isn't sufficiently ordered */
+	if (!pathkeys_contained_in(pathkeys, best_path->subpath->pathkeys))
+		subplan = (Plan *) make_sort(subplan, numsortkeys,
+									 sortColIdx, sortOperators,
+									 collations, nullsFirst);
+
+	/* Now insert the subplan under GatherMerge. */
+	gm_plan->plan.lefttree = subplan;
+
+	/* use parallel mode for parallel plans. */
+	root->glob->parallelModeNeeded = true;
+
+	return gm_plan;
+}
+
+/*
  * create_projection_plan
  *
  *	  Create a plan tree to do a projection step and (recursively) plans
@@ -2246,7 +2332,6 @@ create_limit_plan(PlannerInfo *root, LimitPath *best_path, int flags)
 	return plan;
 }
 
-
 /*****************************************************************************
  *
  *	BASE-RELATION SCAN METHODS
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 207290f..fdcee75 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -3722,8 +3722,7 @@ create_grouping_paths(PlannerInfo *root,
 
 		/*
 		 * Now generate a complete GroupAgg Path atop of the cheapest partial
-		 * path. We need only bother with the cheapest path here, as the
-		 * output of Gather is never sorted.
+		 * path.  We can do this using either Gather or Gather Merge.
 		 */
 		if (grouped_rel->partial_pathlist)
 		{
@@ -3770,6 +3769,70 @@ create_grouping_paths(PlannerInfo *root,
 										   parse->groupClause,
 										   (List *) parse->havingQual,
 										   dNumGroups));
+
+			/*
+			 * The point of using Gather Merge rather than Gather is that it
+			 * can preserve the ordering of the input path, so there's no
+			 * reason to try it unless (1) it's possible to produce more than
+			 * one output row and (2) we want the output path to be ordered.
+			 */
+			if (parse->groupClause != NIL && root->group_pathkeys != NIL)
+			{
+				foreach(lc, grouped_rel->partial_pathlist)
+				{
+					Path	   *subpath = (Path *) lfirst(lc);
+					Path	   *gmpath;
+					double		total_groups;
+
+					/*
+					 * It's useful to consider paths that are already properly
+					 * ordered for Gather Merge, because those don't need a
+					 * sort.  It's also useful to consider the cheapest path,
+					 * because sorting it in parallel and then doing Gather
+					 * Merge may be better than doing an unordered Gather
+					 * followed by a sort.  But there's no point in
+					 * considering non-cheapest paths that aren't already
+					 * sorted correctly.
+					 */
+					if (path != subpath &&
+						!pathkeys_contained_in(root->group_pathkeys,
+											   subpath->pathkeys))
+						continue;
+
+					total_groups = subpath->rows * subpath->parallel_workers;
+
+					gmpath = (Path *)
+						create_gather_merge_path(root,
+												 grouped_rel,
+												 subpath,
+												 NULL,
+												 root->group_pathkeys,
+												 NULL,
+												 &total_groups);
+
+					if (parse->hasAggs)
+						add_path(grouped_rel, (Path *)
+								 create_agg_path(root,
+												 grouped_rel,
+												 gmpath,
+												 target,
+								 parse->groupClause ? AGG_SORTED : AGG_PLAIN,
+												 AGGSPLIT_FINAL_DESERIAL,
+												 parse->groupClause,
+												 (List *) parse->havingQual,
+												 &agg_final_costs,
+												 dNumGroups));
+					else
+						add_path(grouped_rel, (Path *)
+								 create_group_path(root,
+												   grouped_rel,
+												   gmpath,
+												   target,
+												   parse->groupClause,
+												   (List *) parse->havingQual,
+												   dNumGroups));
+				}
+			}
 		}
 	}
 
@@ -3867,6 +3930,16 @@ create_grouping_paths(PlannerInfo *root,
 	/* Now choose the best path(s) */
 	set_cheapest(grouped_rel);
 
+	/*
+	 * We've been using the partial pathlist for the grouped relation to hold
+	 * partially aggregated paths, but that's actually a little bit bogus
+	 * because it's unsafe for later planning stages -- like ordered_rel ---
+	 * to get the idea that they can use these partial paths as if they didn't
+	 * need a FinalizeAggregate step.  Zap the partial pathlist at this stage
+	 * so we don't get confused.
+	 */
+	grouped_rel->partial_pathlist = NIL;
+
 	return grouped_rel;
 }
 
@@ -4336,6 +4409,50 @@ create_ordered_paths(PlannerInfo *root,
 	}
 
 	/*
+	 * generate_gather_paths() will have already generated a simple Gather
+	 * path for the best parallel path, if any, and the loop above will have
+	 * considered sorting it.  Similarly, generate_gather_paths() will also
+	 * have generated order-preserving Gather Merge plans which can be used
+	 * without sorting if they happen to match the sort_pathkeys, and the loop
+	 * above will have handled those as well.  However, there's one more
+	 * possibility: it may make sense to sort the cheapest partial path
+	 * according to the required output order and then use Gather Merge.
+	 */
+	if (ordered_rel->consider_parallel && root->sort_pathkeys != NIL &&
+		input_rel->partial_pathlist != NIL)
+	{
+		Path	   *cheapest_partial_path;
+
+		cheapest_partial_path = linitial(input_rel->partial_pathlist);
+
+		/*
+		 * If cheapest partial path doesn't need a sort, this is redundant
+		 * with what's already been tried.
+		 */
+		if (!pathkeys_contained_in(root->sort_pathkeys,
+								   cheapest_partial_path->pathkeys))
+		{
+			Path	   *path;
+			double		total_groups;
+
+			total_groups = cheapest_partial_path->rows *
+				cheapest_partial_path->parallel_workers;
+			path = (Path *)
+				create_gather_merge_path(root, ordered_rel,
+										 cheapest_partial_path,
+										 target, root->sort_pathkeys, NULL,
+										 &total_groups);
+
+			/* Add projection step if needed */
+			if (path->pathtarget != target)
+				path = apply_projection_to_path(root, ordered_rel,
+												path, target);
+
+			add_path(ordered_rel, path);
+		}
+	}
+
+	/*
 	 * If there is an FDW that's responsible for all baserels of the query,
 	 * let it consider adding ForeignPaths.
 	 */
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index 413a0d9..0e15fbf 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -604,6 +604,7 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 			break;
 
 		case T_Gather:
+		case T_GatherMerge:
 			set_upper_references(root, plan, rtoffset);
 			break;
 
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index aad0b68..76aee75 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2685,6 +2685,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		case T_Sort:
 		case T_Unique:
 		case T_Gather:
+		case T_GatherMerge:
 		case T_SetOp:
 		case T_Group:
 			break;
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index 3b7c56d..27b5e52 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1630,6 +1630,66 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
 }
 
 /*
+ * create_gather_merge_path
+ *
+ *	  Creates a path corresponding to a gather merge scan, returning
+ *	  the pathnode.
+ */
+GatherMergePath *
+create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
+						 PathTarget *target, List *pathkeys,
+						 Relids required_outer, double *rows)
+{
+	GatherMergePath *pathnode = makeNode(GatherMergePath);
+	Cost			 input_startup_cost = 0;
+	Cost			 input_total_cost = 0;
+
+	Assert(subpath->parallel_safe);
+	Assert(pathkeys);
+
+	pathnode->path.pathtype = T_GatherMerge;
+	pathnode->path.parent = rel;
+	pathnode->path.param_info = get_baserel_parampathinfo(root, rel,
+														  required_outer);
+	pathnode->path.parallel_aware = false;
+
+	pathnode->subpath = subpath;
+	pathnode->num_workers = subpath->parallel_workers;
+	pathnode->path.pathkeys = pathkeys;
+	pathnode->path.pathtarget = target ? target : rel->reltarget;
+	pathnode->path.rows += subpath->rows;
+
+	if (pathkeys_contained_in(pathkeys, subpath->pathkeys))
+	{
+		/* Subpath is adequately ordered, we won't need to sort it */
+		input_startup_cost += subpath->startup_cost;
+		input_total_cost += subpath->total_cost;
+	}
+	else
+	{
+		/* We'll need to insert a Sort node, so include cost for that */
+		Path		sort_path;		/* dummy for result of cost_sort */
+
+		cost_sort(&sort_path,
+				  root,
+				  pathkeys,
+				  subpath->total_cost,
+				  subpath->rows,
+				  subpath->pathtarget->width,
+				  0.0,
+				  work_mem,
+				  -1);
+		input_startup_cost += sort_path.startup_cost;
+		input_total_cost += sort_path.total_cost;
+	}
+
+	cost_gather_merge(pathnode, root, rel, pathnode->path.param_info,
+					  input_startup_cost, input_total_cost, rows);
+
+	return pathnode;
+}
+
+/*
  * translate_sub_tlist - get subquery column numbers represented by tlist
  *
  * The given targetlist usually contains only Vars referencing the given relid.
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 5b23dbf..9d8b8b0 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -893,6 +893,15 @@ static struct config_bool ConfigureNamesBool[] =
 		true,
 		NULL, NULL, NULL
 	},
+	{
+		{"enable_gathermerge", PGC_USERSET, QUERY_TUNING_METHOD,
+			gettext_noop("Enables the planner's use of gather merge plans."),
+			NULL
+		},
+		&enable_gathermerge,
+		true,
+		NULL, NULL, NULL
+	},
 
 	{
 		{"geqo", PGC_USERSET, QUERY_TUNING_GEQO,
diff --git a/src/include/executor/nodeGatherMerge.h b/src/include/executor/nodeGatherMerge.h
new file mode 100644
index 0000000..3c8b42b
--- /dev/null
+++ b/src/include/executor/nodeGatherMerge.h
@@ -0,0 +1,27 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.h
+ *		prototypes for nodeGatherMerge.c
+ *
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/executor/nodeGatherMerge.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef NODEGATHERMERGE_H
+#define NODEGATHERMERGE_H
+
+#include "nodes/execnodes.h"
+
+extern GatherMergeState *ExecInitGatherMerge(GatherMerge * node,
+					EState *estate,
+					int eflags);
+extern TupleTableSlot *ExecGatherMerge(GatherMergeState * node);
+extern void ExecEndGatherMerge(GatherMergeState * node);
+extern void ExecReScanGatherMerge(GatherMergeState * node);
+extern void ExecShutdownGatherMerge(GatherMergeState * node);
+
+#endif   /* NODEGATHERMERGE_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index ce13bf7..7c2e0c2 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -1997,6 +1997,35 @@ typedef struct GatherState
 } GatherState;
 
 /* ----------------
+ * GatherMergeState information
+ *
+ *		Gather merge nodes launch 1 or more parallel workers, run a
+ *		subplan which produces sorted output in each worker, and then
+ *		merge the results into a single sorted stream.
+ * ----------------
+ */
+struct GMReaderTuple;
+
+typedef struct GatherMergeState
+{
+	PlanState	ps;				/* its first field is NodeTag */
+	bool		initialized;
+	struct ParallelExecutorInfo *pei;
+	int			nreaders;
+	int			nworkers_launched;
+	struct TupleQueueReader **reader;
+	TupleDesc	tupDesc;
+	TupleTableSlot **gm_slots;
+	struct binaryheap *gm_heap; /* binary heap of slot indices */
+	bool		gm_initialized; /* gather merge initialized? */
+	bool		need_to_scan_locally;
+	int			gm_nkeys;
+	SortSupport gm_sortkeys;	/* array of length gm_nkeys */
+	struct GMReaderTupleBuffer *gm_tuple_buffers;		/* tuple buffer per
+														 * reader */
+} GatherMergeState;
+
+/* ----------------
  *	 HashState information
  * ----------------
  */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index a1bb0ac..3df7603 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -75,6 +75,7 @@ typedef enum NodeTag
 	T_WindowAgg,
 	T_Unique,
 	T_Gather,
+	T_GatherMerge,
 	T_Hash,
 	T_SetOp,
 	T_LockRows,
@@ -123,6 +124,7 @@ typedef enum NodeTag
 	T_WindowAggState,
 	T_UniqueState,
 	T_GatherState,
+	T_GatherMergeState,
 	T_HashState,
 	T_SetOpState,
 	T_LockRowsState,
@@ -244,6 +246,7 @@ typedef enum NodeTag
 	T_MaterialPath,
 	T_UniquePath,
 	T_GatherPath,
+	T_GatherMergePath,
 	T_ProjectionPath,
 	T_SortPath,
 	T_GroupPath,
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index 692a626..0022a5b 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -765,6 +765,22 @@ typedef struct Gather
 	bool		invisible;		/* suppress EXPLAIN display (for testing)? */
 } Gather;
 
+/* ------------
+ *		gather merge node
+ * ------------
+ */
+typedef struct GatherMerge
+{
+	Plan		plan;
+	int			num_workers;
+	/* remaining fields are just like the sort-key info in struct Sort */
+	int			numCols;		/* number of sort-key columns */
+	AttrNumber *sortColIdx;		/* their indexes in the target list */
+	Oid		   *sortOperators;	/* OIDs of operators to sort them by */
+	Oid		   *collations;		/* OIDs of collations */
+	bool	   *nullsFirst;		/* NULLS FIRST/LAST directions */
+} GatherMerge;
+
 /* ----------------
  *		hash build node
  *
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index e1d31c7..ea0ed32 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -1194,6 +1194,19 @@ typedef struct GatherPath
 } GatherPath;
 
 /*
+ * GatherMergePath runs several copies of a plan in parallel and
+ * collects the results.  For gather merge, the parallel leader always
+ * executes the plan as well.
+ */
+typedef struct GatherMergePath
+{
+	Path		path;
+	Path	   *subpath;		/* path for each worker */
+	int			num_workers;	/* number of workers sought to help */
+} GatherMergePath;
+
+
+/*
  * All join-type paths share these fields.
  */
 
diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h
index 39376ec..7ceb4ca 100644
--- a/src/include/optimizer/cost.h
+++ b/src/include/optimizer/cost.h
@@ -66,6 +66,7 @@ extern bool enable_nestloop;
 extern bool enable_material;
 extern bool enable_mergejoin;
 extern bool enable_hashjoin;
+extern bool enable_gathermerge;
 extern int	constraint_exclusion;
 
 extern double clamp_row_est(double nrows);
@@ -198,5 +199,9 @@ extern Selectivity clause_selectivity(PlannerInfo *root,
 				   int varRelid,
 				   JoinType jointype,
 				   SpecialJoinInfo *sjinfo);
+extern void cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+							  RelOptInfo *rel, ParamPathInfo *param_info,
+							  Cost input_startup_cost, Cost input_total_cost,
+							  double *rows);
 
 #endif   /* COST_H */
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index d16f879..63d2de9 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -76,6 +76,13 @@ extern UniquePath *create_unique_path(PlannerInfo *root, RelOptInfo *rel,
 extern GatherPath *create_gather_path(PlannerInfo *root,
 				   RelOptInfo *rel, Path *subpath, PathTarget *target,
 				   Relids required_outer, double *rows);
+extern GatherMergePath *create_gather_merge_path(PlannerInfo *root,
+												 RelOptInfo *rel,
+												 Path *subpath,
+												 PathTarget *target,
+												 List *pathkeys,
+												 Relids required_outer,
+												 double *rows);
 extern SubqueryScanPath *create_subqueryscan_path(PlannerInfo *root,
 						 RelOptInfo *rel, Path *subpath,
 						 List *pathkeys, Relids required_outer);
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
index f06cfa4..5c547e2 100644
--- a/src/test/regress/expected/rangefuncs.out
+++ b/src/test/regress/expected/rangefuncs.out
@@ -2,6 +2,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
          name         | setting 
 ----------------------+---------
  enable_bitmapscan    | on
+ enable_gathermerge   | on
  enable_hashagg       | on
  enable_hashjoin      | on
  enable_indexonlyscan | on
@@ -12,7 +13,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
  enable_seqscan       | on
  enable_sort          | on
  enable_tidscan       | on
-(11 rows)
+(12 rows)
 
 CREATE TABLE foo2(fooid int, f2 int);
 INSERT INTO foo2 VALUES(1, 11);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 993880d..5633386 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -777,6 +777,9 @@ GV
 Gather
 GatherPath
 GatherState
+GatherMerge
+GatherMergePath
+GatherMergeState
 Gene
 GenericCosts
 GenericExprState
#24Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Robert Haas (#23)
Re: Gather Merge

On Thu, Jan 12, 2017 at 8:50 AM, Robert Haas <robertmhaas@gmail.com> wrote:

On Sun, Dec 4, 2016 at 7:36 PM, Haribabu Kommi <kommi.haribabu@gmail.com>
wrote:

On Thu, Nov 24, 2016 at 11:12 PM, Rushabh Lathia <

rushabh.lathia@gmail.com>

wrote:

PFA latest patch with fix as well as few cosmetic changes.

Moved to next CF with "needs review" status.

I spent quite a bit of time on this patch over the last couple of
days. I was hoping to commit it, but I think it's not quite ready for
that yet and I hit a few other issues along the way. Meanwhile,
here's an updated version with the following changes:

* Adjusted cost_gather_merge because we don't need to worry about less
than 1 worker.
* Don't charge double maintenance cost of the heap per 34ca0905. This
was pointed out previous and Rushabh said it was fixed, but it wasn't
fixed in v5.
* cost_gather_merge claimed to charge a slightly higher IPC cost
because we have to block, but didn't. Fix it so it does.
* Move several hunks to more appropriate places in the file, near
related code or in a more logical position relative to surrounding
code.
* Fixed copyright dates for the new files. One said 2015, one said 2016.
* Removed unnecessary code from create_gather_merge_plan that tried to
handle an empty list of pathkeys (shouldn't happen).
* Make create_gather_merge_plan more consistent with
create_merge_append_plan. Remove make_gather_merge for the same
reason.
* Changed generate_gather_paths to generate gather merge paths. In
the previous coding, only the upper planner nodes ever tried to
generate gather merge nodes, but that seems unnecessarily limiting,
since it could be useful to generate a gathered path with pathkeys at
any point in the tree where we'd generate a gathered path with no
pathkeys.
* Rewrote generate_ordered_paths() logic to consider only the one
potentially-useful path not now covered by the new code in
generate_gather_paths().
* Reverted changes in generate_distinct_paths(). I think we should
add something here but the existing logic definitely isn't right
considering the change to generate_gather_paths().
* Assorted cosmetic cleanup in nodeGatherMerge.c.
* Documented the new GUC enable_gathermerge.
* Improved comments. Dropped one that seemed unnecessary.
* Fixed parts of the patch to be more pgindent-clean.

Thanks Robert for hacking into this.

Testing this against the TPC-H queries at 10GB with
max_parallel_workers_per_gather = 4, seq_page_cost = 0.1,
random_page_cost = 0.1, work_mem = 64MB initially produced somewhat
demoralizing results. Only Q17, Q4, and Q8 picked Gather Merge, and
of those only Q17 got faster. Investigating this led to me realizing
that join costing for parallel joins is all messed up: see
/messages/by-id/CA+TgmoYt2pyk2CTyvYCtFySXN=
jsorGh8_MJTTLoWU5qkJOkYQ@mail.gmail.com

With that patch applied, in my testing, Gather Merge got picked for
Q3, Q4, Q5, Q6, Q7, Q8, Q10, and Q17, but a lot of those queries get a
little slower instead of a little faster. Here are the timings --
these are with EXPLAIN ANALYZE, so take them with a grain of salt --
first number is without Gather Merge, second is with Gather Merge:

Q3 16943.938 ms -> 18645.957 ms
Q4 3155.350 ms -> 4179.431 ms
Q5 13611.484 ms -> 13831.946 ms
Q6 9264.942 ms -> 8734.899 ms
Q7 9759.026 ms -> 10007.307 ms
Q8 2473.899 ms -> 2459.225 ms
Q10 13814.950 ms -> 12255.618 ms
Q17 49552.298 ms -> 46633.632 ms

This is strange, I will re-run the test again and post the results soon.

I haven't really had time to dig into these results yet, so I'm not
sure how "real" these numbers are and how much is run-to-run jitter,
EXPLAIN ANALYZE distortion, or whatever. I think this overall concept
is good, because there should be cases where it's substantially
cheaper to preserve the order while gathering tuples from workers than
to re-sort afterwards. But this particular set of results is a bit
lackluster.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Rushabh Lathia

#25Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Rushabh Lathia (#24)
Re: Gather Merge

On Fri, Jan 13, 2017 at 10:52 AM, Rushabh Lathia <rushabh.lathia@gmail.com>
wrote:

On Thu, Jan 12, 2017 at 8:50 AM, Robert Haas <robertmhaas@gmail.com>
wrote:

On Sun, Dec 4, 2016 at 7:36 PM, Haribabu Kommi <kommi.haribabu@gmail.com>
wrote:

On Thu, Nov 24, 2016 at 11:12 PM, Rushabh Lathia <

rushabh.lathia@gmail.com>

wrote:

PFA latest patch with fix as well as few cosmetic changes.

Moved to next CF with "needs review" status.

I spent quite a bit of time on this patch over the last couple of
days. I was hoping to commit it, but I think it's not quite ready for
that yet and I hit a few other issues along the way. Meanwhile,
here's an updated version with the following changes:

* Adjusted cost_gather_merge because we don't need to worry about less
than 1 worker.
* Don't charge double maintenance cost of the heap per 34ca0905. This
was pointed out previous and Rushabh said it was fixed, but it wasn't
fixed in v5.
* cost_gather_merge claimed to charge a slightly higher IPC cost
because we have to block, but didn't. Fix it so it does.
* Move several hunks to more appropriate places in the file, near
related code or in a more logical position relative to surrounding
code.
* Fixed copyright dates for the new files. One said 2015, one said 2016.
* Removed unnecessary code from create_gather_merge_plan that tried to
handle an empty list of pathkeys (shouldn't happen).
* Make create_gather_merge_plan more consistent with
create_merge_append_plan. Remove make_gather_merge for the same
reason.
* Changed generate_gather_paths to generate gather merge paths. In
the previous coding, only the upper planner nodes ever tried to
generate gather merge nodes, but that seems unnecessarily limiting,
since it could be useful to generate a gathered path with pathkeys at
any point in the tree where we'd generate a gathered path with no
pathkeys.
* Rewrote generate_ordered_paths() logic to consider only the one
potentially-useful path not now covered by the new code in
generate_gather_paths().
* Reverted changes in generate_distinct_paths(). I think we should
add something here but the existing logic definitely isn't right
considering the change to generate_gather_paths().
* Assorted cosmetic cleanup in nodeGatherMerge.c.
* Documented the new GUC enable_gathermerge.
* Improved comments. Dropped one that seemed unnecessary.
* Fixed parts of the patch to be more pgindent-clean.

Thanks Robert for hacking into this.

Testing this against the TPC-H queries at 10GB with
max_parallel_workers_per_gather = 4, seq_page_cost = 0.1,
random_page_cost = 0.1, work_mem = 64MB initially produced somewhat
demoralizing results. Only Q17, Q4, and Q8 picked Gather Merge, and
of those only Q17 got faster. Investigating this led to me realizing
that join costing for parallel joins is all messed up: see
/messages/by-id/CA+TgmoYt2pyk2CTyvYCtF
ySXN=jsorGh8_MJTTLoWU5qkJOkYQ@mail.gmail.com

With that patch applied, in my testing, Gather Merge got picked for
Q3, Q4, Q5, Q6, Q7, Q8, Q10, and Q17, but a lot of those queries get a
little slower instead of a little faster. Here are the timings --
these are with EXPLAIN ANALYZE, so take them with a grain of salt --
first number is without Gather Merge, second is with Gather Merge:

Q3 16943.938 ms -> 18645.957 ms
Q4 3155.350 ms -> 4179.431 ms
Q5 13611.484 ms -> 13831.946 ms
Q6 9264.942 ms -> 8734.899 ms
Q7 9759.026 ms -> 10007.307 ms
Q8 2473.899 ms -> 2459.225 ms
Q10 13814.950 ms -> 12255.618 ms
Q17 49552.298 ms -> 46633.632 ms

This is strange, I will re-run the test again and post the results soon.

Here is the benchmark number which I got with the latest (v6) patch:

- max_worker_processes = DEFAULT (8)
- max_parallel_workers_per_gather = 4
- Cold cache environment is ensured. With every query execution - server is
stopped and also OS caches were dropped.
- The reported values of execution time (in ms) is median of 3 executions.
- power2 machine with 512GB of RAM
- With default postgres.conf

Timing with v6 patch on REL9_6_STABLE branch
(last commit: 8a70d8ae7501141d283e56b31e10c66697c986d5).

Query 3: 49888.300 -> 45914.426
Query 4: 8531.939 -> 7790.498
Query 5: 40668.920 -> 38403.658
Query 9: 90922.825 -> 50277.646
Query 10: 45776.445 -> 39189.086
Query 12: 21644.593 -> 21180.402
Query 15: 63889.061 -> 62027.351
Query 17: 142208.463 -> 118704.424
Query 18: 244051.155 -> 186498.456
Query 20: 212046.605 -> 159360.520

Timing with v6 patch on master branch:
(last commit: 0777f7a2e8e0a51f0f60cfe164d538bb459bf9f2)

Query 3: 45261.722 -> 43499.739
Query 4: 7444.630 -> 6363.999
Query 5: 37146.458 -> 37081.952
Query 9: 88874.243 -> 50232.088
Query 10: 43583.133 -> 38118.195
Query 12: 19918.149 -> 20414.114
Query 15: 62554.860 -> 61039.048
Query 17: 131369.235 -> 111587.287
Query 18: 246162.686 -> 195434.292
Query 20: 201221.952 -> 169093.834

Looking at these results it seems like the patch is good to go ahead.
I did notice that with your TPC-H run, queries 9, 18 and 20 were unable to
pick a gather merge plan, and those are the queries that benefit the most
from gather merge. Another observation: if work_mem is set too high then
some queries end up picking Hash Aggregate even though gather merge
performs better (I tested that manually by forcing gather merge).
I am still looking into this issue.

Thanks,

--
Rushabh Lathia
www.EnterpriseDB.com

#26Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Rushabh Lathia (#25)
Re: Gather Merge

On Tue, Jan 17, 2017 at 5:19 PM, Rushabh Lathia <rushabh.lathia@gmail.com>
wrote:

On Fri, Jan 13, 2017 at 10:52 AM, Rushabh Lathia <rushabh.lathia@gmail.com

wrote:

On Thu, Jan 12, 2017 at 8:50 AM, Robert Haas <robertmhaas@gmail.com>
wrote:

On Sun, Dec 4, 2016 at 7:36 PM, Haribabu Kommi <kommi.haribabu@gmail.com>
wrote:

On Thu, Nov 24, 2016 at 11:12 PM, Rushabh Lathia <

rushabh.lathia@gmail.com>

wrote:

PFA latest patch with fix as well as few cosmetic changes.

Moved to next CF with "needs review" status.

I spent quite a bit of time on this patch over the last couple of
days. I was hoping to commit it, but I think it's not quite ready for
that yet and I hit a few other issues along the way. Meanwhile,
here's an updated version with the following changes:

* Adjusted cost_gather_merge because we don't need to worry about less
than 1 worker.
* Don't charge double maintenance cost of the heap per 34ca0905. This
was pointed out previous and Rushabh said it was fixed, but it wasn't
fixed in v5.
* cost_gather_merge claimed to charge a slightly higher IPC cost
because we have to block, but didn't. Fix it so it does.
* Move several hunks to more appropriate places in the file, near
related code or in a more logical position relative to surrounding
code.
* Fixed copyright dates for the new files. One said 2015, one said 2016.
* Removed unnecessary code from create_gather_merge_plan that tried to
handle an empty list of pathkeys (shouldn't happen).
* Make create_gather_merge_plan more consistent with
create_merge_append_plan. Remove make_gather_merge for the same
reason.
* Changed generate_gather_paths to generate gather merge paths. In
the previous coding, only the upper planner nodes ever tried to
generate gather merge nodes, but that seems unnecessarily limiting,
since it could be useful to generate a gathered path with pathkeys at
any point in the tree where we'd generate a gathered path with no
pathkeys.
* Rewrote generate_ordered_paths() logic to consider only the one
potentially-useful path not now covered by the new code in
generate_gather_paths().
* Reverted changes in generate_distinct_paths(). I think we should
add something here but the existing logic definitely isn't right
considering the change to generate_gather_paths().
* Assorted cosmetic cleanup in nodeGatherMerge.c.
* Documented the new GUC enable_gathermerge.
* Improved comments. Dropped one that seemed unnecessary.
* Fixed parts of the patch to be more pgindent-clean.

Thanks Robert for hacking into this.

Testing this against the TPC-H queries at 10GB with
max_parallel_workers_per_gather = 4, seq_page_cost = 0.1,
random_page_cost = 0.1, work_mem = 64MB initially produced somewhat
demoralizing results. Only Q17, Q4, and Q8 picked Gather Merge, and
of those only Q17 got faster. Investigating this led to me realizing
that join costing for parallel joins is all messed up: see
/messages/by-id/CA+TgmoYt2pyk2CTyvYCtF
ySXN=jsorGh8_MJTTLoWU5qkJOkYQ@mail.gmail.com

With that patch applied, in my testing, Gather Merge got picked for
Q3, Q4, Q5, Q6, Q7, Q8, Q10, and Q17, but a lot of those queries get a
little slower instead of a little faster. Here are the timings --
these are with EXPLAIN ANALYZE, so take them with a grain of salt --
first number is without Gather Merge, second is with Gather Merge:

Q3 16943.938 ms -> 18645.957 ms
Q4 3155.350 ms -> 4179.431 ms
Q5 13611.484 ms -> 13831.946 ms
Q6 9264.942 ms -> 8734.899 ms
Q7 9759.026 ms -> 10007.307 ms
Q8 2473.899 ms -> 2459.225 ms
Q10 13814.950 ms -> 12255.618 ms
Q17 49552.298 ms -> 46633.632 ms

This is strange, I will re-run the test again and post the results soon.

Here is the benchmark number which I got with the latest (v6) patch:

- max_worker_processes = DEFAULT (8)
- max_parallel_workers_per_gather = 4
- Cold cache environment is ensured. With every query execution - server is
stopped and also OS caches were dropped.
- The reported values of execution time (in ms) is median of 3 executions.
- power2 machine with 512GB of RAM
- With default postgres.conf

Timing with v6 patch on REL9_6_STABLE branch
(last commit: 8a70d8ae7501141d283e56b31e10c66697c986d5).

Query 3: 49888.300 -> 45914.426
Query 4: 8531.939 -> 7790.498
Query 5: 40668.920 -> 38403.658
Query 9: 90922.825 -> 50277.646
Query 10: 45776.445 -> 39189.086
Query 12: 21644.593 -> 21180.402
Query 15: 63889.061 -> 62027.351
Query 17: 142208.463 -> 118704.424
Query 18: 244051.155 -> 186498.456
Query 20: 212046.605 -> 159360.520

Timing with v6 patch on master branch:
(last commit: 0777f7a2e8e0a51f0f60cfe164d538bb459bf9f2)

Query 3: 45261.722 -> 43499.739
Query 4: 7444.630 -> 6363.999
Query 5: 37146.458 -> 37081.952
Query 9: 88874.243 -> 50232.088
Query 10: 43583.133 -> 38118.195
Query 12: 19918.149 -> 20414.114
Query 15: 62554.860 -> 61039.048
Query 17: 131369.235 -> 111587.287
Query 18: 246162.686 -> 195434.292
Query 20: 201221.952 -> 169093.834

Looking at these results it seems like the patch is good to go ahead.
I did notice that with your TPC-H run, queries 9, 18 and 20 were unable to
pick a gather merge plan, and those are the queries that benefit the most
from gather merge. Another observation: if work_mem is set too high then
some queries end up picking Hash Aggregate even though gather merge
performs better (I tested that manually by forcing gather merge).
I am still looking into this issue.

I am able to reproduce the issue with a smaller case, where gather merge
is not getting picked over hash aggregate.

Consider the following cases:

Testcase setup:

1) ./db/bin/pgbench postgres -i -F 100 -s 20

2) update pgbench_accounts set filler = 'foo' where aid%10 = 0;

Example:

postgres=# show shared_buffers ;
shared_buffers
----------------
1GB
(1 row)

postgres=# show work_mem ;
work_mem
----------
64MB
(1 row)

1) Case 1:

postgres=# explain analyze select aid, sum(abalance) from pgbench_accounts
where filler like '%foo%' group by aid;
QUERY
PLAN
------------------------------------------------------------
----------------------------------------------------------------------
HashAggregate (cost=62081.49..64108.32 rows=202683 width=12) (actual
time=1017.802..1079.324 rows=200000 loops=1)
Group Key: aid
-> Seq Scan on pgbench_accounts (cost=0.00..61068.07 rows=202683
width=8) (actual time=738.439..803.310 rows=200000 loops=1)
Filter: (filler ~~ '%foo%'::text)
Rows Removed by Filter: 1800000
Planning time: 0.189 ms
Execution time: 1094.933 ms
(7 rows)

2) Case 2:

postgres=# set enable_hashagg = off;
SET
postgres=# set enable_gathermerge = off;
SET
postgres=# explain analyze select aid, sum(abalance) from pgbench_accounts
where filler like '%foo%' group by aid;
QUERY
PLAN
------------------------------------------------------------
----------------------------------------------------------------------------
GroupAggregate (cost=78933.43..82480.38 rows=202683 width=12) (actual
time=980.983..1097.461 rows=200000 loops=1)
Group Key: aid
-> Sort (cost=78933.43..79440.14 rows=202683 width=8) (actual
time=980.975..1006.891 rows=200000 loops=1)
Sort Key: aid
Sort Method: quicksort Memory: 17082kB
-> Seq Scan on pgbench_accounts (cost=0.00..61068.07 rows=202683
width=8) (actual time=797.553..867.359 rows=200000 loops=1)
Filter: (filler ~~ '%foo%'::text)
Rows Removed by Filter: 1800000
Planning time: 0.152 ms
Execution time: 1111.742 ms
(10 rows)

3) Case 3:

postgres=# set enable_hashagg = off;
SET
postgres=# set enable_gathermerge = on;
SET
postgres=# explain analyze select aid, sum(abalance) from pgbench_accounts
where filler like '%foo%' group by aid;

QUERY PLAN

------------------------------------------------------------
------------------------------------------------------------
-----------------------------------
Finalize GroupAggregate (cost=47276.23..76684.51 rows=202683 width=12)
(actual time=287.383..542.064 rows=200000 loops=1)
Group Key: aid
-> Gather Merge (cost=47276.23..73644.26 rows=202684 width=0) (actual
time=287.375..441.698 rows=200000 loops=1)
Workers Planned: 4
Workers Launched: 4
-> Partial GroupAggregate (cost=46276.17..47162.91 rows=50671
width=12) (actual time=278.801..305.772 rows=40000 loops=5)
Group Key: aid
-> Sort (cost=46276.17..46402.85 rows=50671 width=8)
(actual time=278.792..285.111 rows=40000 loops=5)
Sort Key: aid
Sort Method: quicksort Memory: 9841kB
-> Parallel Seq Scan on pgbench_accounts
(cost=0.00..42316.52 rows=50671 width=8) (actual time=206.602..223.203
rows=40000 loops=5)
Filter: (filler ~~ '%foo%'::text)
Rows Removed by Filter: 360000
Planning time: 0.251 ms
Execution time: 553.569 ms
(15 rows)

Now, in the above case we can clearly see that GM performs way better, but
the planner still chooses HashAggregate because the cost of hash aggregate
is lower compared to GM.

Another observation is that HashAggregate (case 1) performs better compared
to GroupAggregate (case 2), but that still doesn't justify the cost
difference between the two.

-- Cost difference
postgres=# select (82480.38 - 64108.32)/64108.32;
?column?
------------------------
0.28657840355198825987
(1 row)

-- Execution time
postgres=# select (1111.742 - 1094.933) / 1094.933;
?column?
------------------------
0.01535162425463475847
(1 row)

It might be a problem with HashAggregate costing, or something else. I am
still looking into this problem.
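
(Illustrative sketch, not part of the original mail: one way to see the
work_mem sensitivity against the same pgbench setup as above. The exact
threshold is a guess; the point is only that once the planner estimates the
hash table will not fit in work_mem, it stops considering HashAggregate, and
the Gather Merge plan from case 3 would be expected to win on cost.)

-- same query as case 1, but with work_mem lowered so that the estimated
-- hash table no longer fits and HashAggregate is not considered
set enable_hashagg = on;
set enable_gathermerge = on;
set work_mem = '1MB';
explain analyze select aid, sum(abalance) from pgbench_accounts
where filler like '%foo%' group by aid;
reset work_mem;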

--
Rushabh Lathia
www.EnterpriseDB.com

#27Amit Kapila
amit.kapila16@gmail.com
In reply to: Rushabh Lathia (#25)
Re: Gather Merge

On Tue, Jan 17, 2017 at 5:19 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

Here is the benchmark number which I got with the latest (v6) patch:

- max_worker_processes = DEFAULT (8)
- max_parallel_workers_per_gather = 4
- Cold cache environment is ensured. With every query execution - server is
stopped and also OS caches were dropped.
- The reported values of execution time (in ms) is median of 3 executions.
- power2 machine with 512GB of RAM
- With default postgres.conf

You haven't mentioned scale factor used in these tests.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#28Peter Geoghegan
pg@heroku.com
In reply to: Rushabh Lathia (#26)
Re: Gather Merge

On Tue, Jan 17, 2017 at 4:26 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

Another observation is, HashAggregate (case 1) is performs better compare to
GroupAggregate (case 2), but still it doesn't justify the cost difference of
those two.

It may not be the only issue, or even the main issue, but I'm fairly
suspicious of the fact that cost_sort() doesn't distinguish between
the comparison cost of text and int4, for example.
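
(Illustrative sketch, not from the original mail; the table and data below
are made up. Both ORDER BY clauses get essentially the same cost_sort()
estimate, since the estimate depends only on row count and width, yet the
text sort is typically much slower at run time because each comparison is
far more expensive.)

-- same rows, same width: the planner prices both sorts about the same
create table sort_cost_demo as
  select g as int_col, md5(g::text) as text_col
  from generate_series(1, 1000000) g;
analyze sort_cost_demo;
explain analyze select * from sort_cost_demo order by int_col;   -- cheap comparisons
explain analyze select * from sort_cost_demo order by text_col;  -- expensive comparisons
drop table sort_cost_demo;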

--
Peter Geoghegan


#29Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Amit Kapila (#27)
Re: Gather Merge

On Tue, Jan 17, 2017 at 6:44 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Tue, Jan 17, 2017 at 5:19 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

Here is the benchmark number which I got with the latest (v6) patch:

- max_worker_processes = DEFAULT (8)
- max_parallel_workers_per_gather = 4
- Cold cache environment is ensured. With every query execution - server

is

stopped and also OS caches were dropped.
- The reported values of execution time (in ms) is median of 3

executions.

- power2 machine with 512GB of RAM
- With default postgres.conf

You haven't mentioned scale factor used in these tests.

Oops sorry. Those results are for scale factor 10.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

--
Rushabh Lathia

#30Kuntal Ghosh
kuntalghosh.2007@gmail.com
In reply to: Rushabh Lathia (#29)
Re: Gather Merge

On Wed, Jan 18, 2017 at 11:31 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

The patch needs a rebase after the commit 69f4b9c85f168ae006929eec4.

--
Thanks & Regards,
Kuntal Ghosh
EnterpriseDB: http://www.enterprisedb.com


#31Michael Paquier
michael.paquier@gmail.com
In reply to: Kuntal Ghosh (#30)
Re: Gather Merge

On Mon, Jan 23, 2017 at 6:51 PM, Kuntal Ghosh
<kuntalghosh.2007@gmail.com> wrote:

On Wed, Jan 18, 2017 at 11:31 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

The patch needs a rebase after the commit 69f4b9c85f168ae006929eec4.

Is an update going to be provided? I have moved this patch to next CF
with "waiting on author" as status.
--
Michael


#32Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Michael Paquier (#31)
1 attachment(s)
Re: Gather Merge

I am sorry for the delay, here is the latest re-based patch.

My colleague Neha Sharma reported a regression with the patch, where the
EXPLAIN output for the Sort node under Gather Merge was always showing the
cost as zero:

explain analyze select '' AS "xxx" from pgbench_accounts where filler like
'%foo%' order by aid;
QUERY
PLAN
------------------------------------------------------------------------------------------------------------------------------------------------
Gather Merge (cost=47169.81..70839.91 rows=197688 width=36) (actual
time=406.297..653.572 rows=200000 loops=1)
Workers Planned: 4
Workers Launched: 4
-> Sort (*cost=0.00..0.00 rows=0 width=0*) (actual
time=368.945..391.124 rows=40000 loops=5)
Sort Key: aid
Sort Method: quicksort Memory: 3423kB
-> Parallel Seq Scan on pgbench_accounts (cost=0.00..42316.60
rows=49422 width=36) (actual time=296.612..338.873 rows=40000 loops=5)
Filter: (filler ~~ '%foo%'::text)
Rows Removed by Filter: 360000
Planning time: 0.184 ms
Execution time: 734.963 ms

This patch also fixes that issue.

On Wed, Feb 1, 2017 at 11:27 AM, Michael Paquier <michael.paquier@gmail.com>
wrote:

On Mon, Jan 23, 2017 at 6:51 PM, Kuntal Ghosh
<kuntalghosh.2007@gmail.com> wrote:

On Wed, Jan 18, 2017 at 11:31 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

The patch needs a rebase after the commit 69f4b9c85f168ae006929eec4.

Is an update going to be provided? I have moved this patch to next CF
with "waiting on author" as status.
--
Michael

--
Rushabh Lathia

Attachments:

gather-merge-v7.patch (binary/octet-stream)
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index fb5d647..6959b51 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -3496,6 +3496,20 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-enable-gathermerge" xreflabel="enable_gathermerge">
+      <term><varname>enable_gathermerge</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>enable_gathermerge</> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        Enables or disables the query planner's use of gather
+        merge plan types. The default is <literal>on</>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-enable-hashagg" xreflabel="enable_hashagg">
       <term><varname>enable_hashagg</varname> (<type>boolean</type>)
       <indexterm>
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index f9fb276..570b26e 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -908,6 +908,9 @@ ExplainNode(PlanState *planstate, List *ancestors,
 		case T_Gather:
 			pname = sname = "Gather";
 			break;
+		case T_GatherMerge:
+			pname = sname = "Gather Merge";
+			break;
 		case T_IndexScan:
 			pname = sname = "Index Scan";
 			break;
@@ -1397,6 +1400,26 @@ ExplainNode(PlanState *planstate, List *ancestors,
 					ExplainPropertyBool("Single Copy", gather->single_copy, es);
 			}
 			break;
+		case T_GatherMerge:
+			{
+				GatherMerge *gm = (GatherMerge *) plan;
+
+				show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+				if (plan->qual)
+					show_instrumentation_count("Rows Removed by Filter", 1,
+											   planstate, es);
+				ExplainPropertyInteger("Workers Planned",
+									   gm->num_workers, es);
+				if (es->analyze)
+				{
+					int			nworkers;
+
+					nworkers = ((GatherMergeState *) planstate)->nworkers_launched;
+					ExplainPropertyInteger("Workers Launched",
+										   nworkers, es);
+				}
+			}
+			break;
 		case T_FunctionScan:
 			if (es->verbose)
 			{
diff --git a/src/backend/executor/Makefile b/src/backend/executor/Makefile
index 2a2b7eb..c95747e 100644
--- a/src/backend/executor/Makefile
+++ b/src/backend/executor/Makefile
@@ -20,7 +20,7 @@ OBJS = execAmi.o execCurrent.o execGrouping.o execIndexing.o execJunk.o \
        nodeBitmapHeapscan.o nodeBitmapIndexscan.o \
        nodeCustom.o nodeFunctionscan.o nodeGather.o \
        nodeHash.o nodeHashjoin.o nodeIndexscan.o nodeIndexonlyscan.o \
-       nodeLimit.o nodeLockRows.o \
+       nodeLimit.o nodeLockRows.o nodeGatherMerge.o \
        nodeMaterial.o nodeMergeAppend.o nodeMergejoin.o nodeModifyTable.o \
        nodeNestloop.o nodeProjectSet.o nodeRecursiveunion.o nodeResult.o \
        nodeSamplescan.o nodeSeqscan.o nodeSetOp.o nodeSort.o nodeUnique.o \
diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index 0dd95c6..f00496b 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -89,6 +89,7 @@
 #include "executor/nodeForeignscan.h"
 #include "executor/nodeFunctionscan.h"
 #include "executor/nodeGather.h"
+#include "executor/nodeGatherMerge.h"
 #include "executor/nodeGroup.h"
 #include "executor/nodeHash.h"
 #include "executor/nodeHashjoin.h"
@@ -320,6 +321,11 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
 												  estate, eflags);
 			break;
 
+		case T_GatherMerge:
+			result = (PlanState *) ExecInitGatherMerge((GatherMerge *) node,
+													   estate, eflags);
+			break;
+
 		case T_Hash:
 			result = (PlanState *) ExecInitHash((Hash *) node,
 												estate, eflags);
@@ -525,6 +531,10 @@ ExecProcNode(PlanState *node)
 			result = ExecGather((GatherState *) node);
 			break;
 
+		case T_GatherMergeState:
+			result = ExecGatherMerge((GatherMergeState *) node);
+			break;
+
 		case T_HashState:
 			result = ExecHash((HashState *) node);
 			break;
@@ -687,6 +697,10 @@ ExecEndNode(PlanState *node)
 			ExecEndGather((GatherState *) node);
 			break;
 
+		case T_GatherMergeState:
+			ExecEndGatherMerge((GatherMergeState *) node);
+			break;
+
 		case T_IndexScanState:
 			ExecEndIndexScan((IndexScanState *) node);
 			break;
@@ -820,6 +834,9 @@ ExecShutdownNode(PlanState *node)
 		case T_GatherState:
 			ExecShutdownGather((GatherState *) node);
 			break;
+		case T_GatherMergeState:
+			ExecShutdownGatherMerge((GatherMergeState *) node);
+			break;
 		default:
 			break;
 	}
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
new file mode 100644
index 0000000..84c1677
--- /dev/null
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -0,0 +1,687 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.c
+ *		Scan a plan in multiple workers, and do order-preserving merge.
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/backend/executor/nodeGatherMerge.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/relscan.h"
+#include "access/xact.h"
+#include "executor/execdebug.h"
+#include "executor/execParallel.h"
+#include "executor/nodeGatherMerge.h"
+#include "executor/nodeSubplan.h"
+#include "executor/tqueue.h"
+#include "lib/binaryheap.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+#include "utils/rel.h"
+
+/*
+ * Tuple array for each worker
+ */
+typedef struct GMReaderTupleBuffer
+{
+	HeapTuple  *tuple;
+	int			readCounter;
+	int			nTuples;
+	bool		done;
+}	GMReaderTupleBuffer;
+
+/*
+ * When we read tuples from workers, it's a good idea to read several at once
+ * for efficiency when possible: this minimizes context-switching overhead.
+ * But reading too many at a time wastes memory without improving performance.
+ */
+#define MAX_TUPLE_STORE 10
+
+static int32 heap_compare_slots(Datum a, Datum b, void *arg);
+static TupleTableSlot *gather_merge_getnext(GatherMergeState *gm_state);
+static HeapTuple gm_readnext_tuple(GatherMergeState *gm_state, int nreader,
+				  bool nowait, bool *done);
+static void gather_merge_init(GatherMergeState *gm_state);
+static void ExecShutdownGatherMergeWorkers(GatherMergeState *node);
+static bool gather_merge_readnext(GatherMergeState *gm_state, int reader,
+					  bool nowait);
+static void form_tuple_array(GatherMergeState *gm_state, int reader);
+
+/* ----------------------------------------------------------------
+ *		ExecInitGatherMerge
+ * ----------------------------------------------------------------
+ */
+GatherMergeState *
+ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags)
+{
+	GatherMergeState *gm_state;
+	Plan	   *outerNode;
+	bool		hasoid;
+	TupleDesc	tupDesc;
+
+	/* Gather merge node doesn't have innerPlan node. */
+	Assert(innerPlan(node) == NULL);
+
+	/*
+	 * create state structure
+	 */
+	gm_state = makeNode(GatherMergeState);
+	gm_state->ps.plan = (Plan *) node;
+	gm_state->ps.state = estate;
+
+	/*
+	 * Miscellaneous initialization
+	 *
+	 * create expression context for node
+	 */
+	ExecAssignExprContext(estate, &gm_state->ps);
+
+	/*
+	 * initialize child expressions
+	 */
+	gm_state->ps.targetlist = (List *)
+		ExecInitExpr((Expr *) node->plan.targetlist,
+					 (PlanState *) gm_state);
+	gm_state->ps.qual = (List *)
+		ExecInitExpr((Expr *) node->plan.qual,
+					 (PlanState *) gm_state);
+
+	/*
+	 * tuple table initialization
+	 */
+	ExecInitResultTupleSlot(estate, &gm_state->ps);
+
+	/*
+	 * now initialize outer plan
+	 */
+	outerNode = outerPlan(node);
+	outerPlanState(gm_state) = ExecInitNode(outerNode, estate, eflags);
+
+	/*
+	 * Initialize result tuple type and projection info.
+	 */
+	ExecAssignResultTypeFromTL(&gm_state->ps);
+	ExecAssignProjectionInfo(&gm_state->ps, NULL);
+
+	gm_state->gm_initialized = false;
+
+	/*
+	 * initialize sort-key information
+	 */
+	if (node->numCols)
+	{
+		int			i;
+
+		gm_state->gm_nkeys = node->numCols;
+		gm_state->gm_sortkeys =
+			palloc0(sizeof(SortSupportData) * node->numCols);
+
+		for (i = 0; i < node->numCols; i++)
+		{
+			SortSupport sortKey = gm_state->gm_sortkeys + i;
+
+			sortKey->ssup_cxt = CurrentMemoryContext;
+			sortKey->ssup_collation = node->collations[i];
+			sortKey->ssup_nulls_first = node->nullsFirst[i];
+			sortKey->ssup_attno = node->sortColIdx[i];
+
+			/*
+			 * We don't perform abbreviated key conversion here, for the same
+			 * reasons that it isn't used in MergeAppend
+			 */
+			sortKey->abbreviate = false;
+
+			PrepareSortSupportFromOrderingOp(node->sortOperators[i], sortKey);
+		}
+	}
+
+	/*
+	 * store the tuple descriptor into gather merge state, so we can use it
+	 * later while initializing the gather merge slots.
+	 */
+	if (!ExecContextForcesOids(&gm_state->ps, &hasoid))
+		hasoid = false;
+	tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid);
+	gm_state->tupDesc = tupDesc;
+
+	return gm_state;
+}
+
+/* ----------------------------------------------------------------
+ *		ExecGatherMerge(node)
+ *
+ *		Scans the relation via multiple workers and returns
+ *		the next qualifying tuple.
+ * ----------------------------------------------------------------
+ */
+TupleTableSlot *
+ExecGatherMerge(GatherMergeState *node)
+{
+	TupleTableSlot *slot;
+	ExprContext *econtext;
+	int			i;
+
+	/*
+	 * As with Gather, we don't launch workers until this node is actually
+	 * executed.
+	 */
+	if (!node->initialized)
+	{
+		EState	   *estate = node->ps.state;
+		GatherMerge *gm = (GatherMerge *) node->ps.plan;
+
+		/*
+		 * Sometimes we might have to run without parallelism; but if parallel
+		 * mode is active then we can try to fire up some workers.
+		 */
+		if (gm->num_workers > 0 && IsInParallelMode())
+		{
+			ParallelContext *pcxt;
+
+			/* Initialize data structures for workers. */
+			if (!node->pei)
+				node->pei = ExecInitParallelPlan(node->ps.lefttree,
+												 estate,
+												 gm->num_workers);
+
+			/* Try to launch workers. */
+			pcxt = node->pei->pcxt;
+			LaunchParallelWorkers(pcxt);
+			node->nworkers_launched = pcxt->nworkers_launched;
+
+			/* Set up tuple queue readers to read the results. */
+			if (pcxt->nworkers_launched > 0)
+			{
+				node->nreaders = 0;
+				node->reader = palloc(pcxt->nworkers_launched *
+									  sizeof(TupleQueueReader *));
+
+				Assert(gm->numCols);
+
+				for (i = 0; i < pcxt->nworkers_launched; ++i)
+				{
+					shm_mq_set_handle(node->pei->tqueue[i],
+									  pcxt->worker[i].bgwhandle);
+					node->reader[node->nreaders++] =
+						CreateTupleQueueReader(node->pei->tqueue[i],
+											   node->tupDesc);
+				}
+			}
+			else
+			{
+				/* No workers?	Then never mind. */
+				ExecShutdownGatherMergeWorkers(node);
+			}
+		}
+
+		/* always allow leader to participate */
+		node->need_to_scan_locally = true;
+		node->initialized = true;
+	}
+
+	/*
+	 * Reset per-tuple memory context to free any expression evaluation
+	 * storage allocated in the previous tuple cycle.
+	 */
+	econtext = node->ps.ps_ExprContext;
+	ResetExprContext(econtext);
+
+	/*
+	 * Get next tuple, either from one of our workers, or by running the
+	 * plan ourselves.
+	 */
+	slot = gather_merge_getnext(node);
+	if (TupIsNull(slot))
+		return NULL;
+
+	/*
+	 * form the result tuple using ExecProject(), and return it --- unless
+	 * the projection produces an empty set, in which case we must loop
+	 * back around for another tuple
+	 */
+	econtext->ecxt_outertuple = slot;
+	return ExecProject(node->ps.ps_ProjInfo);
+}
+
+/* ----------------------------------------------------------------
+ *		ExecEndGatherMerge
+ *
+ *		frees any storage allocated through C routines.
+ * ----------------------------------------------------------------
+ */
+void
+ExecEndGatherMerge(GatherMergeState *node)
+{
+	ExecShutdownGatherMerge(node);
+	ExecFreeExprContext(&node->ps);
+	ExecClearTuple(node->ps.ps_ResultTupleSlot);
+	ExecEndNode(outerPlanState(node));
+}
+
+/* ----------------------------------------------------------------
+ *		ExecShutdownGatherMerge
+ *
+ *		Destroy the setup for parallel workers including parallel context.
+ *		Collect all the stats after workers are stopped, else some work
+ *		done by workers won't be accounted.
+ * ----------------------------------------------------------------
+ */
+void
+ExecShutdownGatherMerge(GatherMergeState *node)
+{
+	ExecShutdownGatherMergeWorkers(node);
+
+	/* Now destroy the parallel context. */
+	if (node->pei != NULL)
+	{
+		ExecParallelCleanup(node->pei);
+		node->pei = NULL;
+	}
+}
+
+/* ----------------------------------------------------------------
+ *		ExecShutdownGatherMergeWorkers
+ *
+ *		Destroy the parallel workers.  Collect all the stats after
+ *		workers are stopped, else some work done by workers won't be
+ *		accounted.
+ * ----------------------------------------------------------------
+ */
+static void
+ExecShutdownGatherMergeWorkers(GatherMergeState *node)
+{
+	/* Shut down tuple queue readers before shutting down workers. */
+	if (node->reader != NULL)
+	{
+		int			i;
+
+		for (i = 0; i < node->nreaders; ++i)
+			if (node->reader[i])
+				DestroyTupleQueueReader(node->reader[i]);
+
+		pfree(node->reader);
+		node->reader = NULL;
+	}
+
+	/* Now shut down the workers. */
+	if (node->pei != NULL)
+		ExecParallelFinish(node->pei);
+}
+
+/* ----------------------------------------------------------------
+ *		ExecReScanGatherMerge
+ *
+ *		Re-initialize the workers and rescans a relation via them.
+ * ----------------------------------------------------------------
+ */
+void
+ExecReScanGatherMerge(GatherMergeState *node)
+{
+	/*
+	 * Re-initialize the parallel workers to perform rescan of relation. We
+	 * want to gracefully shutdown all the workers so that they should be able
+	 * to propagate any error or other information to master backend before
+	 * dying.  Parallel context will be reused for rescan.
+	 */
+	ExecShutdownGatherMergeWorkers(node);
+
+	node->initialized = false;
+
+	if (node->pei)
+		ExecParallelReinitialize(node->pei);
+
+	ExecReScan(node->ps.lefttree);
+}
+
+/*
+ * Initialize the Gather merge tuple read.
+ *
+ * Pull at least a single tuple from each worker + leader and set up the heap.
+ */
+static void
+gather_merge_init(GatherMergeState *gm_state)
+{
+	int			nreaders = gm_state->nreaders;
+	bool		initialize = true;
+	int			i;
+
+	/*
+	 * Allocate gm_slots for the number of workers plus one more slot for
+	 * the leader.  The last slot is always for the leader.  The leader
+	 * always calls ExecProcNode() to read a tuple, which returns a
+	 * TupleTableSlot that gets assigned directly to the corresponding
+	 * gm_slot, so just initialize the leader's gm_slot with NULL.  For the
+	 * other slots, the code below calls ExecInitExtraTupleSlot(), which
+	 * initializes the worker slots.
+	 */
+	gm_state->gm_slots =
+		palloc((gm_state->nreaders + 1) * sizeof(TupleTableSlot *));
+	gm_state->gm_slots[gm_state->nreaders] = NULL;
+
+	/* Initialize the tuple slot and tuple array for each worker */
+	gm_state->gm_tuple_buffers =
+		(GMReaderTupleBuffer *) palloc0(sizeof(GMReaderTupleBuffer) *
+										(gm_state->nreaders + 1));
+	for (i = 0; i < gm_state->nreaders; i++)
+	{
+		/* Allocate the tuple array with MAX_TUPLE_STORE size */
+		gm_state->gm_tuple_buffers[i].tuple =
+			(HeapTuple *) palloc0(sizeof(HeapTuple) * MAX_TUPLE_STORE);
+
+		/* Initialize slot for worker */
+		gm_state->gm_slots[i] = ExecInitExtraTupleSlot(gm_state->ps.state);
+		ExecSetSlotDescriptor(gm_state->gm_slots[i],
+							  gm_state->tupDesc);
+	}
+
+	/* Allocate the resources for the merge */
+	gm_state->gm_heap = binaryheap_allocate(gm_state->nreaders + 1,
+											heap_compare_slots,
+											gm_state);
+
+	/*
+	 * First, try to read a tuple from each worker (including leader) in
+	 * nowait mode, so that we initialize read from each worker as well as
+	 * leader. After this, if all active workers are unable to produce a
+	 * tuple, then re-read and this time use wait mode. For workers that were
+	 * able to produce a tuple in the earlier loop and are still active, just
+	 * try to fill the tuple array if more tuples are available.
+	 */
+reread:
+	for (i = 0; i < nreaders + 1; i++)
+	{
+		if (!gm_state->gm_tuple_buffers[i].done &&
+			(TupIsNull(gm_state->gm_slots[i]) ||
+			 gm_state->gm_slots[i]->tts_isempty))
+		{
+			if (gather_merge_readnext(gm_state, i, initialize))
+			{
+				binaryheap_add_unordered(gm_state->gm_heap,
+										 Int32GetDatum(i));
+			}
+		}
+		else
+			form_tuple_array(gm_state, i);
+	}
+	initialize = false;
+
+	for (i = 0; i < nreaders; i++)
+		if (!gm_state->gm_tuple_buffers[i].done &&
+			(TupIsNull(gm_state->gm_slots[i]) ||
+			 gm_state->gm_slots[i]->tts_isempty))
+			goto reread;
+
+	binaryheap_build(gm_state->gm_heap);
+	gm_state->gm_initialized = true;
+}
+
+/*
+ * Clear out a slot in the tuple table for each gather merge
+ * slot and return the cleared slot.
+ */
+static TupleTableSlot *
+gather_merge_clear_slots(GatherMergeState *gm_state)
+{
+	int			i;
+
+	for (i = 0; i < gm_state->nreaders; i++)
+	{
+		pfree(gm_state->gm_tuple_buffers[i].tuple);
+		gm_state->gm_slots[i] = ExecClearTuple(gm_state->gm_slots[i]);
+	}
+
+	/* Free tuple array as we don't need it any more */
+	pfree(gm_state->gm_tuple_buffers);
+	/* Free the binaryheap, which was created for sort */
+	binaryheap_free(gm_state->gm_heap);
+
+	/* return any clear slot */
+	return gm_state->gm_slots[0];
+}
+
+/*
+ * Read the next tuple for gather merge.
+ *
+ * Fetch the sorted tuple out of the heap.
+ */
+static TupleTableSlot *
+gather_merge_getnext(GatherMergeState *gm_state)
+{
+	int			i;
+
+	/*
+	 * First time through: pull the first tuple from each participant, and set
+	 * up the heap.
+	 */
+	if (gm_state->gm_initialized == false)
+		gather_merge_init(gm_state);
+	else
+	{
+		/*
+		 * Otherwise, pull the next tuple from whichever participant we
+		 * returned from last time, and reinsert the index into the heap,
+		 * because it might now compare differently against the existing
+		 * elements of the heap.
+		 */
+		i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+
+		if (gather_merge_readnext(gm_state, i, false))
+			binaryheap_replace_first(gm_state->gm_heap, Int32GetDatum(i));
+		else
+			(void) binaryheap_remove_first(gm_state->gm_heap);
+	}
+
+	if (binaryheap_empty(gm_state->gm_heap))
+	{
+		/* All the queues are exhausted, and so is the heap */
+		return gather_merge_clear_slots(gm_state);
+	}
+	else
+	{
+		i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+		return gm_state->gm_slots[i];
+	}
+
+	return gather_merge_clear_slots(gm_state);
+}
+
+/*
+ * Read the tuple for given reader in nowait mode, and form the tuple array.
+ */
+static void
+form_tuple_array(GatherMergeState *gm_state, int reader)
+{
+	GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+	int			i;
+
+	/* Last slot is for leader and we don't build tuple array for leader */
+	if (reader == gm_state->nreaders)
+		return;
+
+	/*
+	 * We're here because we've already read all the tuples from the tuple
+	 * array, so reset the counter to zero.
+	 */
+	if (tuple_buffer->nTuples == tuple_buffer->readCounter)
+		tuple_buffer->nTuples = tuple_buffer->readCounter = 0;
+
+	/* Tuple array is already full? */
+	if (tuple_buffer->nTuples == MAX_TUPLE_STORE)
+		return;
+
+	for (i = tuple_buffer->nTuples; i < MAX_TUPLE_STORE; i++)
+	{
+		tuple_buffer->tuple[i] = heap_copytuple(gm_readnext_tuple(gm_state,
+																  reader,
+																  false,
+													   &tuple_buffer->done));
+		if (!HeapTupleIsValid(tuple_buffer->tuple[i]))
+			break;
+		tuple_buffer->nTuples++;
+	}
+}
+
+/*
+ * Store the next tuple for a given reader into the appropriate slot.
+ *
+ * Returns false if the reader is exhausted, and true otherwise.
+ */
+static bool
+gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait)
+{
+	GMReaderTupleBuffer *tuple_buffer;
+	HeapTuple	tup = NULL;
+
+	/*
+	 * If we're being asked to generate a tuple from the leader, then we
+	 * just call ExecProcNode as normal to produce one.
+	 */
+	if (gm_state->nreaders == reader)
+	{
+		if (gm_state->need_to_scan_locally)
+		{
+			PlanState  *outerPlan = outerPlanState(gm_state);
+			TupleTableSlot *outerTupleSlot;
+
+			outerTupleSlot = ExecProcNode(outerPlan);
+
+			if (!TupIsNull(outerTupleSlot))
+			{
+				gm_state->gm_slots[reader] = outerTupleSlot;
+				return true;
+			}
+			gm_state->gm_tuple_buffers[reader].done = true;
+			gm_state->need_to_scan_locally = false;
+		}
+		return false;
+	}
+
+	/* Otherwise, check the state of the relevant tuple buffer. */
+	tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+
+	if (tuple_buffer->nTuples > tuple_buffer->readCounter)
+	{
+		/* Return any tuple previously read that is still buffered. */
+		tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+		tup = tuple_buffer->tuple[tuple_buffer->readCounter++];
+	}
+	else if (tuple_buffer->done)
+	{
+		/* Reader is known to be exhausted. */
+		DestroyTupleQueueReader(gm_state->reader[reader]);
+		gm_state->reader[reader] = NULL;
+		return false;
+	}
+	else
+	{
+		/* Read and buffer next tuple. */
+		tup = heap_copytuple(gm_readnext_tuple(gm_state,
+											   reader,
+											   nowait,
+											   &tuple_buffer->done));
+
+		/*
+		 * Attempt to read more tuples in nowait mode and store them in
+		 * the tuple array.
+		 */
+		if (HeapTupleIsValid(tup))
+			form_tuple_array(gm_state, reader);
+		else
+			return false;
+	}
+
+	Assert(HeapTupleIsValid(tup));
+
+	/* Build the TupleTableSlot for the given tuple */
+	ExecStoreTuple(tup,			/* tuple to store */
+				   gm_state->gm_slots[reader],	/* slot in which to store the
+												 * tuple */
+				   InvalidBuffer,		/* buffer associated with this tuple */
+				   true);		/* pfree this pointer if not from heap */
+
+	return true;
+}
+
+/*
+ * Attempt to read a tuple from given reader.
+ */
+static HeapTuple
+gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait,
+				  bool *done)
+{
+	TupleQueueReader *reader;
+	HeapTuple	tup = NULL;
+	MemoryContext oldContext;
+	MemoryContext tupleContext;
+
+	tupleContext = gm_state->ps.ps_ExprContext->ecxt_per_tuple_memory;
+
+	if (done != NULL)
+		*done = false;
+
+	/* Check for async events, particularly messages from workers. */
+	CHECK_FOR_INTERRUPTS();
+
+	/* Attempt to read a tuple. */
+	reader = gm_state->reader[nreader];
+
+	/* Run TupleQueueReaders in per-tuple context */
+	oldContext = MemoryContextSwitchTo(tupleContext);
+	tup = TupleQueueReaderNext(reader, nowait, done);
+	MemoryContextSwitchTo(oldContext);
+
+	return tup;
+}
+
+/*
+ * We have one slot for each item in the heap array.  We use SlotNumber
+ * to store slot indexes.  This doesn't actually provide any formal
+ * type-safety, but it makes the code more self-documenting.
+ */
+typedef int32 SlotNumber;
+
+/*
+ * Compare the tuples in the two given slots.
+ */
+static int32
+heap_compare_slots(Datum a, Datum b, void *arg)
+{
+	GatherMergeState *node = (GatherMergeState *) arg;
+	SlotNumber	slot1 = DatumGetInt32(a);
+	SlotNumber	slot2 = DatumGetInt32(b);
+
+	TupleTableSlot *s1 = node->gm_slots[slot1];
+	TupleTableSlot *s2 = node->gm_slots[slot2];
+	int			nkey;
+
+	Assert(!TupIsNull(s1));
+	Assert(!TupIsNull(s2));
+
+	for (nkey = 0; nkey < node->gm_nkeys; nkey++)
+	{
+		SortSupport sortKey = node->gm_sortkeys + nkey;
+		AttrNumber	attno = sortKey->ssup_attno;
+		Datum		datum1,
+					datum2;
+		bool		isNull1,
+					isNull2;
+		int			compare;
+
+		datum1 = slot_getattr(s1, attno, &isNull1);
+		datum2 = slot_getattr(s2, attno, &isNull2);
+
+		compare = ApplySortComparator(datum1, isNull1,
+									  datum2, isNull2,
+									  sortKey);
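+		/*
+		 * Invert the comparison result before returning it: binaryheap.c
+		 * keeps the largest element according to this comparator at the top,
+		 * so negating the sign makes the heap yield the smallest tuple in the
+		 * requested sort order first (the same trick MergeAppend uses).
+		 */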
+		if (compare != 0)
+			return -compare;
+	}
+	return 0;
+}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 30d733e..943f495 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -359,6 +359,31 @@ _copyGather(const Gather *from)
 	return newnode;
 }
 
+/*
+ * _copyGatherMerge
+ */
+static GatherMerge *
+_copyGatherMerge(const GatherMerge *from)
+{
+	GatherMerge	   *newnode = makeNode(GatherMerge);
+
+	/*
+	 * copy node superclass fields
+	 */
+	CopyPlanFields((const Plan *) from, (Plan *) newnode);
+
+	/*
+	 * copy remainder of node
+	 */
+	COPY_SCALAR_FIELD(num_workers);
+	COPY_SCALAR_FIELD(numCols);
+	COPY_POINTER_FIELD(sortColIdx, from->numCols * sizeof(AttrNumber));
+	COPY_POINTER_FIELD(sortOperators, from->numCols * sizeof(Oid));
+	COPY_POINTER_FIELD(collations, from->numCols * sizeof(Oid));
+	COPY_POINTER_FIELD(nullsFirst, from->numCols * sizeof(bool));
+
+	return newnode;
+}
 
 /*
  * CopyScanFields
@@ -4521,6 +4546,9 @@ copyObject(const void *from)
 		case T_Gather:
 			retval = _copyGather(from);
 			break;
+		case T_GatherMerge:
+			retval = _copyGatherMerge(from);
+			break;
 		case T_SeqScan:
 			retval = _copySeqScan(from);
 			break;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 1560ac3..865ab5f 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -457,6 +457,35 @@ _outGather(StringInfo str, const Gather *node)
 }
 
 static void
+_outGatherMerge(StringInfo str, const GatherMerge *node)
+{
+	int		i;
+
+	WRITE_NODE_TYPE("GATHERMERGE");
+
+	_outPlanInfo(str, (const Plan *) node);
+
+	WRITE_INT_FIELD(num_workers);
+	WRITE_INT_FIELD(numCols);
+
+	appendStringInfoString(str, " :sortColIdx");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %d", node->sortColIdx[i]);
+
+	appendStringInfoString(str, " :sortOperators");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %u", node->sortOperators[i]);
+
+	appendStringInfoString(str, " :collations");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %u", node->collations[i]);
+
+	appendStringInfoString(str, " :nullsFirst");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %s", booltostr(node->nullsFirst[i]));
+}
+
+static void
 _outScan(StringInfo str, const Scan *node)
 {
 	WRITE_NODE_TYPE("SCAN");
@@ -1984,6 +2013,17 @@ _outLimitPath(StringInfo str, const LimitPath *node)
 }
 
 static void
+_outGatherMergePath(StringInfo str, const GatherMergePath *node)
+{
+	WRITE_NODE_TYPE("GATHERMERGEPATH");
+
+	_outPathInfo(str, (const Path *) node);
+
+	WRITE_NODE_FIELD(subpath);
+	WRITE_INT_FIELD(num_workers);
+}
+
+static void
 _outNestPath(StringInfo str, const NestPath *node)
 {
 	WRITE_NODE_TYPE("NESTPATH");
@@ -3409,6 +3449,9 @@ outNode(StringInfo str, const void *obj)
 			case T_Gather:
 				_outGather(str, obj);
 				break;
+			case T_GatherMerge:
+				_outGatherMerge(str, obj);
+				break;
 			case T_Scan:
 				_outScan(str, obj);
 				break;
@@ -3739,6 +3782,9 @@ outNode(StringInfo str, const void *obj)
 			case T_LimitPath:
 				_outLimitPath(str, obj);
 				break;
+			case T_GatherMergePath:
+				_outGatherMergePath(str, obj);
+				break;
 			case T_NestPath:
 				_outNestPath(str, obj);
 				break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index dcfa6ee..8dabde6 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -2095,6 +2095,26 @@ _readGather(void)
 }
 
 /*
+ * _readGatherMerge
+ */
+static GatherMerge *
+_readGatherMerge(void)
+{
+	READ_LOCALS(GatherMerge);
+
+	ReadCommonPlan(&local_node->plan);
+
+	READ_INT_FIELD(num_workers);
+	READ_INT_FIELD(numCols);
+	READ_ATTRNUMBER_ARRAY(sortColIdx, local_node->numCols);
+	READ_OID_ARRAY(sortOperators, local_node->numCols);
+	READ_OID_ARRAY(collations, local_node->numCols);
+	READ_BOOL_ARRAY(nullsFirst, local_node->numCols);
+
+	READ_DONE();
+}
+
+/*
  * _readHash
  */
 static Hash *
@@ -2529,6 +2549,8 @@ parseNodeString(void)
 		return_value = _readUnique();
 	else if (MATCH("GATHER", 6))
 		return_value = _readGather();
+	else if (MATCH("GATHERMERGE", 11))
+		return_value = _readGatherMerge();
 	else if (MATCH("HASH", 4))
 		return_value = _readHash();
 	else if (MATCH("SETOP", 5))
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index 5c18987..824f09e 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -2047,39 +2047,51 @@ set_worktable_pathlist(PlannerInfo *root, RelOptInfo *rel, RangeTblEntry *rte)
 
 /*
  * generate_gather_paths
- *		Generate parallel access paths for a relation by pushing a Gather on
- *		top of a partial path.
+ *		Generate parallel access paths for a relation by pushing a Gather or
+ *		Gather Merge on top of a partial path.
  *
  * This must not be called until after we're done creating all partial paths
  * for the specified relation.  (Otherwise, add_partial_path might delete a
- * path that some GatherPath has a reference to.)
+ * path that some GatherPath or GatherMergePath has a reference to.)
  */
 void
 generate_gather_paths(PlannerInfo *root, RelOptInfo *rel)
 {
 	Path	   *cheapest_partial_path;
 	Path	   *simple_gather_path;
+	ListCell   *lc;
 
 	/* If there are no partial paths, there's nothing to do here. */
 	if (rel->partial_pathlist == NIL)
 		return;
 
 	/*
-	 * The output of Gather is currently always unsorted, so there's only one
-	 * partial path of interest: the cheapest one.  That will be the one at
-	 * the front of partial_pathlist because of the way add_partial_path
-	 * works.
-	 *
-	 * Eventually, we should have a Gather Merge operation that can merge
-	 * multiple tuple streams together while preserving their ordering.  We
-	 * could usefully generate such a path from each partial path that has
-	 * non-NIL pathkeys.
+	 * The output of Gather is always unsorted, so there's only one partial
+	 * path of interest: the cheapest one.  That will be the one at the front
+	 * of partial_pathlist because of the way add_partial_path works.
 	 */
 	cheapest_partial_path = linitial(rel->partial_pathlist);
 	simple_gather_path = (Path *)
 		create_gather_path(root, rel, cheapest_partial_path, rel->reltarget,
 						   NULL, NULL);
 	add_path(rel, simple_gather_path);
+
+	/*
+	 * For each useful ordering, we can consider an order-preserving Gather
+	 * Merge.
+	 */
+	foreach (lc, rel->partial_pathlist)
+	{
+		Path   *subpath = (Path *) lfirst(lc);
+		GatherMergePath   *path;
+
+		if (subpath->pathkeys == NIL)
+			continue;
+
+		path = create_gather_merge_path(root, rel, subpath, rel->reltarget,
+										subpath->pathkeys, NULL, NULL);
+		add_path(rel, &path->path);
+	}
 }
 
 /*
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index 458f139..8331fb3 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -126,6 +126,7 @@ bool		enable_nestloop = true;
 bool		enable_material = true;
 bool		enable_mergejoin = true;
 bool		enable_hashjoin = true;
+bool		enable_gathermerge = true;
 
 typedef struct
 {
@@ -373,6 +374,73 @@ cost_gather(GatherPath *path, PlannerInfo *root,
 }
 
 /*
+ * cost_gather_merge
+ *	  Determines and returns the cost of a gather merge path.
+ *
+ * GatherMerge merges several pre-sorted input streams, using a heap that at
+ * any given instant holds the next tuple from each stream. If there are N
+ * streams, we need about N*log2(N) tuple comparisons to construct the heap at
+ * startup, and then for each output tuple, about log2(N) comparisons to
+ * replace the top heap entry with the next tuple from the same stream.
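+ *
+ * As a rough illustration (numbers chosen here, not taken from the patch):
+ * with 4 workers plus the leader, N = 5 and log2(5) is about 2.32, so heap
+ * construction costs roughly 2.0 * cpu_operator_cost * 5 * 2.32, and each
+ * output tuple adds roughly 2.0 * cpu_operator_cost * 2.32 for replacing the
+ * top heap entry, plus the cpu_operator_cost and parallel_tuple_cost charges
+ * added below.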
+ */
+void
+cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+				  RelOptInfo *rel, ParamPathInfo *param_info,
+				  Cost input_startup_cost, Cost input_total_cost,
+				  double *rows)
+{
+	Cost		startup_cost = 0;
+	Cost		run_cost = 0;
+	Cost		comparison_cost;
+	double		N;
+	double		logN;
+
+	/* Mark the path with the correct row estimate */
+	if (rows)
+		path->path.rows = *rows;
+	else if (param_info)
+		path->path.rows = param_info->ppi_rows;
+	else
+		path->path.rows = rel->rows;
+
+	if (!enable_gathermerge)
+		startup_cost += disable_cost;
+
+	/*
+	 * Add one to the number of workers to account for the leader.  This might
+	 * be overgenerous since the leader will do less work than other workers
+	 * in typical cases, but we'll go with it for now.
+	 */
+	Assert(path->num_workers > 0);
+	N = (double) path->num_workers + 1;
+	logN = LOG2(N);
+
+	/* Assumed cost per tuple comparison */
+	comparison_cost = 2.0 * cpu_operator_cost;
+
+	/* Heap creation cost */
+	startup_cost += comparison_cost * N * logN;
+
+	/* Per-tuple heap maintenance cost */
+	run_cost += path->path.rows * comparison_cost * logN;
+
+	/* small cost for heap management, like cost_merge_append */
+	run_cost += cpu_operator_cost * path->path.rows;
+
+	/*
+	 * Parallel setup and communication cost.  Since Gather Merge, unlike
+	 * Gather, requires us to block until a tuple is available from every
+	 * worker, we bump the IPC cost up a little bit as compared with Gather.
+	 * For lack of a better idea, charge an extra 5%.
+	 */
+	startup_cost += parallel_setup_cost;
+	run_cost += parallel_tuple_cost * path->path.rows * 1.05;
+
+	path->path.startup_cost = startup_cost + input_startup_cost;
+	path->path.total_cost = (startup_cost + run_cost + input_total_cost);
+}
+
+/*
  * cost_index
  *	  Determines and returns the cost of scanning a relation using an index.
  *
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index fae1f67..f3c6391 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -272,6 +272,8 @@ static ModifyTable *make_modifytable(PlannerInfo *root,
 				 List *resultRelations, List *subplans,
 				 List *withCheckOptionLists, List *returningLists,
 				 List *rowMarks, OnConflictExpr *onconflict, int epqParam);
+static GatherMerge *create_gather_merge_plan(PlannerInfo *root,
+						 GatherMergePath *best_path);
 
 
 /*
@@ -469,6 +471,10 @@ create_plan_recurse(PlannerInfo *root, Path *best_path, int flags)
 											  (LimitPath *) best_path,
 											  flags);
 			break;
+		case T_GatherMerge:
+			plan = (Plan *) create_gather_merge_plan(root,
+											  (GatherMergePath *) best_path);
+			break;
 		default:
 			elog(ERROR, "unrecognized node type: %d",
 				 (int) best_path->pathtype);
@@ -1439,6 +1445,86 @@ create_gather_plan(PlannerInfo *root, GatherPath *best_path)
 }
 
 /*
+ * create_gather_merge_plan
+ *
+ *	  Create a Gather Merge plan for 'best_path' and (recursively)
+ *	  plans for its subpaths.
+ */
+static GatherMerge *
+create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
+{
+	GatherMerge *gm_plan;
+	Plan	   *subplan;
+	List	   *pathkeys = best_path->path.pathkeys;
+	int			numsortkeys;
+	AttrNumber *sortColIdx;
+	Oid		   *sortOperators;
+	Oid		   *collations;
+	bool	   *nullsFirst;
+
+	/* As with Gather, it's best to project away columns in the workers. */
+	subplan = create_plan_recurse(root, best_path->subpath, CP_EXACT_TLIST);
+
+	/* See create_merge_append_plan for why there's no make_xxx function */
+	gm_plan = makeNode(GatherMerge);
+	gm_plan->plan.targetlist = subplan->targetlist;
+	gm_plan->num_workers = best_path->num_workers;
+	copy_generic_path_info(&gm_plan->plan, &best_path->path);
+
+	/* Gather Merge is pointless with no pathkeys; use Gather instead. */
+	Assert(pathkeys != NIL);
+
+	/* Compute sort column info, and adjust GatherMerge tlist as needed */
+	(void) prepare_sort_from_pathkeys(&gm_plan->plan, pathkeys,
+									  best_path->path.parent->relids,
+									  NULL,
+									  true,
+									  &gm_plan->numCols,
+									  &gm_plan->sortColIdx,
+									  &gm_plan->sortOperators,
+									  &gm_plan->collations,
+									  &gm_plan->nullsFirst);
+
+
+	/* Compute sort column info, and adjust subplan's tlist as needed */
+	subplan = prepare_sort_from_pathkeys(subplan, pathkeys,
+										 best_path->subpath->parent->relids,
+										 gm_plan->sortColIdx,
+										 false,
+										 &numsortkeys,
+										 &sortColIdx,
+										 &sortOperators,
+										 &collations,
+										 &nullsFirst);
+
+	/* As for MergeAppend, check that we got the same sort key information. */
+	Assert(numsortkeys == gm_plan->numCols);
+	if (memcmp(sortColIdx, gm_plan->sortColIdx,
+			   numsortkeys * sizeof(AttrNumber)) != 0)
+		elog(ERROR, "GatherMerge child's targetlist doesn't match GatherMerge");
+	Assert(memcmp(sortOperators, gm_plan->sortOperators,
+				  numsortkeys * sizeof(Oid)) == 0);
+	Assert(memcmp(collations, gm_plan->collations,
+				  numsortkeys * sizeof(Oid)) == 0);
+	Assert(memcmp(nullsFirst, gm_plan->nullsFirst,
+				  numsortkeys * sizeof(bool)) == 0);
+
+	/* Now, insert a Sort node if subplan isn't sufficiently ordered */
+	if (!pathkeys_contained_in(pathkeys, best_path->subpath->pathkeys))
+		subplan = (Plan *) make_sort(subplan, numsortkeys,
+									 sortColIdx, sortOperators,
+									 collations, nullsFirst);
+
+	/* Now insert the subplan under GatherMerge. */
+	gm_plan->plan.lefttree = subplan;
+
+	/* use parallel mode for parallel plans. */
+	root->glob->parallelModeNeeded = true;
+
+	return gm_plan;
+}
+
+/*
  * create_projection_plan
  *
  *	  Create a plan tree to do a projection step and (recursively) plans
@@ -2277,7 +2363,6 @@ create_limit_plan(PlannerInfo *root, LimitPath *best_path, int flags)
 	return plan;
 }
 
-
 /*****************************************************************************
  *
  *	BASE-RELATION SCAN METHODS
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 4b5902f..6e408cd 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -3712,8 +3712,7 @@ create_grouping_paths(PlannerInfo *root,
 
 		/*
 		 * Now generate a complete GroupAgg Path atop of the cheapest partial
-		 * path. We need only bother with the cheapest path here, as the
-		 * output of Gather is never sorted.
+		 * path.  We can do this using either Gather or Gather Merge.
 		 */
 		if (grouped_rel->partial_pathlist)
 		{
@@ -3760,6 +3759,70 @@ create_grouping_paths(PlannerInfo *root,
 										   parse->groupClause,
 										   (List *) parse->havingQual,
 										   dNumGroups));
+
+			/*
+			 * The point of using Gather Merge rather than Gather is that it
+			 * can preserve the ordering of the input path, so there's no
+			 * reason to try it unless (1) it's possible to produce more than
+			 * one output row and (2) we want the output path to be ordered.
+			 */
+			if (parse->groupClause != NIL && root->group_pathkeys != NIL)
+			{
+				foreach(lc, grouped_rel->partial_pathlist)
+				{
+					Path	   *subpath = (Path *) lfirst(lc);
+					Path	   *gmpath;
+					double		total_groups;
+
+					/*
+					 * It's useful to consider paths that are already properly
+					 * ordered for Gather Merge, because those don't need a
+					 * sort.  It's also useful to consider the cheapest path,
+					 * because sorting it in parallel and then doing Gather
+					 * Merge may be better than doing an unordered Gather
+					 * followed by a sort.  But there's no point in
+					 * considering non-cheapest paths that aren't already
+					 * sorted correctly.
+					 */
+					if (path != subpath &&
+						!pathkeys_contained_in(root->group_pathkeys,
+											   subpath->pathkeys))
+						continue;
+
+					total_groups = subpath->rows * subpath->parallel_workers;
+
+					gmpath = (Path *)
+						create_gather_merge_path(root,
+												 grouped_rel,
+												 subpath,
+												 NULL,
+												 root->group_pathkeys,
+												 NULL,
+												 &total_groups);
+
+					if (parse->hasAggs)
+						add_path(grouped_rel, (Path *)
+								 create_agg_path(root,
+												 grouped_rel,
+												 gmpath,
+												 target,
+								 parse->groupClause ? AGG_SORTED : AGG_PLAIN,
+												 AGGSPLIT_FINAL_DESERIAL,
+												 parse->groupClause,
+												 (List *) parse->havingQual,
+												 &agg_final_costs,
+												 dNumGroups));
+					else
+						add_path(grouped_rel, (Path *)
+								 create_group_path(root,
+												   grouped_rel,
+												   gmpath,
+												   target,
+												   parse->groupClause,
+												   (List *) parse->havingQual,
+												   dNumGroups));
+				}
+			}
 		}
 	}
 
@@ -3857,6 +3920,16 @@ create_grouping_paths(PlannerInfo *root,
 	/* Now choose the best path(s) */
 	set_cheapest(grouped_rel);
 
+	/*
+	 * We've been using the partial pathlist for the grouped relation to hold
+	 * partially aggregated paths, but that's actually a little bit bogus
+	 * because it's unsafe for later planning stages -- like ordered_rel ---
+	 * to get the idea that they can use these partial paths as if they didn't
+	 * need a FinalizeAggregate step.  Zap the partial pathlist at this stage
+	 * so we don't get confused.
+	 */
+	grouped_rel->partial_pathlist = NIL;
+
 	return grouped_rel;
 }
 
@@ -4326,6 +4399,56 @@ create_ordered_paths(PlannerInfo *root,
 	}
 
 	/*
+	 * generate_gather_paths() will have already generated a simple Gather
+	 * path for the best parallel path, if any, and the loop above will have
+	 * considered sorting it.  Similarly, generate_gather_paths() will also
+	 * have generated order-preserving Gather Merge plans which can be used
+	 * without sorting if they happen to match the sort_pathkeys, and the loop
+	 * above will have handled those as well.  However, there's one more
+	 * possibility: it may make sense to sort the cheapest partial path
+	 * according to the required output order and then use Gather Merge.
+	 */
+	if (ordered_rel->consider_parallel && root->sort_pathkeys != NIL &&
+		input_rel->partial_pathlist != NIL)
+	{
+		Path	   *cheapest_partial_path;
+
+		cheapest_partial_path = linitial(input_rel->partial_pathlist);
+
+		/*
+		 * If cheapest partial path doesn't need a sort, this is redundant
+		 * with what's already been tried.
+		 */
+		if (!pathkeys_contained_in(root->sort_pathkeys,
+								   cheapest_partial_path->pathkeys))
+		{
+			Path	   *path;
+			double		total_groups;
+
+			path = (Path *) create_sort_path(root,
+											 ordered_rel,
+											 cheapest_partial_path,
+											 root->sort_pathkeys,
+											 limit_tuples);
+
+			total_groups = cheapest_partial_path->rows *
+				cheapest_partial_path->parallel_workers;
+			path = (Path *)
+				create_gather_merge_path(root, ordered_rel,
+										 path,
+										 target, root->sort_pathkeys, NULL,
+										 &total_groups);
+
+			/* Add projection step if needed */
+			if (path->pathtarget != target)
+				path = apply_projection_to_path(root, ordered_rel,
+												path, target);
+
+			add_path(ordered_rel, path);
+		}
+	}
+
+	/*
 	 * If there is an FDW that's responsible for all baserels of the query,
 	 * let it consider adding ForeignPaths.
 	 */
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index be267b9..cc1c66e 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -604,6 +604,7 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 			break;
 
 		case T_Gather:
+		case T_GatherMerge:
 			set_upper_references(root, plan, rtoffset);
 			break;
 
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 9fc7489..a0c0cd8 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2686,6 +2686,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		case T_Sort:
 		case T_Unique:
 		case T_Gather:
+		case T_GatherMerge:
 		case T_SetOp:
 		case T_Group:
 			/* no node-type-specific fields need fixing */
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index f440875..29aaa73 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1630,6 +1630,66 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
 }
 
 /*
+ * create_gather_merge_path
+ *
+ *	  Creates a path corresponding to a gather merge scan, returning
+ *	  the pathnode.
+ */
+GatherMergePath *
+create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
+						 PathTarget *target, List *pathkeys,
+						 Relids required_outer, double *rows)
+{
+	GatherMergePath *pathnode = makeNode(GatherMergePath);
+	Cost			 input_startup_cost = 0;
+	Cost			 input_total_cost = 0;
+
+	Assert(subpath->parallel_safe);
+	Assert(pathkeys);
+
+	pathnode->path.pathtype = T_GatherMerge;
+	pathnode->path.parent = rel;
+	pathnode->path.param_info = get_baserel_parampathinfo(root, rel,
+														  required_outer);
+	pathnode->path.parallel_aware = false;
+
+	pathnode->subpath = subpath;
+	pathnode->num_workers = subpath->parallel_workers;
+	pathnode->path.pathkeys = pathkeys;
+	pathnode->path.pathtarget = target ? target : rel->reltarget;
+	pathnode->path.rows += subpath->rows;
+
+	if (pathkeys_contained_in(pathkeys, subpath->pathkeys))
+	{
+		/* Subpath is adequately ordered, we won't need to sort it */
+		input_startup_cost += subpath->startup_cost;
+		input_total_cost += subpath->total_cost;
+	}
+	else
+	{
+		/* We'll need to insert a Sort node, so include cost for that */
+		Path		sort_path;		/* dummy for result of cost_sort */
+
+		cost_sort(&sort_path,
+				  root,
+				  pathkeys,
+				  subpath->total_cost,
+				  subpath->rows,
+				  subpath->pathtarget->width,
+				  0.0,
+				  work_mem,
+				  -1);
+		input_startup_cost += sort_path.startup_cost;
+		input_total_cost += sort_path.total_cost;
+	}
+
+	cost_gather_merge(pathnode, root, rel, pathnode->path.param_info,
+					  input_startup_cost, input_total_cost, rows);
+
+	return pathnode;
+}
+
+/*
  * translate_sub_tlist - get subquery column numbers represented by tlist
  *
  * The given targetlist usually contains only Vars referencing the given relid.
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 5f43b1e..130f747 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -895,6 +895,15 @@ static struct config_bool ConfigureNamesBool[] =
 		true,
 		NULL, NULL, NULL
 	},
+	{
+		{"enable_gathermerge", PGC_USERSET, QUERY_TUNING_METHOD,
+			gettext_noop("Enables the planner's use of gather merge plans."),
+			NULL
+		},
+		&enable_gathermerge,
+		true,
+		NULL, NULL, NULL
+	},
 
 	{
 		{"geqo", PGC_USERSET, QUERY_TUNING_GEQO,
diff --git a/src/include/executor/nodeGatherMerge.h b/src/include/executor/nodeGatherMerge.h
new file mode 100644
index 0000000..3c8b42b
--- /dev/null
+++ b/src/include/executor/nodeGatherMerge.h
@@ -0,0 +1,27 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.h
+ *		prototypes for nodeGatherMerge.c
+ *
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/executor/nodeGatherMerge.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef NODEGATHERMERGE_H
+#define NODEGATHERMERGE_H
+
+#include "nodes/execnodes.h"
+
+extern GatherMergeState *ExecInitGatherMerge(GatherMerge * node,
+					EState *estate,
+					int eflags);
+extern TupleTableSlot *ExecGatherMerge(GatherMergeState * node);
+extern void ExecEndGatherMerge(GatherMergeState * node);
+extern void ExecReScanGatherMerge(GatherMergeState * node);
+extern void ExecShutdownGatherMerge(GatherMergeState * node);
+
+#endif   /* NODEGATHERMERGE_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index f9bcdd6..f4dfb7a 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -2004,6 +2004,35 @@ typedef struct GatherState
 } GatherState;
 
 /* ----------------
+ * GatherMergeState information
+ *
+ *		Gather merge nodes launch 1 or more parallel workers, run a
+ *		subplan which produces sorted output in each worker, and then
+ *		merge the results into a single sorted stream.
+ * ----------------
+ */
+struct GMReaderTupleBuffer;		/* private in nodeGatherMerge.c */
+
+typedef struct GatherMergeState
+{
+	PlanState	ps;				/* its first field is NodeTag */
+	bool		initialized;
+	struct ParallelExecutorInfo *pei;
+	int			nreaders;
+	int			nworkers_launched;
+	struct TupleQueueReader **reader;
+	TupleDesc	tupDesc;
+	TupleTableSlot **gm_slots;
+	struct binaryheap *gm_heap; /* binary heap of slot indices */
+	bool		gm_initialized; /* gather merge initialized? */
+	bool		need_to_scan_locally;
+	int			gm_nkeys;
+	SortSupport gm_sortkeys;	/* array of length gm_nkeys */
+	struct GMReaderTupleBuffer *gm_tuple_buffers;		/* tuple buffer per
+														 * reader */
+} GatherMergeState;
+
+/* ----------------
  *	 HashState information
  * ----------------
  */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index fa4932a..4c2ce74 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -76,6 +76,7 @@ typedef enum NodeTag
 	T_WindowAgg,
 	T_Unique,
 	T_Gather,
+	T_GatherMerge,
 	T_Hash,
 	T_SetOp,
 	T_LockRows,
@@ -125,6 +126,7 @@ typedef enum NodeTag
 	T_WindowAggState,
 	T_UniqueState,
 	T_GatherState,
+	T_GatherMergeState,
 	T_HashState,
 	T_SetOpState,
 	T_LockRowsState,
@@ -246,6 +248,7 @@ typedef enum NodeTag
 	T_MaterialPath,
 	T_UniquePath,
 	T_GatherPath,
+	T_GatherMergePath,
 	T_ProjectionPath,
 	T_ProjectSetPath,
 	T_SortPath,
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index f72f7a8..8dbce7a 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -785,6 +785,22 @@ typedef struct Gather
 	bool		invisible;		/* suppress EXPLAIN display (for testing)? */
 } Gather;
 
+/* ------------
+ *		gather merge node
+ * ------------
+ */
+typedef struct GatherMerge
+{
+	Plan		plan;
+	int			num_workers;
+	/* remaining fields are just like the sort-key info in struct Sort */
+	int			numCols;		/* number of sort-key columns */
+	AttrNumber *sortColIdx;		/* their indexes in the target list */
+	Oid		   *sortOperators;	/* OIDs of operators to sort them by */
+	Oid		   *collations;		/* OIDs of collations */
+	bool	   *nullsFirst;		/* NULLS FIRST/LAST directions */
+} GatherMerge;
+
 /* ----------------
  *		hash build node
  *
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index 643be54..291318e 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -1203,6 +1203,19 @@ typedef struct GatherPath
 } GatherPath;
 
 /*
+ * GatherMergePath runs several copies of a plan in parallel and
+ * collects the results. For Gather Merge, the parallel leader always
+ * executes the plan as well.
+ */
+typedef struct GatherMergePath
+{
+	Path		path;
+	Path	   *subpath;		/* path for each worker */
+	int			num_workers;	/* number of workers sought to help */
+} GatherMergePath;
+
+
+/*
  * All join-type paths share these fields.
  */
 
diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h
index 39376ec..7ceb4ca 100644
--- a/src/include/optimizer/cost.h
+++ b/src/include/optimizer/cost.h
@@ -66,6 +66,7 @@ extern bool enable_nestloop;
 extern bool enable_material;
 extern bool enable_mergejoin;
 extern bool enable_hashjoin;
+extern bool enable_gathermerge;
 extern int	constraint_exclusion;
 
 extern double clamp_row_est(double nrows);
@@ -198,5 +199,9 @@ extern Selectivity clause_selectivity(PlannerInfo *root,
 				   int varRelid,
 				   JoinType jointype,
 				   SpecialJoinInfo *sjinfo);
+extern void cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+							  RelOptInfo *rel, ParamPathInfo *param_info,
+							  Cost input_startup_cost, Cost input_total_cost,
+							  double *rows);
 
 #endif   /* COST_H */
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index 7b41317..e0ab894 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -76,6 +76,13 @@ extern UniquePath *create_unique_path(PlannerInfo *root, RelOptInfo *rel,
 extern GatherPath *create_gather_path(PlannerInfo *root,
 				   RelOptInfo *rel, Path *subpath, PathTarget *target,
 				   Relids required_outer, double *rows);
+extern GatherMergePath *create_gather_merge_path(PlannerInfo *root,
+												 RelOptInfo *rel,
+												 Path *subpath,
+												 PathTarget *target,
+												 List *pathkeys,
+												 Relids required_outer,
+												 double *rows);
 extern SubqueryScanPath *create_subqueryscan_path(PlannerInfo *root,
 						 RelOptInfo *rel, Path *subpath,
 						 List *pathkeys, Relids required_outer);
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
index 56481de..f739b22 100644
--- a/src/test/regress/expected/rangefuncs.out
+++ b/src/test/regress/expected/rangefuncs.out
@@ -2,6 +2,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
          name         | setting 
 ----------------------+---------
  enable_bitmapscan    | on
+ enable_gathermerge   | on
  enable_hashagg       | on
  enable_hashjoin      | on
  enable_indexonlyscan | on
@@ -12,7 +13,7 @@ SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
  enable_seqscan       | on
  enable_sort          | on
  enable_tidscan       | on
-(11 rows)
+(12 rows)
 
 CREATE TABLE foo2(fooid int, f2 int);
 INSERT INTO foo2 VALUES(1, 11);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 993880d..5633386 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -777,6 +777,9 @@ GV
 Gather
 GatherPath
 GatherState
+GatherMerge
+GatherMergePath
+GatherMergeState
 Gene
 GenericCosts
 GenericExprState
#33Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Rushabh Lathia (#32)
1 attachment(s)
Re: Gather Merge

Due to the recent commit below, the patch no longer applies cleanly on
the master branch.

commit d002f16c6ec38f76d1ee97367ba6af3000d441d0
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Mon Jan 30 17:15:42 2017 -0500

Add a regression test script dedicated to exercising system views.

Please find attached the latest patch.

On Wed, Feb 1, 2017 at 5:55 PM, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:

I am sorry for the delay; here is the latest rebased patch.

My colleague Neha Sharma reported one regression with the patch, where the
EXPLAIN output for the Sort node under Gather Merge always showed the
cost as zero:

explain analyze select '' AS "xxx" from pgbench_accounts where filler
like '%foo%' order by aid;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------
Gather Merge (cost=47169.81..70839.91 rows=197688 width=36) (actual
time=406.297..653.572 rows=200000 loops=1)
Workers Planned: 4
Workers Launched: 4
-> Sort (*cost=0.00..0.00 rows=0 width=0*) (actual
time=368.945..391.124 rows=40000 loops=5)
Sort Key: aid
Sort Method: quicksort Memory: 3423kB
-> Parallel Seq Scan on pgbench_accounts (cost=0.00..42316.60
rows=49422 width=36) (actual time=296.612..338.873 rows=40000 loops=5)
Filter: (filler ~~ '%foo%'::text)
Rows Removed by Filter: 360000
Planning time: 0.184 ms
Execution time: 734.963 ms

This patch also fixes that issue.

On Wed, Feb 1, 2017 at 11:27 AM, Michael Paquier <michael.paquier@gmail.com> wrote:

On Mon, Jan 23, 2017 at 6:51 PM, Kuntal Ghosh
<kuntalghosh.2007@gmail.com> wrote:

On Wed, Jan 18, 2017 at 11:31 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

The patch needs a rebase after the commit 69f4b9c85f168ae006929eec4.

Is an update going to be provided? I have moved this patch to next CF
with "waiting on author" as status.
--
Michael

--
Rushabh Lathia

--
Rushabh Lathia

Attachments:

gather-merge-v7.patch (binary/octet-stream)
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index fb5d647..6959b51 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -3496,6 +3496,20 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-enable-gathermerge" xreflabel="enable_gathermerge">
+      <term><varname>enable_gathermerge</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>enable_gathermerge</> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        Enables or disables the query planner's use of gather
+        merge plan types. The default is <literal>on</>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-enable-hashagg" xreflabel="enable_hashagg">
       <term><varname>enable_hashagg</varname> (<type>boolean</type>)
       <indexterm>
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 0a67be0..6ac9ed8 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -905,6 +905,9 @@ ExplainNode(PlanState *planstate, List *ancestors,
 		case T_Gather:
 			pname = sname = "Gather";
 			break;
+		case T_GatherMerge:
+			pname = sname = "Gather Merge";
+			break;
 		case T_IndexScan:
 			pname = sname = "Index Scan";
 			break;
@@ -1394,6 +1397,26 @@ ExplainNode(PlanState *planstate, List *ancestors,
 					ExplainPropertyBool("Single Copy", gather->single_copy, es);
 			}
 			break;
+		case T_GatherMerge:
+			{
+				GatherMerge *gm = (GatherMerge *) plan;
+
+				show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+				if (plan->qual)
+					show_instrumentation_count("Rows Removed by Filter", 1,
+											   planstate, es);
+				ExplainPropertyInteger("Workers Planned",
+									   gm->num_workers, es);
+				if (es->analyze)
+				{
+					int			nworkers;
+
+					nworkers = ((GatherMergeState *) planstate)->nworkers_launched;
+					ExplainPropertyInteger("Workers Launched",
+										   nworkers, es);
+				}
+			}
+			break;
 		case T_FunctionScan:
 			if (es->verbose)
 			{
diff --git a/src/backend/executor/Makefile b/src/backend/executor/Makefile
index 2a2b7eb..c95747e 100644
--- a/src/backend/executor/Makefile
+++ b/src/backend/executor/Makefile
@@ -20,7 +20,7 @@ OBJS = execAmi.o execCurrent.o execGrouping.o execIndexing.o execJunk.o \
        nodeBitmapHeapscan.o nodeBitmapIndexscan.o \
        nodeCustom.o nodeFunctionscan.o nodeGather.o \
        nodeHash.o nodeHashjoin.o nodeIndexscan.o nodeIndexonlyscan.o \
-       nodeLimit.o nodeLockRows.o \
+       nodeLimit.o nodeLockRows.o nodeGatherMerge.o \
        nodeMaterial.o nodeMergeAppend.o nodeMergejoin.o nodeModifyTable.o \
        nodeNestloop.o nodeProjectSet.o nodeRecursiveunion.o nodeResult.o \
        nodeSamplescan.o nodeSeqscan.o nodeSetOp.o nodeSort.o nodeUnique.o \
diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index 0dd95c6..f00496b 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -89,6 +89,7 @@
 #include "executor/nodeForeignscan.h"
 #include "executor/nodeFunctionscan.h"
 #include "executor/nodeGather.h"
+#include "executor/nodeGatherMerge.h"
 #include "executor/nodeGroup.h"
 #include "executor/nodeHash.h"
 #include "executor/nodeHashjoin.h"
@@ -320,6 +321,11 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
 												  estate, eflags);
 			break;
 
+		case T_GatherMerge:
+			result = (PlanState *) ExecInitGatherMerge((GatherMerge *) node,
+													   estate, eflags);
+			break;
+
 		case T_Hash:
 			result = (PlanState *) ExecInitHash((Hash *) node,
 												estate, eflags);
@@ -525,6 +531,10 @@ ExecProcNode(PlanState *node)
 			result = ExecGather((GatherState *) node);
 			break;
 
+		case T_GatherMergeState:
+			result = ExecGatherMerge((GatherMergeState *) node);
+			break;
+
 		case T_HashState:
 			result = ExecHash((HashState *) node);
 			break;
@@ -687,6 +697,10 @@ ExecEndNode(PlanState *node)
 			ExecEndGather((GatherState *) node);
 			break;
 
+		case T_GatherMergeState:
+			ExecEndGatherMerge((GatherMergeState *) node);
+			break;
+
 		case T_IndexScanState:
 			ExecEndIndexScan((IndexScanState *) node);
 			break;
@@ -820,6 +834,9 @@ ExecShutdownNode(PlanState *node)
 		case T_GatherState:
 			ExecShutdownGather((GatherState *) node);
 			break;
+		case T_GatherMergeState:
+			ExecShutdownGatherMerge((GatherMergeState *) node);
+			break;
 		default:
 			break;
 	}
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
new file mode 100644
index 0000000..84c1677
--- /dev/null
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -0,0 +1,687 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.c
+ *		Scan a plan in multiple workers, and do order-preserving merge.
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/backend/executor/nodeGatherMerge.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/relscan.h"
+#include "access/xact.h"
+#include "executor/execdebug.h"
+#include "executor/execParallel.h"
+#include "executor/nodeGatherMerge.h"
+#include "executor/nodeSubplan.h"
+#include "executor/tqueue.h"
+#include "lib/binaryheap.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+#include "utils/rel.h"
+
+/*
+ * Tuple array for each worker
+ */
+typedef struct GMReaderTupleBuffer
+{
+	HeapTuple  *tuple;			/* array of buffered tuples */
+	int			readCounter;	/* index of the next tuple to return */
+	int			nTuples;		/* number of tuples currently buffered */
+	bool		done;			/* true once the reader is exhausted */
+}	GMReaderTupleBuffer;
+
+/*
+ * When we read tuples from workers, it's a good idea to read several at once
+ * for efficiency when possible: this minimizes context-switching overhead.
+ * But reading too many at a time wastes memory without improving performance.
+ */
+#define MAX_TUPLE_STORE 10
+
+static int32 heap_compare_slots(Datum a, Datum b, void *arg);
+static TupleTableSlot *gather_merge_getnext(GatherMergeState *gm_state);
+static HeapTuple gm_readnext_tuple(GatherMergeState *gm_state, int nreader,
+				  bool nowait, bool *done);
+static void gather_merge_init(GatherMergeState *gm_state);
+static void ExecShutdownGatherMergeWorkers(GatherMergeState *node);
+static bool gather_merge_readnext(GatherMergeState *gm_state, int reader,
+					  bool nowait);
+static void form_tuple_array(GatherMergeState *gm_state, int reader);
+
+/* ----------------------------------------------------------------
+ *		ExecInitGatherMerge
+ * ----------------------------------------------------------------
+ */
+GatherMergeState *
+ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags)
+{
+	GatherMergeState *gm_state;
+	Plan	   *outerNode;
+	bool		hasoid;
+	TupleDesc	tupDesc;
+
+	/* Gather merge node doesn't have innerPlan node. */
+	Assert(innerPlan(node) == NULL);
+
+	/*
+	 * create state structure
+	 */
+	gm_state = makeNode(GatherMergeState);
+	gm_state->ps.plan = (Plan *) node;
+	gm_state->ps.state = estate;
+
+	/*
+	 * Miscellaneous initialization
+	 *
+	 * create expression context for node
+	 */
+	ExecAssignExprContext(estate, &gm_state->ps);
+
+	/*
+	 * initialize child expressions
+	 */
+	gm_state->ps.targetlist = (List *)
+		ExecInitExpr((Expr *) node->plan.targetlist,
+					 (PlanState *) gm_state);
+	gm_state->ps.qual = (List *)
+		ExecInitExpr((Expr *) node->plan.qual,
+					 (PlanState *) gm_state);
+
+	/*
+	 * tuple table initialization
+	 */
+	ExecInitResultTupleSlot(estate, &gm_state->ps);
+
+	/*
+	 * now initialize outer plan
+	 */
+	outerNode = outerPlan(node);
+	outerPlanState(gm_state) = ExecInitNode(outerNode, estate, eflags);
+
+	/*
+	 * Initialize result tuple type and projection info.
+	 */
+	ExecAssignResultTypeFromTL(&gm_state->ps);
+	ExecAssignProjectionInfo(&gm_state->ps, NULL);
+
+	gm_state->gm_initialized = false;
+
+	/*
+	 * initialize sort-key information
+	 */
+	if (node->numCols)
+	{
+		int			i;
+
+		gm_state->gm_nkeys = node->numCols;
+		gm_state->gm_sortkeys =
+			palloc0(sizeof(SortSupportData) * node->numCols);
+
+		for (i = 0; i < node->numCols; i++)
+		{
+			SortSupport sortKey = gm_state->gm_sortkeys + i;
+
+			sortKey->ssup_cxt = CurrentMemoryContext;
+			sortKey->ssup_collation = node->collations[i];
+			sortKey->ssup_nulls_first = node->nullsFirst[i];
+			sortKey->ssup_attno = node->sortColIdx[i];
+
+			/*
+			 * We don't perform abbreviated key conversion here, for the same
+			 * reasons that it isn't used in MergeAppend
+			 */
+			sortKey->abbreviate = false;
+
+			PrepareSortSupportFromOrderingOp(node->sortOperators[i], sortKey);
+		}
+	}
+
+	/*
+	 * store the tuple descriptor into gather merge state, so we can use it
+	 * later while initializing the gather merge slots.
+	 */
+	if (!ExecContextForcesOids(&gm_state->ps, &hasoid))
+		hasoid = false;
+	tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid);
+	gm_state->tupDesc = tupDesc;
+
+	return gm_state;
+}
+
+/* ----------------------------------------------------------------
+ *		ExecGatherMerge(node)
+ *
+ *		Scans the relation via multiple workers and returns
+ *		the next qualifying tuple.
+ * ----------------------------------------------------------------
+ */
+TupleTableSlot *
+ExecGatherMerge(GatherMergeState *node)
+{
+	TupleTableSlot *slot;
+	ExprContext *econtext;
+	int			i;
+
+	/*
+	 * As with Gather, we don't launch workers until this node is actually
+	 * executed.
+	 */
+	if (!node->initialized)
+	{
+		EState	   *estate = node->ps.state;
+		GatherMerge *gm = (GatherMerge *) node->ps.plan;
+
+		/*
+		 * Sometimes we might have to run without parallelism; but if parallel
+		 * mode is active then we can try to fire up some workers.
+		 */
+		if (gm->num_workers > 0 && IsInParallelMode())
+		{
+			ParallelContext *pcxt;
+
+			/* Initialize data structures for workers. */
+			if (!node->pei)
+				node->pei = ExecInitParallelPlan(node->ps.lefttree,
+												 estate,
+												 gm->num_workers);
+
+			/* Try to launch workers. */
+			pcxt = node->pei->pcxt;
+			LaunchParallelWorkers(pcxt);
+			node->nworkers_launched = pcxt->nworkers_launched;
+
+			/* Set up tuple queue readers to read the results. */
+			if (pcxt->nworkers_launched > 0)
+			{
+				node->nreaders = 0;
+				node->reader = palloc(pcxt->nworkers_launched *
+									  sizeof(TupleQueueReader *));
+
+				Assert(gm->numCols);
+
+				for (i = 0; i < pcxt->nworkers_launched; ++i)
+				{
+					shm_mq_set_handle(node->pei->tqueue[i],
+									  pcxt->worker[i].bgwhandle);
+					node->reader[node->nreaders++] =
+						CreateTupleQueueReader(node->pei->tqueue[i],
+											   node->tupDesc);
+				}
+			}
+			else
+			{
+				/* No workers?	Then never mind. */
+				ExecShutdownGatherMergeWorkers(node);
+			}
+		}
+
+		/* always allow leader to participate */
+		node->need_to_scan_locally = true;
+		node->initialized = true;
+	}
+
+	/*
+	 * Reset per-tuple memory context to free any expression evaluation
+	 * storage allocated in the previous tuple cycle.
+	 */
+	econtext = node->ps.ps_ExprContext;
+	ResetExprContext(econtext);
+
+	/*
+	 * Get next tuple, either from one of our workers, or by running the
+	 * plan ourselves.
+	 */
+	slot = gather_merge_getnext(node);
+	if (TupIsNull(slot))
+		return NULL;
+
+	/*
+	 * form the result tuple using ExecProject(), and return it --- unless
+	 * the projection produces an empty set, in which case we must loop
+	 * back around for another tuple
+	 */
+	econtext->ecxt_outertuple = slot;
+	return ExecProject(node->ps.ps_ProjInfo);
+}
+
+/* ----------------------------------------------------------------
+ *		ExecEndGatherMerge
+ *
+ *		frees any storage allocated through C routines.
+ * ----------------------------------------------------------------
+ */
+void
+ExecEndGatherMerge(GatherMergeState *node)
+{
+	ExecShutdownGatherMerge(node);
+	ExecFreeExprContext(&node->ps);
+	ExecClearTuple(node->ps.ps_ResultTupleSlot);
+	ExecEndNode(outerPlanState(node));
+}
+
+/* ----------------------------------------------------------------
+ *		ExecShutdownGatherMerge
+ *
+ *		Destroy the setup for parallel workers including parallel context.
+ *		Collect all the stats after workers are stopped, else some work
+ *		done by workers won't be accounted.
+ * ----------------------------------------------------------------
+ */
+void
+ExecShutdownGatherMerge(GatherMergeState *node)
+{
+	ExecShutdownGatherMergeWorkers(node);
+
+	/* Now destroy the parallel context. */
+	if (node->pei != NULL)
+	{
+		ExecParallelCleanup(node->pei);
+		node->pei = NULL;
+	}
+}
+
+/* ----------------------------------------------------------------
+ *		ExecShutdownGatherMergeWorkers
+ *
+ *		Destroy the parallel workers.  Collect all the stats after
+ *		workers are stopped, else some work done by workers won't be
+ *		accounted.
+ * ----------------------------------------------------------------
+ */
+static void
+ExecShutdownGatherMergeWorkers(GatherMergeState *node)
+{
+	/* Shut down tuple queue readers before shutting down workers. */
+	if (node->reader != NULL)
+	{
+		int			i;
+
+		for (i = 0; i < node->nreaders; ++i)
+			if (node->reader[i])
+				DestroyTupleQueueReader(node->reader[i]);
+
+		pfree(node->reader);
+		node->reader = NULL;
+	}
+
+	/* Now shut down the workers. */
+	if (node->pei != NULL)
+		ExecParallelFinish(node->pei);
+}
+
+/* ----------------------------------------------------------------
+ *		ExecReScanGatherMerge
+ *
+ *		Re-initialize the workers and rescans a relation via them.
+ * ----------------------------------------------------------------
+ */
+void
+ExecReScanGatherMerge(GatherMergeState *node)
+{
+	/*
+	 * Re-initialize the parallel workers to perform rescan of relation. We
+	 * want to gracefully shutdown all the workers so that they should be able
+	 * to propagate any error or other information to master backend before
+	 * dying.  Parallel context will be reused for rescan.
+	 */
+	ExecShutdownGatherMergeWorkers(node);
+
+	node->initialized = false;
+
+	if (node->pei)
+		ExecParallelReinitialize(node->pei);
+
+	ExecReScan(node->ps.lefttree);
+}
+
+/*
+ * Initialize the Gather merge tuple read.
+ *
+ * Pull at least a single tuple from each worker + leader and set up the heap.
+ */
+static void
+gather_merge_init(GatherMergeState *gm_state)
+{
+	int			nreaders = gm_state->nreaders;
+	bool		initialize = true;
+	int			i;
+
+	/*
+	 * Allocate gm_slots for the number of workers, plus one more slot for the
+	 * leader; the last slot is always the leader's.  The leader reads tuples
+	 * by calling ExecProcNode(), which returns a TupleTableSlot that is later
+	 * assigned directly into its gm_slot, so just initialize the leader's
+	 * gm_slot to NULL.  For the other slots, the code below calls
+	 * ExecInitExtraTupleSlot() to initialize the worker slots.
+	 */
+	gm_state->gm_slots =
+		palloc((gm_state->nreaders + 1) * sizeof(TupleTableSlot *));
+	gm_state->gm_slots[gm_state->nreaders] = NULL;
+
+	/* Initialize the tuple slot and tuple array for each worker */
+	gm_state->gm_tuple_buffers =
+		(GMReaderTupleBuffer *) palloc0(sizeof(GMReaderTupleBuffer) *
+										(gm_state->nreaders + 1));
+	for (i = 0; i < gm_state->nreaders; i++)
+	{
+		/* Allocate the tuple array with MAX_TUPLE_STORE size */
+		gm_state->gm_tuple_buffers[i].tuple =
+			(HeapTuple *) palloc0(sizeof(HeapTuple) * MAX_TUPLE_STORE);
+
+		/* Initialize slot for worker */
+		gm_state->gm_slots[i] = ExecInitExtraTupleSlot(gm_state->ps.state);
+		ExecSetSlotDescriptor(gm_state->gm_slots[i],
+							  gm_state->tupDesc);
+	}
+
+	/* Allocate the resources for the merge */
+	gm_state->gm_heap = binaryheap_allocate(gm_state->nreaders + 1,
+											heap_compare_slots,
+											gm_state);
+
+	/*
+	 * First, try to read a tuple from each worker (including the leader) in
+	 * nowait mode, so that we initialize the read from each worker as well as
+	 * the leader.  After this, if all active workers are unable to produce a
+	 * tuple, re-read, this time in wait mode.  For workers that were able to
+	 * produce a tuple in the earlier loop and are still active, just try to
+	 * fill the tuple array if more tuples are available.
+	 */
+reread:
+	for (i = 0; i < nreaders + 1; i++)
+	{
+		if (!gm_state->gm_tuple_buffers[i].done &&
+			(TupIsNull(gm_state->gm_slots[i]) ||
+			 gm_state->gm_slots[i]->tts_isempty))
+		{
+			if (gather_merge_readnext(gm_state, i, initialize))
+			{
+				binaryheap_add_unordered(gm_state->gm_heap,
+										 Int32GetDatum(i));
+			}
+		}
+		else
+			form_tuple_array(gm_state, i);
+	}
+	initialize = false;
+
+	for (i = 0; i < nreaders; i++)
+		if (!gm_state->gm_tuple_buffers[i].done &&
+			(TupIsNull(gm_state->gm_slots[i]) ||
+			 gm_state->gm_slots[i]->tts_isempty))
+			goto reread;
+
+	binaryheap_build(gm_state->gm_heap);
+	gm_state->gm_initialized = true;
+}
+
+/*
+ * Clear out the tuple table slot for each gather merge input,
+ * and return one of the cleared slots.
+ */
+static TupleTableSlot *
+gather_merge_clear_slots(GatherMergeState *gm_state)
+{
+	int			i;
+
+	for (i = 0; i < gm_state->nreaders; i++)
+	{
+		pfree(gm_state->gm_tuple_buffers[i].tuple);
+		gm_state->gm_slots[i] = ExecClearTuple(gm_state->gm_slots[i]);
+	}
+
+	/* Free tuple array as we don't need it any more */
+	pfree(gm_state->gm_tuple_buffers);
+	/* Free the binaryheap, which was created for sort */
+	binaryheap_free(gm_state->gm_heap);
+
+	/* return any clear slot */
+	return gm_state->gm_slots[0];
+}
+
+/*
+ * Read the next tuple for gather merge.
+ *
+ * Fetch the sorted tuple out of the heap.
+ */
+static TupleTableSlot *
+gather_merge_getnext(GatherMergeState *gm_state)
+{
+	int			i;
+
+	/*
+	 * First time through: pull the first tuple from each participant, and set
+	 * up the heap.
+	 */
+	if (gm_state->gm_initialized == false)
+		gather_merge_init(gm_state);
+	else
+	{
+		/*
+		 * Otherwise, pull the next tuple from whichever participant we
+		 * returned from last time, and reinsert the index into the heap,
+		 * because it might now compare differently against the existing
+		 * elements of the heap.
+		 */
+		i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+
+		if (gather_merge_readnext(gm_state, i, false))
+			binaryheap_replace_first(gm_state->gm_heap, Int32GetDatum(i));
+		else
+			(void) binaryheap_remove_first(gm_state->gm_heap);
+	}
+
+	if (binaryheap_empty(gm_state->gm_heap))
+	{
+		/* All the queues are exhausted, and so is the heap */
+		return gather_merge_clear_slots(gm_state);
+	}
+	else
+	{
+		i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+		return gm_state->gm_slots[i];
+	}
+
+	return gather_merge_clear_slots(gm_state);
+}
+
+/*
+ * Read tuples for the given reader in nowait mode, and store them in the tuple array.
+ */
+static void
+form_tuple_array(GatherMergeState *gm_state, int reader)
+{
+	GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+	int			i;
+
+	/* Last slot is for leader and we don't build tuple array for leader */
+	if (reader == gm_state->nreaders)
+		return;
+
+	/*
+	 * We're here because we've already read all the tuples from the tuple
+	 * array, so reset the counters to zero.
+	 */
+	if (tuple_buffer->nTuples == tuple_buffer->readCounter)
+		tuple_buffer->nTuples = tuple_buffer->readCounter = 0;
+
+	/* Tuple array is already full? */
+	if (tuple_buffer->nTuples == MAX_TUPLE_STORE)
+		return;
+
+	for (i = tuple_buffer->nTuples; i < MAX_TUPLE_STORE; i++)
+	{
+		tuple_buffer->tuple[i] = heap_copytuple(gm_readnext_tuple(gm_state,
+																  reader,
+																  false,
+													   &tuple_buffer->done));
+		if (!HeapTupleIsValid(tuple_buffer->tuple[i]))
+			break;
+		tuple_buffer->nTuples++;
+	}
+}
+
+/*
+ * Store the next tuple for a given reader into the appropriate slot.
+ *
+ * Returns false if the reader is exhausted, and true otherwise.
+ */
+static bool
+gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait)
+{
+	GMReaderTupleBuffer *tuple_buffer;
+	HeapTuple	tup = NULL;
+
+	/*
+	 * If we're being asked to generate a tuple from the leader, then we
+	 * just call ExecProcNode as normal to produce one.
+	 */
+	if (gm_state->nreaders == reader)
+	{
+		if (gm_state->need_to_scan_locally)
+		{
+			PlanState  *outerPlan = outerPlanState(gm_state);
+			TupleTableSlot *outerTupleSlot;
+
+			outerTupleSlot = ExecProcNode(outerPlan);
+
+			if (!TupIsNull(outerTupleSlot))
+			{
+				gm_state->gm_slots[reader] = outerTupleSlot;
+				return true;
+			}
+			gm_state->gm_tuple_buffers[reader].done = true;
+			gm_state->need_to_scan_locally = false;
+		}
+		return false;
+	}
+
+	/* Otherwise, check the state of the relevant tuple buffer. */
+	tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+
+	if (tuple_buffer->nTuples > tuple_buffer->readCounter)
+	{
+		/* Return any tuple previously read that is still buffered. */
+		tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+		tup = tuple_buffer->tuple[tuple_buffer->readCounter++];
+	}
+	else if (tuple_buffer->done)
+	{
+		/* Reader is known to be exhausted. */
+		DestroyTupleQueueReader(gm_state->reader[reader]);
+		gm_state->reader[reader] = NULL;
+		return false;
+	}
+	else
+	{
+		/* Read and buffer next tuple. */
+		tup = heap_copytuple(gm_readnext_tuple(gm_state,
+											   reader,
+											   nowait,
+											   &tuple_buffer->done));
+
+		/*
+		 * Attempt to read more tuples in nowait mode and store them in
+		 * the tuple array.
+		 */
+		if (HeapTupleIsValid(tup))
+			form_tuple_array(gm_state, reader);
+		else
+			return false;
+	}
+
+	Assert(HeapTupleIsValid(tup));
+
+	/* Build the TupleTableSlot for the given tuple */
+	ExecStoreTuple(tup,			/* tuple to store */
+				   gm_state->gm_slots[reader],	/* slot in which to store the
+												 * tuple */
+				   InvalidBuffer,		/* buffer associated with this tuple */
+				   true);		/* pfree this pointer if not from heap */
+
+	return true;
+}
+
+/*
+ * Attempt to read a tuple from given reader.
+ */
+static HeapTuple
+gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait,
+				  bool *done)
+{
+	TupleQueueReader *reader;
+	HeapTuple	tup = NULL;
+	MemoryContext oldContext;
+	MemoryContext tupleContext;
+
+	tupleContext = gm_state->ps.ps_ExprContext->ecxt_per_tuple_memory;
+
+	if (done != NULL)
+		*done = false;
+
+	/* Check for async events, particularly messages from workers. */
+	CHECK_FOR_INTERRUPTS();
+
+	/* Attempt to read a tuple. */
+	reader = gm_state->reader[nreader];
+
+	/* Run TupleQueueReaders in per-tuple context */
+	oldContext = MemoryContextSwitchTo(tupleContext);
+	tup = TupleQueueReaderNext(reader, nowait, done);
+	MemoryContextSwitchTo(oldContext);
+
+	return tup;
+}
+
+/*
+ * We have one slot for each item in the heap array.  We use SlotNumber
+ * to store slot indexes.  This doesn't actually provide any formal
+ * type-safety, but it makes the code more self-documenting.
+ */
+typedef int32 SlotNumber;
+
+/*
+ * Compare the tuples in the two given slots.
+ */
+static int32
+heap_compare_slots(Datum a, Datum b, void *arg)
+{
+	GatherMergeState *node = (GatherMergeState *) arg;
+	SlotNumber	slot1 = DatumGetInt32(a);
+	SlotNumber	slot2 = DatumGetInt32(b);
+
+	TupleTableSlot *s1 = node->gm_slots[slot1];
+	TupleTableSlot *s2 = node->gm_slots[slot2];
+	int			nkey;
+
+	Assert(!TupIsNull(s1));
+	Assert(!TupIsNull(s2));
+
+	for (nkey = 0; nkey < node->gm_nkeys; nkey++)
+	{
+		SortSupport sortKey = node->gm_sortkeys + nkey;
+		AttrNumber	attno = sortKey->ssup_attno;
+		Datum		datum1,
+					datum2;
+		bool		isNull1,
+					isNull2;
+		int			compare;
+
+		datum1 = slot_getattr(s1, attno, &isNull1);
+		datum2 = slot_getattr(s2, attno, &isNull2);
+
+		compare = ApplySortComparator(datum1, isNull1,
+									  datum2, isNull2,
+									  sortKey);
+		if (compare != 0)
+			return -compare;
+	}
+	return 0;
+}
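
(Aside, not part of the patch: a minimal sketch of how this comparator cooperates
with lib/binaryheap.  binaryheap is a max-heap, so returning the negated comparison
makes the slot whose current tuple sorts lowest float to the top, and
binaryheap_first() then always names the stream to emit next.  The field names and
the init sequence below are my assumptions, borrowed from GatherMergeState and the
functions shown above.)

#include "postgres.h"
#include "lib/binaryheap.h"
#include "nodes/execnodes.h"

/* Sketch: load one tuple per source, then heapify on heap_compare_slots. */
static void
build_merge_heap_sketch(GatherMergeState *gm_state)
{
	int			i;

	/* one heap entry per source: nreaders workers plus the leader */
	gm_state->gm_heap = binaryheap_allocate(gm_state->nreaders + 1,
											heap_compare_slots,
											gm_state);

	for (i = 0; i <= gm_state->nreaders; i++)
	{
		if (gather_merge_readnext(gm_state, i, false))
			binaryheap_add_unordered(gm_state->gm_heap, Int32GetDatum(i));
	}
	binaryheap_build(gm_state->gm_heap);

	/*
	 * Because heap_compare_slots negates the comparison, the top of this
	 * max-heap is the slot whose tuple sorts lowest: the next tuple to
	 * emit is gm_slots[DatumGetInt32(binaryheap_first(gm_state->gm_heap))].
	 */
}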
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 30d733e..943f495 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -359,6 +359,31 @@ _copyGather(const Gather *from)
 	return newnode;
 }
 
+/*
+ * _copyGatherMerge
+ */
+static GatherMerge *
+_copyGatherMerge(const GatherMerge *from)
+{
+	GatherMerge	   *newnode = makeNode(GatherMerge);
+
+	/*
+	 * copy node superclass fields
+	 */
+	CopyPlanFields((const Plan *) from, (Plan *) newnode);
+
+	/*
+	 * copy remainder of node
+	 */
+	COPY_SCALAR_FIELD(num_workers);
+	COPY_SCALAR_FIELD(numCols);
+	COPY_POINTER_FIELD(sortColIdx, from->numCols * sizeof(AttrNumber));
+	COPY_POINTER_FIELD(sortOperators, from->numCols * sizeof(Oid));
+	COPY_POINTER_FIELD(collations, from->numCols * sizeof(Oid));
+	COPY_POINTER_FIELD(nullsFirst, from->numCols * sizeof(bool));
+
+	return newnode;
+}
 
 /*
  * CopyScanFields
@@ -4521,6 +4546,9 @@ copyObject(const void *from)
 		case T_Gather:
 			retval = _copyGather(from);
 			break;
+		case T_GatherMerge:
+			retval = _copyGatherMerge(from);
+			break;
 		case T_SeqScan:
 			retval = _copySeqScan(from);
 			break;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 1560ac3..865ab5f 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -457,6 +457,35 @@ _outGather(StringInfo str, const Gather *node)
 }
 
 static void
+_outGatherMerge(StringInfo str, const GatherMerge *node)
+{
+	int		i;
+
+	WRITE_NODE_TYPE("GATHERMERGE");
+
+	_outPlanInfo(str, (const Plan *) node);
+
+	WRITE_INT_FIELD(num_workers);
+	WRITE_INT_FIELD(numCols);
+
+	appendStringInfoString(str, " :sortColIdx");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %d", node->sortColIdx[i]);
+
+	appendStringInfoString(str, " :sortOperators");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %u", node->sortOperators[i]);
+
+	appendStringInfoString(str, " :collations");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %u", node->collations[i]);
+
+	appendStringInfoString(str, " :nullsFirst");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %s", booltostr(node->nullsFirst[i]));
+}
+
+static void
 _outScan(StringInfo str, const Scan *node)
 {
 	WRITE_NODE_TYPE("SCAN");
@@ -1984,6 +2013,17 @@ _outLimitPath(StringInfo str, const LimitPath *node)
 }
 
 static void
+_outGatherMergePath(StringInfo str, const GatherMergePath *node)
+{
+	WRITE_NODE_TYPE("GATHERMERGEPATH");
+
+	_outPathInfo(str, (const Path *) node);
+
+	WRITE_NODE_FIELD(subpath);
+	WRITE_INT_FIELD(num_workers);
+}
+
+static void
 _outNestPath(StringInfo str, const NestPath *node)
 {
 	WRITE_NODE_TYPE("NESTPATH");
@@ -3409,6 +3449,9 @@ outNode(StringInfo str, const void *obj)
 			case T_Gather:
 				_outGather(str, obj);
 				break;
+			case T_GatherMerge:
+				_outGatherMerge(str, obj);
+				break;
 			case T_Scan:
 				_outScan(str, obj);
 				break;
@@ -3739,6 +3782,9 @@ outNode(StringInfo str, const void *obj)
 			case T_LimitPath:
 				_outLimitPath(str, obj);
 				break;
+			case T_GatherMergePath:
+				_outGatherMergePath(str, obj);
+				break;
 			case T_NestPath:
 				_outNestPath(str, obj);
 				break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index dcfa6ee..8dabde6 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -2095,6 +2095,26 @@ _readGather(void)
 }
 
 /*
+ * _readGatherMerge
+ */
+static GatherMerge *
+_readGatherMerge(void)
+{
+	READ_LOCALS(GatherMerge);
+
+	ReadCommonPlan(&local_node->plan);
+
+	READ_INT_FIELD(num_workers);
+	READ_INT_FIELD(numCols);
+	READ_ATTRNUMBER_ARRAY(sortColIdx, local_node->numCols);
+	READ_OID_ARRAY(sortOperators, local_node->numCols);
+	READ_OID_ARRAY(collations, local_node->numCols);
+	READ_BOOL_ARRAY(nullsFirst, local_node->numCols);
+
+	READ_DONE();
+}
+
+/*
  * _readHash
  */
 static Hash *
@@ -2529,6 +2549,8 @@ parseNodeString(void)
 		return_value = _readUnique();
 	else if (MATCH("GATHER", 6))
 		return_value = _readGather();
+	else if (MATCH("GATHERMERGE", 11))
+		return_value = _readGatherMerge();
 	else if (MATCH("HASH", 4))
 		return_value = _readHash();
 	else if (MATCH("SETOP", 5))
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index 5c18987..824f09e 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -2047,39 +2047,51 @@ set_worktable_pathlist(PlannerInfo *root, RelOptInfo *rel, RangeTblEntry *rte)
 
 /*
  * generate_gather_paths
- *		Generate parallel access paths for a relation by pushing a Gather on
- *		top of a partial path.
+ *		Generate parallel access paths for a relation by pushing a Gather or
+ *		Gather Merge on top of a partial path.
  *
  * This must not be called until after we're done creating all partial paths
  * for the specified relation.  (Otherwise, add_partial_path might delete a
- * path that some GatherPath has a reference to.)
+ * path that some GatherPath or GatherMergePath has a reference to.)
  */
 void
 generate_gather_paths(PlannerInfo *root, RelOptInfo *rel)
 {
 	Path	   *cheapest_partial_path;
 	Path	   *simple_gather_path;
+	ListCell   *lc;
 
 	/* If there are no partial paths, there's nothing to do here. */
 	if (rel->partial_pathlist == NIL)
 		return;
 
 	/*
-	 * The output of Gather is currently always unsorted, so there's only one
-	 * partial path of interest: the cheapest one.  That will be the one at
-	 * the front of partial_pathlist because of the way add_partial_path
-	 * works.
-	 *
-	 * Eventually, we should have a Gather Merge operation that can merge
-	 * multiple tuple streams together while preserving their ordering.  We
-	 * could usefully generate such a path from each partial path that has
-	 * non-NIL pathkeys.
+	 * The output of Gather is always unsorted, so there's only one partial
+	 * path of interest: the cheapest one.  That will be the one at the front
+	 * of partial_pathlist because of the way add_partial_path works.
 	 */
 	cheapest_partial_path = linitial(rel->partial_pathlist);
 	simple_gather_path = (Path *)
 		create_gather_path(root, rel, cheapest_partial_path, rel->reltarget,
 						   NULL, NULL);
 	add_path(rel, simple_gather_path);
+
+	/*
+	 * For each useful ordering, we can consider an order-preserving Gather
+	 * Merge.
+	 */
+	foreach (lc, rel->partial_pathlist)
+	{
+		Path   *subpath = (Path *) lfirst(lc);
+		GatherMergePath   *path;
+
+		if (subpath->pathkeys == NIL)
+			continue;
+
+		path = create_gather_merge_path(root, rel, subpath, rel->reltarget,
+										subpath->pathkeys, NULL, NULL);
+		add_path(rel, &path->path);
+	}
 }
 
 /*
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index a43daa7..832d0ae 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -126,6 +126,7 @@ bool		enable_nestloop = true;
 bool		enable_material = true;
 bool		enable_mergejoin = true;
 bool		enable_hashjoin = true;
+bool		enable_gathermerge = true;
 
 typedef struct
 {
@@ -373,6 +374,73 @@ cost_gather(GatherPath *path, PlannerInfo *root,
 }
 
 /*
+ * cost_gather_merge
+ *	  Determines and returns the cost of a gather merge path.
+ *
+ * GatherMerge merges several pre-sorted input streams, using a heap that at
+ * any given instant holds the next tuple from each stream. If there are N
+ * streams, we need about N*log2(N) tuple comparisons to construct the heap at
+ * startup, and then for each output tuple, about log2(N) comparisons to
+ * replace the top heap entry with the next tuple from the same stream.
+ */
+void
+cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+				  RelOptInfo *rel, ParamPathInfo *param_info,
+				  Cost input_startup_cost, Cost input_total_cost,
+				  double *rows)
+{
+	Cost		startup_cost = 0;
+	Cost		run_cost = 0;
+	Cost		comparison_cost;
+	double		N;
+	double		logN;
+
+	/* Mark the path with the correct row estimate */
+	if (rows)
+		path->path.rows = *rows;
+	else if (param_info)
+		path->path.rows = param_info->ppi_rows;
+	else
+		path->path.rows = rel->rows;
+
+	if (!enable_gathermerge)
+		startup_cost += disable_cost;
+
+	/*
+	 * Add one to the number of workers to account for the leader.  This might
+	 * be overgenerous since the leader will do less work than other workers
+	 * in typical cases, but we'll go with it for now.
+	 */
+	Assert(path->num_workers > 0);
+	N = (double) path->num_workers + 1;
+	logN = LOG2(N);
+
+	/* Assumed cost per tuple comparison */
+	comparison_cost = 2.0 * cpu_operator_cost;
+
+	/* Heap creation cost */
+	startup_cost += comparison_cost * N * logN;
+
+	/* Per-tuple heap maintenance cost */
+	run_cost += path->path.rows * comparison_cost * logN;
+
+	/* small cost for heap management, like cost_merge_append */
+	run_cost += cpu_operator_cost * path->path.rows;
+
+	/*
+	 * Parallel setup and communication cost.  Since Gather Merge, unlike
+	 * Gather, requires us to block until a tuple is available from every
+	 * worker, we bump the IPC cost up a little bit as compared with Gather.
+	 * For lack of a better idea, charge an extra 5%.
+	 */
+	startup_cost += parallel_setup_cost;
+	run_cost += parallel_tuple_cost * path->path.rows * 1.05;
+
+	path->path.startup_cost = startup_cost + input_startup_cost;
+	path->path.total_cost = (startup_cost + run_cost + input_total_cost);
+}
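
(Aside, not part of the patch: a standalone back-of-the-envelope check of the
formula above.  The GUC values are the defaults as far as I know
(cpu_operator_cost = 0.0025, parallel_tuple_cost = 0.1, parallel_setup_cost = 1000),
and the row and worker counts are assumptions picked purely for illustration.)

#include <math.h>
#include <stdio.h>

int
main(void)
{
	double		cpu_operator_cost = 0.0025;
	double		parallel_tuple_cost = 0.1;
	double		parallel_setup_cost = 1000.0;
	double		rows = 200000.0;	/* assumed number of output rows */
	double		N = 4 + 1;			/* 4 workers plus the leader */
	double		logN = log2(N);
	double		comparison_cost = 2.0 * cpu_operator_cost;
	double		startup;
	double		run;

	/* heap creation plus parallel setup */
	startup = comparison_cost * N * logN + parallel_setup_cost;

	/* per-tuple heap maintenance, heap management, and IPC with the 5% bump */
	run = rows * (comparison_cost * logN
				  + cpu_operator_cost
				  + parallel_tuple_cost * 1.05);

	printf("startup %.2f, run %.2f\n", startup, run);
	return 0;
}

With those inputs the per-row charge works out to roughly 0.12, of which the
parallel_tuple_cost term is about 88%, so the IPC charge dominates the
comparison terms.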
+
+/*
  * cost_index
  *	  Determines and returns the cost of scanning a relation using an index.
  *
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index fae1f67..f3c6391 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -272,6 +272,8 @@ static ModifyTable *make_modifytable(PlannerInfo *root,
 				 List *resultRelations, List *subplans,
 				 List *withCheckOptionLists, List *returningLists,
 				 List *rowMarks, OnConflictExpr *onconflict, int epqParam);
+static GatherMerge *create_gather_merge_plan(PlannerInfo *root,
+						 GatherMergePath *best_path);
 
 
 /*
@@ -469,6 +471,10 @@ create_plan_recurse(PlannerInfo *root, Path *best_path, int flags)
 											  (LimitPath *) best_path,
 											  flags);
 			break;
+		case T_GatherMerge:
+			plan = (Plan *) create_gather_merge_plan(root,
+											  (GatherMergePath *) best_path);
+			break;
 		default:
 			elog(ERROR, "unrecognized node type: %d",
 				 (int) best_path->pathtype);
@@ -1439,6 +1445,86 @@ create_gather_plan(PlannerInfo *root, GatherPath *best_path)
 }
 
 /*
+ * create_gather_merge_plan
+ *
+ *	  Create a Gather Merge plan for 'best_path' and (recursively)
+ *	  plans for its subpaths.
+ */
+static GatherMerge *
+create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
+{
+	GatherMerge *gm_plan;
+	Plan	   *subplan;
+	List	   *pathkeys = best_path->path.pathkeys;
+	int			numsortkeys;
+	AttrNumber *sortColIdx;
+	Oid		   *sortOperators;
+	Oid		   *collations;
+	bool	   *nullsFirst;
+
+	/* As with Gather, it's best to project away columns in the workers. */
+	subplan = create_plan_recurse(root, best_path->subpath, CP_EXACT_TLIST);
+
+	/* See create_merge_append_plan for why there's no make_xxx function */
+	gm_plan = makeNode(GatherMerge);
+	gm_plan->plan.targetlist = subplan->targetlist;
+	gm_plan->num_workers = best_path->num_workers;
+	copy_generic_path_info(&gm_plan->plan, &best_path->path);
+
+	/* Gather Merge is pointless with no pathkeys; use Gather instead. */
+	Assert(pathkeys != NIL);
+
+	/* Compute sort column info, and adjust GatherMerge tlist as needed */
+	(void) prepare_sort_from_pathkeys(&gm_plan->plan, pathkeys,
+									  best_path->path.parent->relids,
+									  NULL,
+									  true,
+									  &gm_plan->numCols,
+									  &gm_plan->sortColIdx,
+									  &gm_plan->sortOperators,
+									  &gm_plan->collations,
+									  &gm_plan->nullsFirst);
+
+
+	/* Compute sort column info, and adjust subplan's tlist as needed */
+	subplan = prepare_sort_from_pathkeys(subplan, pathkeys,
+										 best_path->subpath->parent->relids,
+										 gm_plan->sortColIdx,
+										 false,
+										 &numsortkeys,
+										 &sortColIdx,
+										 &sortOperators,
+										 &collations,
+										 &nullsFirst);
+
+	/* As for MergeAppend, check that we got the same sort key information. */
+	Assert(numsortkeys == gm_plan->numCols);
+	if (memcmp(sortColIdx, gm_plan->sortColIdx,
+			   numsortkeys * sizeof(AttrNumber)) != 0)
+		elog(ERROR, "GatherMerge child's targetlist doesn't match GatherMerge");
+	Assert(memcmp(sortOperators, gm_plan->sortOperators,
+				  numsortkeys * sizeof(Oid)) == 0);
+	Assert(memcmp(collations, gm_plan->collations,
+				  numsortkeys * sizeof(Oid)) == 0);
+	Assert(memcmp(nullsFirst, gm_plan->nullsFirst,
+				  numsortkeys * sizeof(bool)) == 0);
+
+	/* Now, insert a Sort node if subplan isn't sufficiently ordered */
+	if (!pathkeys_contained_in(pathkeys, best_path->subpath->pathkeys))
+		subplan = (Plan *) make_sort(subplan, numsortkeys,
+									 sortColIdx, sortOperators,
+									 collations, nullsFirst);
+
+	/* Now insert the subplan under GatherMerge. */
+	gm_plan->plan.lefttree = subplan;
+
+	/* use parallel mode for parallel plans. */
+	root->glob->parallelModeNeeded = true;
+
+	return gm_plan;
+}
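
(Aside, not part of the patch: if the chosen subpath already delivers the
pathkeys -- say, a hypothetical parallel scan that is ordered on the sort
column -- the pathkeys_contained_in() test above succeeds and no Sort is
inserted, so the shape is simply

Gather Merge
   -> some already-ordered parallel scan

whereas an unordered subpath gets a Sort pushed beneath the Gather Merge, as
in the plans shown earlier in the thread.)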
+
+/*
  * create_projection_plan
  *
  *	  Create a plan tree to do a projection step and (recursively) plans
@@ -2277,7 +2363,6 @@ create_limit_plan(PlannerInfo *root, LimitPath *best_path, int flags)
 	return plan;
 }
 
-
 /*****************************************************************************
  *
  *	BASE-RELATION SCAN METHODS
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 4b5902f..6e408cd 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -3712,8 +3712,7 @@ create_grouping_paths(PlannerInfo *root,
 
 		/*
 		 * Now generate a complete GroupAgg Path atop of the cheapest partial
-		 * path. We need only bother with the cheapest path here, as the
-		 * output of Gather is never sorted.
+		 * path.  We can do this using either Gather or Gather Merge.
 		 */
 		if (grouped_rel->partial_pathlist)
 		{
@@ -3760,6 +3759,70 @@ create_grouping_paths(PlannerInfo *root,
 										   parse->groupClause,
 										   (List *) parse->havingQual,
 										   dNumGroups));
+
+			/*
+			 * The point of using Gather Merge rather than Gather is that it
+			 * can preserve the ordering of the input path, so there's no
+			 * reason to try it unless (1) it's possible to produce more than
+			 * one output row and (2) we want the output path to be ordered.
+			 */
+			if (parse->groupClause != NIL && root->group_pathkeys != NIL)
+			{
+				foreach(lc, grouped_rel->partial_pathlist)
+				{
+					Path	   *subpath = (Path *) lfirst(lc);
+					Path	   *gmpath;
+					double		total_groups;
+
+					/*
+					 * It's useful to consider paths that are already properly
+					 * ordered for Gather Merge, because those don't need a
+					 * sort.  It's also useful to consider the cheapest path,
+					 * because sorting it in parallel and then doing Gather
+					 * Merge may be better than doing an unordered Gather
+					 * followed by a sort.  But there's no point in
+					 * considering non-cheapest paths that aren't already
+					 * sorted correctly.
+					 */
+					if (path != subpath &&
+						!pathkeys_contained_in(root->group_pathkeys,
+											   subpath->pathkeys))
+						continue;
+
+					total_groups = subpath->rows * subpath->parallel_workers;
+
+					gmpath = (Path *)
+						create_gather_merge_path(root,
+												 grouped_rel,
+												 subpath,
+												 NULL,
+												 root->group_pathkeys,
+												 NULL,
+												 &total_groups);
+
+					if (parse->hasAggs)
+						add_path(grouped_rel, (Path *)
+								 create_agg_path(root,
+												 grouped_rel,
+												 gmpath,
+												 target,
+								 parse->groupClause ? AGG_SORTED : AGG_PLAIN,
+												 AGGSPLIT_FINAL_DESERIAL,
+												 parse->groupClause,
+												 (List *) parse->havingQual,
+												 &agg_final_costs,
+												 dNumGroups));
+					else
+						add_path(grouped_rel, (Path *)
+								 create_group_path(root,
+												   grouped_rel,
+												   gmpath,
+												   target,
+												   parse->groupClause,
+												   (List *) parse->havingQual,
+												   dNumGroups));
+				}
+			}
 		}
 	}
 
@@ -3857,6 +3920,16 @@ create_grouping_paths(PlannerInfo *root,
 	/* Now choose the best path(s) */
 	set_cheapest(grouped_rel);
 
+	/*
+	 * We've been using the partial pathlist for the grouped relation to hold
+	 * partially aggregated paths, but that's actually a little bit bogus
+	 * because it's unsafe for later planning stages -- like ordered_rel --
+	 * to get the idea that they can use these partial paths as if they didn't
+	 * need a FinalizeAggregate step.  Zap the partial pathlist at this stage
+	 * so we don't get confused.
+	 */
+	grouped_rel->partial_pathlist = NIL;
+
 	return grouped_rel;
 }
 
@@ -4326,6 +4399,56 @@ create_ordered_paths(PlannerInfo *root,
 	}
 
 	/*
+	 * generate_gather_paths() will have already generated a simple Gather
+	 * path for the best parallel path, if any, and the loop above will have
+	 * considered sorting it.  Similarly, generate_gather_paths() will also
+	 * have generated order-preserving Gather Merge plans which can be used
+	 * without sorting if they happen to match the sort_pathkeys, and the loop
+	 * above will have handled those as well.  However, there's one more
+	 * possibility: it may make sense to sort the cheapest partial path
+	 * according to the required output order and then use Gather Merge.
+	 */
+	if (ordered_rel->consider_parallel && root->sort_pathkeys != NIL &&
+		input_rel->partial_pathlist != NIL)
+	{
+		Path	   *cheapest_partial_path;
+
+		cheapest_partial_path = linitial(input_rel->partial_pathlist);
+
+		/*
+		 * If cheapest partial path doesn't need a sort, this is redundant
+		 * with what's already been tried.
+		 */
+		if (!pathkeys_contained_in(root->sort_pathkeys,
+								   cheapest_partial_path->pathkeys))
+		{
+			Path	   *path;
+			double		total_groups;
+
+			path = (Path *) create_sort_path(root,
+											 ordered_rel,
+											 cheapest_partial_path,
+											 root->sort_pathkeys,
+											 limit_tuples);
+
+			total_groups = cheapest_partial_path->rows *
+				cheapest_partial_path->parallel_workers;
+			path = (Path *)
+				create_gather_merge_path(root, ordered_rel,
+										 path,
+										 target, root->sort_pathkeys, NULL,
+										 &total_groups);
+
+			/* Add projection step if needed */
+			if (path->pathtarget != target)
+				path = apply_projection_to_path(root, ordered_rel,
+												path, target);
+
+			add_path(ordered_rel, path);
+		}
+	}
+
+	/*
 	 * If there is an FDW that's responsible for all baserels of the query,
 	 * let it consider adding ForeignPaths.
 	 */
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index be267b9..cc1c66e 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -604,6 +604,7 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 			break;
 
 		case T_Gather:
+		case T_GatherMerge:
 			set_upper_references(root, plan, rtoffset);
 			break;
 
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 9fc7489..a0c0cd8 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2686,6 +2686,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		case T_Sort:
 		case T_Unique:
 		case T_Gather:
+		case T_GatherMerge:
 		case T_SetOp:
 		case T_Group:
 			/* no node-type-specific fields need fixing */
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index f440875..29aaa73 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1630,6 +1630,66 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
 }
 
 /*
+ * create_gather_merge_path
+ *
+ *	  Creates a path corresponding to a gather merge scan, returning
+ *	  the pathnode.
+ */
+GatherMergePath *
+create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
+						 PathTarget *target, List *pathkeys,
+						 Relids required_outer, double *rows)
+{
+	GatherMergePath *pathnode = makeNode(GatherMergePath);
+	Cost			 input_startup_cost = 0;
+	Cost			 input_total_cost = 0;
+
+	Assert(subpath->parallel_safe);
+	Assert(pathkeys);
+
+	pathnode->path.pathtype = T_GatherMerge;
+	pathnode->path.parent = rel;
+	pathnode->path.param_info = get_baserel_parampathinfo(root, rel,
+														  required_outer);
+	pathnode->path.parallel_aware = false;
+
+	pathnode->subpath = subpath;
+	pathnode->num_workers = subpath->parallel_workers;
+	pathnode->path.pathkeys = pathkeys;
+	pathnode->path.pathtarget = target ? target : rel->reltarget;
+	pathnode->path.rows += subpath->rows;
+
+	if (pathkeys_contained_in(pathkeys, subpath->pathkeys))
+	{
+		/* Subpath is adequately ordered, we won't need to sort it */
+		input_startup_cost += subpath->startup_cost;
+		input_total_cost += subpath->total_cost;
+	}
+	else
+	{
+		/* We'll need to insert a Sort node, so include cost for that */
+		Path		sort_path;		/* dummy for result of cost_sort */
+
+		cost_sort(&sort_path,
+				  root,
+				  pathkeys,
+				  subpath->total_cost,
+				  subpath->rows,
+				  subpath->pathtarget->width,
+				  0.0,
+				  work_mem,
+				  -1);
+		input_startup_cost += sort_path.startup_cost;
+		input_total_cost += sort_path.total_cost;
+	}
+
+	cost_gather_merge(pathnode, root, rel, pathnode->path.param_info,
+					  input_startup_cost, input_total_cost, rows);
+
+	return pathnode;
+}
+
+/*
  * translate_sub_tlist - get subquery column numbers represented by tlist
  *
  * The given targetlist usually contains only Vars referencing the given relid.
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 74ca4e7..0a110d8 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -895,6 +895,15 @@ static struct config_bool ConfigureNamesBool[] =
 		true,
 		NULL, NULL, NULL
 	},
+	{
+		{"enable_gathermerge", PGC_USERSET, QUERY_TUNING_METHOD,
+			gettext_noop("Enables the planner's use of gather merge plans."),
+			NULL
+		},
+		&enable_gathermerge,
+		true,
+		NULL, NULL, NULL
+	},
 
 	{
 		{"geqo", PGC_USERSET, QUERY_TUNING_GEQO,
diff --git a/src/include/executor/nodeGatherMerge.h b/src/include/executor/nodeGatherMerge.h
new file mode 100644
index 0000000..3c8b42b
--- /dev/null
+++ b/src/include/executor/nodeGatherMerge.h
@@ -0,0 +1,27 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.h
+ *		prototypes for nodeGatherMerge.c
+ *
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/executor/nodeGatherMerge.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef NODEGATHERMERGE_H
+#define NODEGATHERMERGE_H
+
+#include "nodes/execnodes.h"
+
+extern GatherMergeState *ExecInitGatherMerge(GatherMerge *node,
+					EState *estate,
+					int eflags);
+extern TupleTableSlot *ExecGatherMerge(GatherMergeState *node);
+extern void ExecEndGatherMerge(GatherMergeState *node);
+extern void ExecReScanGatherMerge(GatherMergeState *node);
+extern void ExecShutdownGatherMerge(GatherMergeState *node);
+
+#endif   /* NODEGATHERMERGE_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index f9bcdd6..f4dfb7a 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -2004,6 +2004,35 @@ typedef struct GatherState
 } GatherState;
 
 /* ----------------
+ * GatherMergeState information
+ *
+ *		Gather merge nodes launch 1 or more parallel workers, run a
+ *		subplan which produces sorted output in each worker, and then
+ *		merge the results into a single sorted stream.
+ * ----------------
+ */
+struct GMReaderTuple;
+
+typedef struct GatherMergeState
+{
+	PlanState	ps;				/* its first field is NodeTag */
+	bool		initialized;
+	struct ParallelExecutorInfo *pei;
+	int			nreaders;
+	int			nworkers_launched;
+	struct TupleQueueReader **reader;
+	TupleDesc	tupDesc;
+	TupleTableSlot **gm_slots;
+	struct binaryheap *gm_heap; /* binary heap of slot indices */
+	bool		gm_initialized; /* gather merge initialized? */
+	bool		need_to_scan_locally;
+	int			gm_nkeys;
+	SortSupport gm_sortkeys;	/* array of length gm_nkeys */
+	struct GMReaderTupleBuffer *gm_tuple_buffers;		/* tuple buffer per
+														 * reader */
+} GatherMergeState;
+
+/* ----------------
  *	 HashState information
  * ----------------
  */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 95dd8ba..3530e41 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -76,6 +76,7 @@ typedef enum NodeTag
 	T_WindowAgg,
 	T_Unique,
 	T_Gather,
+	T_GatherMerge,
 	T_Hash,
 	T_SetOp,
 	T_LockRows,
@@ -125,6 +126,7 @@ typedef enum NodeTag
 	T_WindowAggState,
 	T_UniqueState,
 	T_GatherState,
+	T_GatherMergeState,
 	T_HashState,
 	T_SetOpState,
 	T_LockRowsState,
@@ -246,6 +248,7 @@ typedef enum NodeTag
 	T_MaterialPath,
 	T_UniquePath,
 	T_GatherPath,
+	T_GatherMergePath,
 	T_ProjectionPath,
 	T_ProjectSetPath,
 	T_SortPath,
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index f72f7a8..8dbce7a 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -785,6 +785,22 @@ typedef struct Gather
 	bool		invisible;		/* suppress EXPLAIN display (for testing)? */
 } Gather;
 
+/* ------------
+ *		gather merge node
+ * ------------
+ */
+typedef struct GatherMerge
+{
+	Plan		plan;
+	int			num_workers;
+	/* remaining fields are just like the sort-key info in struct Sort */
+	int			numCols;		/* number of sort-key columns */
+	AttrNumber *sortColIdx;		/* their indexes in the target list */
+	Oid		   *sortOperators;	/* OIDs of operators to sort them by */
+	Oid		   *collations;		/* OIDs of collations */
+	bool	   *nullsFirst;		/* NULLS FIRST/LAST directions */
+} GatherMerge;
+
 /* ----------------
  *		hash build node
  *
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index 643be54..291318e 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -1203,6 +1203,19 @@ typedef struct GatherPath
 } GatherPath;
 
 /*
+ * GatherMergePath runs several copies of a plan in parallel and collects
+ * the results, preserving their sort order.  For Gather Merge, the parallel
+ * leader always executes the plan as well.
+ */
+typedef struct GatherMergePath
+{
+	Path		path;
+	Path	   *subpath;		/* path for each worker */
+	int			num_workers;	/* number of workers sought to help */
+} GatherMergePath;
+
+
+/*
  * All join-type paths share these fields.
  */
 
diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h
index 0e68264..0856926 100644
--- a/src/include/optimizer/cost.h
+++ b/src/include/optimizer/cost.h
@@ -66,6 +66,7 @@ extern bool enable_nestloop;
 extern bool enable_material;
 extern bool enable_mergejoin;
 extern bool enable_hashjoin;
+extern bool enable_gathermerge;
 extern int	constraint_exclusion;
 
 extern double clamp_row_est(double nrows);
@@ -200,5 +201,9 @@ extern Selectivity clause_selectivity(PlannerInfo *root,
 				   int varRelid,
 				   JoinType jointype,
 				   SpecialJoinInfo *sjinfo);
+extern void cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+							  RelOptInfo *rel, ParamPathInfo *param_info,
+							  Cost input_startup_cost, Cost input_total_cost,
+							  double *rows);
 
 #endif   /* COST_H */
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index 7b41317..e0ab894 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -76,6 +76,13 @@ extern UniquePath *create_unique_path(PlannerInfo *root, RelOptInfo *rel,
 extern GatherPath *create_gather_path(PlannerInfo *root,
 				   RelOptInfo *rel, Path *subpath, PathTarget *target,
 				   Relids required_outer, double *rows);
+extern GatherMergePath *create_gather_merge_path(PlannerInfo *root,
+												 RelOptInfo *rel,
+												 Path *subpath,
+												 PathTarget *target,
+												 List *pathkeys,
+												 Relids required_outer,
+												 double *rows);
 extern SubqueryScanPath *create_subqueryscan_path(PlannerInfo *root,
 						 RelOptInfo *rel, Path *subpath,
 						 List *pathkeys, Relids required_outer);
diff --git a/src/test/regress/expected/sysviews.out b/src/test/regress/expected/sysviews.out
index d48abd7..568b783 100644
--- a/src/test/regress/expected/sysviews.out
+++ b/src/test/regress/expected/sysviews.out
@@ -73,6 +73,7 @@ select name, setting from pg_settings where name like 'enable%';
          name         | setting 
 ----------------------+---------
  enable_bitmapscan    | on
+ enable_gathermerge   | on
  enable_hashagg       | on
  enable_hashjoin      | on
  enable_indexonlyscan | on
@@ -83,7 +84,7 @@ select name, setting from pg_settings where name like 'enable%';
  enable_seqscan       | on
  enable_sort          | on
  enable_tidscan       | on
-(11 rows)
+(12 rows)
 
 -- Test that the pg_timezone_names and pg_timezone_abbrevs views are
 -- more-or-less working.  We can't test their contents in any great detail
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index c4235ae..7251e2c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -777,6 +777,9 @@ GV
 Gather
+GatherMerge
+GatherMergePath
+GatherMergeState
 GatherPath
 GatherState
 Gene
 GenericCosts
 GenericExprState
#34Neha Sharma
neha.sharma@enterprisedb.com
In reply to: Rushabh Lathia (#33)
2 attachment(s)
Re: Gather Merge

Hi,

I have done some testing with the latest patch:

1) ./pgbench postgres -i -F 100 -s 20
2) update pgbench_accounts set filler = 'foo' where aid%10 = 0;
3) vacuum analyze pgbench_accounts;
4) set max_parallel_workers_per_gather = 4;
5) set max_parallel_workers = 4;

Machine Configuration:
RAM :- 16GB
VCPU :- 8
Disk :- 640 GB

Test case script with out-file attached.

LCOV Report:

                                           Line coverage        Function coverage
File                                       w/o tests  w/ tests  w/o tests  w/ tests
src/backend/executor/nodeGatherMerge.c        0.0 %     92.3 %     0.0 %     92.3 %
src/backend/commands/explain.c               65.5 %     68.4 %    81.7 %     85.0 %
src/backend/executor/execProcnode.c          92.5 %     95.1 %   100.0 %    100.0 %
src/backend/nodes/copyfuncs.c                77.2 %     77.6 %    73.0 %     73.4 %
src/backend/nodes/outfuncs.c                 32.5 %     35.9 %    31.9 %     36.2 %
src/backend/nodes/readfuncs.c                62.7 %     68.2 %    53.3 %     61.7 %
src/backend/optimizer/path/allpaths.c        93.0 %     93.4 %   100.0 %    100.0 %
src/backend/optimizer/path/costsize.c        96.7 %     96.8 %   100.0 %    100.0 %
src/backend/optimizer/plan/createplan.c      89.9 %     91.2 %    95.0 %     96.0 %
src/backend/optimizer/plan/planner.c         95.1 %     95.2 %    97.3 %     97.3 %
src/backend/optimizer/plan/setrefs.c         94.7 %     94.7 %    97.1 %     97.1 %
src/backend/optimizer/plan/subselect.c       94.1 %     94.1 %   100.0 %    100.0 %
src/backend/optimizer/util/pathnode.c        95.6 %     96.1 %   100.0 %    100.0 %
src/backend/utils/misc/guc.c                 67.4 %     67.4 %    91.9 %     91.9 %

On Wed, Feb 1, 2017 at 7:02 PM, Rushabh Lathia <rushabh.lathia@gmail.com>
wrote:

Due to recent below commit, patch not getting apply cleanly on
master branch.

commit d002f16c6ec38f76d1ee97367ba6af3000d441d0
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Mon Jan 30 17:15:42 2017 -0500

Add a regression test script dedicated to exercising system views.

Please find attached latest patch.

On Wed, Feb 1, 2017 at 5:55 PM, Rushabh Lathia <rushabh.lathia@gmail.com>
wrote:

I am sorry for the delay, here is the latest re-based patch.

my colleague Neha Sharma, reported one regression with the patch, where
explain output for the Sort node under GatherMerge was always showing
cost as zero:

explain analyze select '' AS "xxx" from pgbench_accounts where filler
like '%foo%' order by aid;
QUERY
PLAN
------------------------------------------------------------
------------------------------------------------------------
------------------------
Gather Merge (cost=47169.81..70839.91 rows=197688 width=36) (actual
time=406.297..653.572 rows=200000 loops=1)
Workers Planned: 4
Workers Launched: 4
-> Sort (*cost=0.00..0.00 rows=0 width=0*) (actual
time=368.945..391.124 rows=40000 loops=5)
Sort Key: aid
Sort Method: quicksort Memory: 3423kB
-> Parallel Seq Scan on pgbench_accounts (cost=0.00..42316.60
rows=49422 width=36) (actual time=296.612..338.873 rows=40000 loops=5)
Filter: (filler ~~ '%foo%'::text)
Rows Removed by Filter: 360000
Planning time: 0.184 ms
Execution time: 734.963 ms

This patch also fix that issue.

On Wed, Feb 1, 2017 at 11:27 AM, Michael Paquier <
michael.paquier@gmail.com> wrote:

On Mon, Jan 23, 2017 at 6:51 PM, Kuntal Ghosh
<kuntalghosh.2007@gmail.com> wrote:

On Wed, Jan 18, 2017 at 11:31 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

The patch needs a rebase after the commit 69f4b9c85f168ae006929eec4.

Is an update going to be provided? I have moved this patch to next CF
with "waiting on author" as status.
--
Michael

--
Rushabh Lathia

--
Rushabh Lathia


--

Regards,

Neha Sharma

Attachments:

gather_merge_functional_test_cases.sql
gather_merge_functional_test_cases.out
#35Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Neha Sharma (#34)
2 attachment(s)
Re: Gather Merge

Thanks Neha for the test LCOV report.

I ran TPC-H at scale factor 10 with the latest patch, on top of the latest
master code as of 1st Feb (f1169ab501ce90e035a7c6489013a1d4c250ac92).

- max_worker_processes = DEFAULT (8)
- max_parallel_workers_per_gather = 4
- Cold-cache environment ensured: before every query execution the server
  was stopped and OS caches were dropped.
- power2 machine with 512GB of RAM

Here are the results. I did three runs and took the median; the first
timing is without the patch and the second is with Gather Merge.

Query 3: 45035.425 - 43935.497
Query 4: 7098.259 - 6651.498
Query 5: 37114.338 - 37605.579
Query 9: 87544.144 - 44617.138
Query 10: 43810.497 - 37133.404
Query 12: 20309.993 - 19639.213
Query 15: 61837.415 - 60240.762
Query 17: 134121.961 - 116943.542
Query 18: 248157.735 - 193463.311
Query 20: 203448.405 - 166733.112

Also attaching the output of those TPCH runs.

On Fri, Feb 3, 2017 at 5:56 PM, Neha Sharma <neha.sharma@enterprisedb.com>
wrote:

Hi,

I have done some testing with the latest patch

1)./pgbench postgres -i -F 100 -s 20
2) update pgbench_accounts set filler = 'foo' where aid%10 = 0;
3) vacuum analyze pgbench_accounts;
4) set max_parallel_workers_per_gather = 4;
5) set max_parallel_workers = 4;

*Machine Configuration :-*
RAM :- 16GB
VCPU :- 8
Disk :- 640 GB

Test case script with out-file attached.

LCOV Report:

                                           Line coverage        Function coverage
File                                       w/o tests  w/ tests  w/o tests  w/ tests
src/backend/executor/nodeGatherMerge.c        0.0 %     92.3 %     0.0 %     92.3 %
src/backend/commands/explain.c               65.5 %     68.4 %    81.7 %     85.0 %
src/backend/executor/execProcnode.c          92.5 %     95.1 %   100.0 %    100.0 %
src/backend/nodes/copyfuncs.c                77.2 %     77.6 %    73.0 %     73.4 %
src/backend/nodes/outfuncs.c                 32.5 %     35.9 %    31.9 %     36.2 %
src/backend/nodes/readfuncs.c                62.7 %     68.2 %    53.3 %     61.7 %
src/backend/optimizer/path/allpaths.c        93.0 %     93.4 %   100.0 %    100.0 %
src/backend/optimizer/path/costsize.c        96.7 %     96.8 %   100.0 %    100.0 %
src/backend/optimizer/plan/createplan.c      89.9 %     91.2 %    95.0 %     96.0 %
src/backend/optimizer/plan/planner.c         95.1 %     95.2 %    97.3 %     97.3 %
src/backend/optimizer/plan/setrefs.c         94.7 %     94.7 %    97.1 %     97.1 %
src/backend/optimizer/plan/subselect.c       94.1 %     94.1 %   100.0 %    100.0 %
src/backend/optimizer/util/pathnode.c        95.6 %     96.1 %   100.0 %    100.0 %
src/backend/utils/misc/guc.c                 67.4 %     67.4 %    91.9 %     91.9 %

On Wed, Feb 1, 2017 at 7:02 PM, Rushabh Lathia <rushabh.lathia@gmail.com>
wrote:

Due to recent below commit, patch not getting apply cleanly on
master branch.

commit d002f16c6ec38f76d1ee97367ba6af3000d441d0
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Mon Jan 30 17:15:42 2017 -0500

Add a regression test script dedicated to exercising system views.

Please find attached latest patch.

On Wed, Feb 1, 2017 at 5:55 PM, Rushabh Lathia <rushabh.lathia@gmail.com>
wrote:

I am sorry for the delay, here is the latest re-based patch.

my colleague Neha Sharma, reported one regression with the patch, where
explain output for the Sort node under GatherMerge was always showing
cost as zero:

explain analyze select '' AS "xxx" from pgbench_accounts where filler
like '%foo%' order by aid;

QUERY PLAN

------------------------------------------------------------
------------------------------------------------------------
------------------------
Gather Merge (cost=47169.81..70839.91 rows=197688 width=36) (actual
time=406.297..653.572 rows=200000 loops=1)
Workers Planned: 4
Workers Launched: 4
-> Sort (*cost=0.00..0.00 rows=0 width=0*) (actual
time=368.945..391.124 rows=40000 loops=5)
Sort Key: aid
Sort Method: quicksort Memory: 3423kB
-> Parallel Seq Scan on pgbench_accounts (cost=0.00..42316.60
rows=49422 width=36) (actual time=296.612..338.873 rows=40000 loops=5)
Filter: (filler ~~ '%foo%'::text)
Rows Removed by Filter: 360000
Planning time: 0.184 ms
Execution time: 734.963 ms

This patch also fix that issue.

On Wed, Feb 1, 2017 at 11:27 AM, Michael Paquier <
michael.paquier@gmail.com> wrote:

On Mon, Jan 23, 2017 at 6:51 PM, Kuntal Ghosh
<kuntalghosh.2007@gmail.com> wrote:

On Wed, Jan 18, 2017 at 11:31 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

The patch needs a rebase after the commit 69f4b9c85f168ae006929eec4.

Is an update going to be provided? I have moved this patch to next CF
with "waiting on author" as status.
--
Michael

--
Rushabh Lathia

--
Rushabh Lathia


--

Regards,

Neha Sharma

--
Rushabh Lathia

Attachments:

without_gm.tar
with_gm.tar
EP�>r�2-�����L~�]d�.@����>������<�.��|[��L�]	�it��Mt7a!����b)T���4����������C�{Zs�hQZ���[��[�1�=n�F�'5��4��{��|�A�=��H�_�8I���n�OmG��#W���5������ �������6���K���+���z-eE�u�-��b8`zr���2�
?����~c I�:�vA��Pf����&V���e�@�o��J���x�G4`mU�q�5���'��mg���x���$M�d1�>��N�������7���w2��?���\��_Xo�H��W�c �
�3�t�TH�'%&�k� �u��i�G5���+�\)�A�������
�x���������(��X����S��x�rL������`�x'e��������)'i<�G��V��BS������b*�����)�4Z5���Hg��F���}�`���W�WL���q��ef<"��d�S��R�0�$x�8��#��!w����>�X��ON)��q< ��C/��`5L�G�s��-�M�����t�R����E~��,���t�h�f���l{~���`��R�3v����_�����Y�g<HR�I�jA}3$��,���������9�2�[�%���
W�/����f��5z�4	{"����:m�����<fr�����LH�_�)b����|�4?b�C��F���Rc����/)5��@/fJt�b�b�����p��V�%�K��?X�^�
��C/��,�&�����T�<�����#������WV�>���}a�$a	�vJmOP�o.�j���2�����������,�G4i���:����2��$��5��i<`X����
���b�g��3�����x�La���6>|RQ���/-HT]�x������k���7���:������>T�P����h�������&EX�Z���h9�b��xx�`��(����D]&�KE�o��|T	f���>SB������k:�9�]��]E����[��������C��?b�!A�	t���73e<���e�V��p�y�Ef�1�����^p6�^:���y/��\J&J��4���t����t�a�r�9H����M���������0��������
x������1	�6�s�s�n4)�G3��jct�P��$�W��t���<����lt�?��<���@+�k���^*P?5x����V��
����%2�?���s�a-x�o��'���/���x�gG|t�w���Y����I�"��4������@J0���/
��4�7:�y~�U�>��e���2��}@�t!�z��~����W��+��-�v��V'|�l�x
K�U�9E �~+��(�����>����x�>�B/���=������i��`,��2�T��+��}f�H�*�G�8��ot�H`�p�����D�'�������~~�� ofL�M�����E~��_�����|���H/�/���@c���m��:�PJ#=�}0@������
���}��/�{��/��v��I���^P3$a~=�Es�i�7�(��p^�K=zU�����4!^/����dz�<%����A`P�W�������B��.�l��{�SF�0^��E'L��_e|�O"�u�W�����>0%@�J���WX�IG{��_��M���D]���?�~J��U�jtM������>��F�!���,���Q��<�$�����K���Y�E�6�p0og�q��6��V�F��4^��F�p������"��,�30�OU�q+�����P��^��y<
�t����&�iL�>f�1����y<�H���G���0���k0�����E.��C����7�����>�h��W�b������2����|(��{6��q������K5���`���s�1a0>����^���<�3?��[�}w_�7T�q<���sJ�M��x���H&�T�V�������[����a�?�C0�����(�����`T�v$
~��c���G}�+�U���}
��)�_�~��'
�sV��((�6�O}P�����<1�}��o=���~�<��S�e�J���J0eE�mM���q�}u �(O����^�L���{������@�`!�t�K��S���{���c��eL�1&���f-O�#�s����U�V�:����B�A~�$/I!q<�%��g�{���-&�n��b<Xh,
V���V�(��3����>yRF����5fGW���>h�JY��Y�+�����hb"��}��2-����K�U��U�-
z�L�7�-�ZY������;�7R������}`�`�L/���5���<�{�����*�����ma�r	��q�e��0�	Wq~����QR��J�������*��N
fP�����G� L��
��s��@Tm�j���_��\�i������E����_��||��@c`!e{�e/��*�iw<�R�c��6��\����&��a�z�hD�������Tg}����gk��b
����td���s�T{�����M��}.%�����i~(�i8/��{��M��Eb�ai�dq~��}�`������+Z��?��<�X�Dj�Q,���A�KA
;K���_�6:�]~��3���}`�x=��[�Oa�/���O���R85�s���yl���@����!�q]�G��0�s�����5XFZV��7*��/1b(������~~
�#���X/����y���o��A`*�X�
�;��������M�]�M�1���#tr���7Q���n*K�]����{s(�nL���2�P}^�[<X�k����h�.�Lk�05S�Z�/���+�/�[�\V�&9������J�fh�������<]���������X!	+1��Z������Z�k��K�0�����d�`�vo����@��{�,��6���X��������2���kK@�6X��#��������"y��i=X�F����4
�_�B��^-��Lm�������1b������Z�������f�#���IkH_���caTW��@����$`X�d��g`�6|���)X\�!��l��l�����.5�����|+�S[sFh|�jYx�p�Gd�.h4��q;�
a�:t�0���v!(:�T�C��:I����r�2|q��O#=�X'�;�T-Hx������!�a3X��$�	�S�������t�)RY�y�C�DC�96Vp��Q�K�%�z��p��C�~7%
�c%I$���.�e3E�K*F��\��5�
<��i�=�+���+���c�����Z�����u'���:��4����k��o����SY�������X��qA���h<V5���6�2�B�������z�- ��$�X�)f%���5%���@ u#m�.�x���Zd��9C@��R�y�a��2	�$�Z1���"��1���� ��Uq�]
h�� ,ad#l��AH#��
H��`U����I-�G�j'�����
9�>0=�X1V9���6��,l
�;�������_%�A-�{����[�j�2?0���p)���(��2�^�R5��+u	�����@,�)c��8���,d�U����1���}/a��3��Fe���\L��k�����cD���Q��������_5)Q��������[��Z�a�5]t���
���`�'�
�m�������BW�Fb}����	���&�J������p��K���:����cyf� ��������8r.�0�gZ�[��`��jT��F�����]e�UtS�(������NUy1�w��UTS@�u��K�s�f����XQM�m*��{���-�qM�!_�zp�~�dr{�33y���F�y���[�CrE�5p\�.������v�]<����7����-��M>
5�*_����A����������o#z��_��u�|=>�_��AG��9�\.5�&fQ��l�����;���,	i�gg_�M��f�y�P'���.�4p1K0�]t�$����p����`uD�2����6%@�hM
Z;X=1�2X��!��V��&��I�����x���,���\n����:���r�H����y�Xp��)����Q�=��O�w]��f9�[���P�VC�rj�?��+ ��v����:�&�a���e�./�44�A����0��v\D�Z��rT�����N���L}M�u��`
W���w~o��\~F*{^B�`�(:	��j��~g����Y�
gx}P���^�d&��`�G������{��E<��..���v�HcG��Dc�q�1
���2�����p�%�I�q%�p��8[�=�����F��R��~�er�����������7�D���m��<smz�J�~�&*���u5���`�`��2'�o������{G{��4l��uhG���VK������n��_���^��VP�?jA�9U�_���_]���5G�T�2���,�C����aF�0�)�d�Dw\��Y]�_'[������lq����AT�:�f���\�)S.������������I��
�}�Lv�_O^�������&'���{��7���{$]P��3��|o�������m�:p��w���1�����9#$�%��`�������:7^=�i�����3Nw~z}.�?�����;��^���:����������r���r�R��v��_o�z�7�v���L.�.�.P���o�9��w6y���_����������9@��8\39;�OO�N�o��9��}�VA��u ���Zi�`P�v����9��=yR�����_�������9;�>��B6U>+cB�����na��m�o���c)����C�Dq	"���Z�F���C`�q���r�9?��a�����=��b��p�z=j
�w������7�N�~�����^?���{r�������O)R�6SC\`����hF���3��f(q�P�1�W�jE�@�!kk|*�g�<�q}�����O:p�~{hN���;����[L�|x����K��O�o><�^H�{o�Q'�V���l*�K��|qzp|q�x8��=���}Y�����q h�w)Z9��C�����"d1�JI�K�5k��^�~�����w���z�^�N^La�����b�i�q���f9;~�
8:��;)v��J���P��Dzt2�2|���j�8h������
�s	=���lh�__\<{����u n<�/�U����~6���W��Z�[Y�K7�8A@�	0��������'67eI��]��j_3E� G��������������������O���/��:����8<������~e<Z����0��K2���y�F��x)#"�y$|^�q,8��q�!�#���&�Q��:���p���k��q�>y{����z������������Q��c���9�D����P�g��r�s/�������&7B=G0��]jq��PS,�0Kb�F��F�����!����f�������^��x���i_���������6|t
�95�Ei]>����l#w��;�h������������<U*����.=��N��b
S��tI��s�����u�6�G��w��VL�W9p9e�BI��e�8��H)c�[��$�A}?<;`������������-`�5��}������s��$���.��<���������k/6��������_o����&�4�Z���+S)\
m �(�'����B��|���(P�Z�
+�VGvN�^�D��[�S�Tj���YL�c�O	2���'_�m������_}�x�����N����mPm4*�Sj���4���p&�1}�)>�<�>I{�>>0@�h0�I�h2dB�����w�l<��EJ-�b���^�]����������@��^0��������e��Lx��X�F
����cxH��C���5J+���E<��������:H[���C��������O�~�u1�S�*��!������%v�����mTu�pc���`y}*��C^V2��:��#�tz�,(o�65I�����ym����!������������g�T�:���������;w�^������g;'h����3��)�^b�Iz;m��mW�MV�VX2�kS
�*;����/���J^�������g'���yx����;�/�����F��8m*b��M���~�zo{�s����'�!�a�2S�Bp�N����w>�m�_'��N6�?���H=�?�q���7��]���m��`���L�L1-�H�$n�}yr~|�����^�*�>��k^$:�C�]t����/	�(�!�^�5.�������y���	9xuo����������'����/�=S�;{�>yi[0R�H������f�FU�p�-7W@�k�����8\b�����*��
��X�����gl�������{����t�~��/�7������u�u��n��O�i@C��WS���\n�����d:M���
#��B1�O+�_�o���C��*����H�������I�����Mr	������^['_���02�^��o0����L��T7T�(�h�w�����K��lU��%Xs����G�H�����Zv�N����/������������������G�������{o�����eP�11��M�(��
$��#�61nH�G(-Jk�mZ;X����,�������1�`3>������I�c^�J���z��j���*����n��>�&���7�(�^5��"��K�4s{?��zzg�^� ���S��_f���_���CW�@�O�PU�B�S�=�-'�Y���Qb6gt}U*�/F��8��3<I�c�#y&��9�I���|��Y���S�oI����2:?H�1
����h��O���&7UA��oG�M�i}�(��\s\~�1�9@��?te�*d�y{��v�$���'���8k�W���hr|8�@�'��l���	}#�d5�[l <����O7(3�A�4=�V��<g�� J?�m��bk���1�n�c������J�)��	����f����X�W���d%���rp����9��Q|� �1</s�wSj��4�����Z����'>�����PTvd�����A)��1��:Re'���Z7�&\Z����8�/�hZ��g�`�i�8��%B�����.,�uZ�����nr�R@R&0j9�d-+7��&��W�(��b�b�Z���*����K`|U�\��cya��+Dr�me������t��?��K��l�������W����%w�1��~p��.��8�����,�]|�=��'����������?�?��s���.�t�y����LZ�I�Nn�?r�%5�oN*E�D�'�wi$�������t#�eZa!�0��q��S��N���+4e�
���m	�5+]I-�G����,���r��D]��S��a�]���"��VH��)���|�6�.���-���2�s3�����
f�����D/j�.���a���g�.&�.NN�N����==aC'�������vjCP�/�i��!~y`�
���(����G��[>k�������L�+��R��������vW!�T^af��0���2����a,^�`�yg�-�on��R)��]�f�4_����VGkT��5V���!��e�RGN����X5��Y�Q1=1���Qm������h=[tGK�@��+*���8K=F0|::z����Y���qQ8���������{����HV����b���q�1������x�%�V1Y~�GL}8�GP/M=bS��F�lTt|�re��}����*P\<
����1��x6�uC��
p��o��z]>����,Lc��|��T^�����A`a�����������s��9)�����s��$�kMY�_�j{���~���u��J��(�T��KS6	Kr�Q��������9j�����a�QS���L�	M[�����}p���������+�����k��[�6o������llmT�[���Sv���,��D��]O���Hc�'T�]���#�Kb��-��F��j��O����������_9���=��MWP����U���L|vr��D���;
�
�	A	���^S���"���o�5�_�E2�t1pi]^�$������,��8�Ig���V����������|Q���K�&���$00=��f���]��*u�(
b9'�#)�M}�#��n��F�l���~[}kDUK�����&����v��������_������[�����~�a�!�1�3�<_u�g?��)���2rU�g��X%�N4�9�Xa�����H�j���`Z3�]��)��9=�X9�_F�w��Ry�/]x#a��=ra�G�aEWd��8�;DLv�g���l��Yj:���?�O!O���d�
2kU�������|���o��u�����P�I���I�3���	��(lhQ3�4���n�Rh:��g`2@�p���j��b*F!@m\D?��sI���y#L�������5\u,+��J�����pQ���SKAYI����85C(�\�,��K.�����u��?����f����M*q
� �tD�jK�`&�M���S�f�tm|�P�O
�����`s��������s�j,lT��}z�mNm��� �R�*��rHc8����hJ%Rhfr�8��H$�����c�7',���j�!�������������3]�)u 8������*E����+r@�����o���2�����������dP
�n���)`=)��|����}k�\�c��oU�r�-L��}�y����Ar	�\����3��9�;��y;gy���Z����p��_!�_�<����>|t�r��p��������[;g�?|��/�o<y��s|�p��[��k������L�?�������?u�0%�w��o��fZc������������pM��$*����|��[��������m�����;��z���c�D%}~�f�t�oD��8���,m9
�-Z(�
��'V1X�hX%��b/L|��$�����r���\T����1���O��&������(�HM�9E.�h��
$����r�)Xn
��9�S��}���_��1�}k9��1�v�N{ ��8�G�0��';��Nh_K����x��~�P[�P2������O����3x0d�J�XWk���
p�9`����h�������E�f<��;��G����/'�{g_�����sP�����{��0��o�8����W��+�/�zuk���xs����d�P�2�	�ed(��z5�%A��G��i��x�	vU��b�|���"'9P��4Tc1�����s2b!����qw9��������v�Q)������R���Q
��q$n�b)U��Z7h�6��j���O��[5���$SK����c,�Q�36�^���E�3� �D+��E
"��)f6��.����[#A������������h��%��pzC.w�����/���
X?\d��7��������ik���T+J&7��'*.a�n�Qm���A�_����o�j�K���y�I��mH�n�����ol���9.������5����IE�J����@LK����
Z�����n�������'������n��V��jU���w�ok
�=�
|�r��Y����j����@�����p[o���W��5�V�e���\b�V��e[�����B�C�������
e*g{x���y������L�'�!_��K2I����{���s��3�����h�MMayOJ����A
��X��1������b�G*�~��v�(��QQ���(-��������� �H�	4t���"�:zV�x�>���c�m��@��v=���iG�x�~4�|{���*��m@���������hy��������+7��Re-����xQ�j�r�=�b����+Di���V�t��^z�f!`��4�����.�[�2Yp��z$Q#-|F<���-��u���E�a�>hQWTg��D��8X��UT{���u-�o��7���O�v��J��@�b�p�O�m�����v��J����'�������jc�Y�2N����2c��{��;K�$k���Z����[:l-��oLWq��3�P�j+>k�{fNFG�q���c���%ezfO�I���Re9J�p�c�^��U�$r��M'y��3y`����a��J�Ky����$gbi�hm��9^���I�{�;�k�����y��{�����|j��hQ�9�ji�������3���M���K1q��;{��Cw���V���rm?��-���{�w�>F����|�j@h���-ax��Z.������m�1�~u��WiH�K����
;P�h�6��$4����������7�pNa�������GO?>�'����m��%���]%�*����r��4r*���(s�PW:�HM_9�8*�obm�KX�rm`n��?�G��w���Jy,����]��������v���W{��?��x���:�lm����������������~
P`�j
K,X�i�g�<�q}�����O�P/�.�����7N��w�O^<���\�d��{�����[�����&��O�W�����X]b����\=�}�0������H�T�;`�U���"`]����T� !F/��0c\������v�'�N^^�����|z��v���W[�_�%`DN��S��+P{r���vrv�~��������Ht�LFT��9$I�����,�|��~���9��}zu������5��
���g�_����O�U./m�*-�����+��������\	�U�O��3�<:�/X��h�`+�����}��������F�cV�p$l�T�]�����O�/���{�����W?]������M� S)���$����3
���&}Q�����%�����K�J5`Jp��I��[�~����@<� ����'�����Ww������V��������<���qb����Fs��sQ��Rs<�����z��������&On��=�;'���#�����w?��=99����G�n��{��q�|J5��2.Q��6%��z������B�MWm�I�{���yT��9�l�e\��������:������[�����pF6�Y�j������m6����0��C�u��YBN�
�D��{X�P�/w�������!�
��Sf�TQWs!c����>��}O/v�x�"RD�vqF���X�Y���I�#6��O��U���;�������/�n����k����{�n=~r!��O�6)�8��V^���������q#F�`�=����VBf��������3�|����#���[/�n�y�����{o��<�s��Q[���ej����a�o�w��:9��������P/�T9��>l�����B{|������z6DB�CS���6���������e��>V�*�S�P$����������j@T��N�hk7�1�;G;� 5I]��u|���gW��&�N��l�x����W[w7-�����O/_���s�	�u�u��<l����;$6N��v�'�aR`c�����%�^;x��<?����n����wO�m��
�����5�h�^r��}�����1����������|����f��!O0�
%f�����V������[����>=���6������no>�w����k����Z�"�����l%q M2�n�"qL����-z�.�1h��%��!�����Q���;I)#��;q�����[�^�j^?���M�A���)7���I�{�w��{�Yx��d|)\�fP��R�N�n����a���#�RWt
Fsac��u'Z��t��Z<>�O�o����>��������'���F��.�p��B����tZP����`5A{���=�2-�h�������AgG�p�������W?����!b
���4n;�K:k(��Y�+|)(*6�$��*��<�w�<k��Q����|t}�?t[<����Gw��)���Hq��4}��k(����dw��XH�P[w���C)t�z�	�J���>�lq�����z���{�����6������[���kw_��W�`����9l�d�m9����m����r[��i����>��������9{r�����u�T����Qm��d��e`����wP�pf>(R���4���5��z���S~z���{/��4�� D�x�GP-hK\�j9j;0��g;��|��;�7�cN�>�q��/��uR.{�b���O;������w�76��rz�����w{���z�e[J�J���N������8l��sIvqT������Cx�q�wb6�+�i]+R��>����`%T��`n�<�y*2���9+a4�����}���q��{���&���d����;3��R��%]a���S���7��7S�_�#T��EQsv���O����7oj~�|�$T3wJ��;������IG.�����N?�;�k�Z�JHa0)t��, v���T����r�����-��kr��^�z�����?��}�����_{��I^RWK�v��;��������
��(�������0�����R������F�'�?��r&��/w��W�m��e�fmH��Bx��`q�I>lm�D�
�M�0`-�'?)�����\i��<}x�B|�;����xq��d���!J���)�����N2����fay���q:����^��~�h��	�@0GQ�$L�b��]���,�s���~����[�>|<yeO?�'����7g���2;��M�v�n��
�&@x
�$�^Z�m�%�j��J���C����Q(�(NW�)RZ�?�l��G����szrt��������p���/����h�r�u��'Y6�+�x���7����7�{���|2�
��	�6��l"����x�������o���G;Wo���ys��������Om�)�O���A�4���~�����R2H�R,/K��e]cA�
��?{W��H����~EN�D���I�RM�D���fc��L�1�j��I��N\�[��c���-�y�M�t"B�sQ�H$��R:S�,&Ev+W��Z�N��i���7|#_��3��MyY�����^���,<�1WE|f���
f?��w�6N=o�����.���q,���U�B>���P�02���=����nb���Co���w�cf����|��=�9��@���v�^��}�c�-<�
��9i3�

�q��QweG��r~Z@���-E���8�[�����[�������/��C��De��v~�%j?Hs���6n!v�w�]R��b�`�X����L�X�XG���Em{���P�9�c
CSe"�"*Ia��L�k�}�d4_�F�R���|�H�<���8*�U�7ERv�.S}�J�7�����v'�S�#Y�I�:[F���E/Iq�o�1E�
G�7��D���i�O�K�{����g�G'*�A��"���v�V�dW��D!�����PQ
�kW)�5�Q4f�+���&�)gV�d�������S�:OZO�Z�)��T���������GJ�d��(�R��k��?B�<7���d�e��@\�#k�.���b�J�Q��=���)Rl
��ia0���Mi�}�������*7�uVO�xS��Be9T��Y�s|;�*Rj+�����(P���W����U!�?�������<����)M���Z8K��e1�
���/\����[�������]�h���D���������Ee�ai��[W]���c���ho�CS5�Ah�#�6o��3<�n��3���'���8P�I��� �y�y/�j�6�L���h�i]dv�g�|>�T���W����my�q�����'�}3
��~�a����V�`S	S/��g
��sxZl8)���;$����)��!��&��u��vy;���J����%��J��LE�PH'�p�%����8$sf�8dz>���iP����b�\^�f?��MU��l�����:;���I�*��-�X�����T�����d1FX���IcB2H���=`���n�2����fpU��������V �t�h�ge��Hs�te�j!cu,L���HS�!Nn`qL�c������Ud����z%P�n�������dM���M�X�_{�V,\���D��vc^7������w��eZ��ou�(��T�+��Q_��Qk��e�Be<�(�8<`�$q�;�|��d�b;i�Xs1�O��M�(o�U�n0����x��^���h\�UB���������y����;:�MG�)L��ER�z��~?����}%_x��&�{=�Hi�eYf)��&f�7Cgt
d:��)c9uv�D����N�I�a��OQJ�������,-qx��6�5��Q�P������=�H�z��\�e�k��eH�HY2���"����:�>�5��#�+��"�2���D�0V�a��������:R��.;�e�����:�C;����<^��GJ-�LZ:��0�'���!����(��S�<fZ1#�1B�R�,�V��E����O�}�3/�f�y�MUu �
������=	�6��V���������j�4p ��x�$]i���Rjz��O���������Q�P������n"�q�F���t�:pw ���QO����ap)��v~�Xw�����[�\M�E�Bq������f��6*�/3�]2�f!�Y��4�q�q�F��c^�������4!�g���5=m���_����{\l�B�6��^.�������������+�2$�nN������.:����F��7��
F���'#a�z~�����X���W�����Cy��~��Z�M'�3j#Rg���M��I]",���?R�u6�0K��0�_��Zy��W��������fe2):�����7]���Y +CxY��M^�ok��qu�[��X(@sJk��i���d����W����7��;H,����h��|�Q�H��	S�
}�I�9\����������.��r�K.K8�&}"?��,��@��Qs:^�f��k�S�����s�hT����3Fw�o\�'�u�A����f��O�p��0&3�v������lJVh�.o�1n,��eF��Jw����z�����hPn���[���'P���LK�B�I�%	�ld�t�i��j�����V����wt<�����P��e>���S_���q�\Rm,f�C�>
+v��/8�Wi��6w���v�t�pa�]n0d��y�O�q����q��u2����f3l�*���G*�Z�B

�|��!>�����~�
�0'8}��{JK��/�=-KM6����-I�3\
�X�rK���� ��x��J]�.�Fjf`�b$���?���(~���[�E��[�5#:$+Q�#+��z]����q��{��Y5g�q��0p
.�Y"uA�-�lL�H�����-V��h�3�����
	�g�2M��������}fz�����<gw+�p]�0�2����wB�6!�>B������7�~�Lf�q8xD�J�A|!�_%���������u^�s2��Z/����r��+T�~�	9WP�,&X�G+�s���9CO��K���"��jP�������j��������'k��>.������]=�Wk;�1���f�H�<��Q���YiF�OF55y,6F�CB���GD���R�T�/�����F������m
���Ep����T�*U/����H��f����l�%�"���WK��S�0��o����x��9�u^*v����y�+�a�����T�:K�P�]��n���u2 ����O��p�p�(����h��.�\Dg�c0�-�&���f����C�K���4��
v=�n�{�"��q/R7 e�s���y����p��NC�%�O������VZ�i�fqt;��}��+C��s�<^?�P+6k��j�]V���9�8DK�.��1x7?�����bR���I������Z��v7�������}��8_x@�Y����z�px;���g����GP1��25=&���*����R�Q�/�N5�X$�����eF��TU9�*	Y�Z)���Wu}�4�����
x��w�I����GrJT����C}GGO������-�F�[�������?�$I��7���C��N��'�s�m\kR�	��("(��	L;�_}i��k��}��^��_c~�;U�wh#iJ���J.�A2Sp��}�@�����q�l�f��w�_^��������_?�z6�&/��Z��*��@b����#m���q�8Z�PpN3p�X�0f(��n�@��(���VdJ,�np����}��z0�W�L�>����������,���(�
����@&��6mI*?/\BA�� ,����4��I��C�:����!_h��/���������?R��t���)�����[!DE��^ ��6Y����>X�������oZ�WT���[��o,n�W+>]/��!�H�C2Un��?���~�Y���CG&���G����S0�r>���O3��V��z�d����C���2x�:����Z��sA�Op��/op�������L_��sp����XZ}�^���s3����5��s���U���E-����)z��B�,�DE"8�s<2����s�7�O��h��poXcL��3Z�$���Sl6"��[����*S�]N^��OU�F�s��B��S�iw���-�v��z
\��(M8�E�����4T*dV|�F�����9o��b�c�w����(�6a��H�b�L�I� HZ[(�s�.�����PMO����#I��Y�������GSq��|�~�����<����}m���u�����9�4��:�@�
��7?�I���i%�]*�	���
z6��B�9�_v���~lU�Z��]s��D�����T���{7��\&�@�A,�{����QV��������B��C�� �	���x���f�;�\P�UdW����j�Q���Rc�5��/��M��/_�����_�2+�H>�r=�~���8s��+�^+�_����0VWk������
�S�����7���o�����{>9/������?9�[W��Q+]$l�~���~3�_b|1J���kn�+HP��izY���i�A�8Q��O���m�m���z;�=�c�ikb�G�������o���^���~�!��$���������
��x]�6{&��i:R��6N�E��I�Si������z�\��Ru=�u[D����4�O4��"�4�P�v?����ca�[�60A~�[�9&g�L����
���j��vPS�!5�R\��Jg�
s�!��NXz��"��1������E�Y�(����?�������V"v�[R�MoN���==���a��[�/�B���Y>��O�����DHF��"-lR�|~"�O��DNA.�s�S���#��fD�f����[}B��v��L�� (�D�� :���MJ���\��n���Q����rD��%g�������3�h��.,�sHy{��?%��$f��
E0�&����$�'8���OI�'��"�L�\F��v2�D'�B�z�#���k=&}�yRs�{=���~��ir���6{���e2DVJ�Xg�TK8�C��>W�y=����>~�
w'{����cjbc1����x��?�K��_L���nS�V�('�!�q����Xbh���(xJ�d(�MZ�N?�e��
��Z���v���tI}"��t�:�����6P����������8)}���c�����;��4v>C%��P���J8�s�~g�H?h���x�+7����7����!�j�����# ��w�=����~�U�[|��*P�jj���S����r����D�?Qc�X3���H�Or�?�Or�s�������n�����~<�B���O�	(��Q��W~%J`�dn$-� D~��;�/+����(��hR1�����34Uw�[�r�5.�������y.�Koq�5+M	z�O*�9C�|V��"V��@]��"X!3$��?CpX Wp��|�w��~�����)ud�����}���oa����x'������SB�r�	�(��J=
��t���u�����eR"�f�_��������}2��?��q\�gH;�o)�{)�=�WC�0.G��u����S��K��`�~HJGY%����5CL�%[���B&��r%d�H����}>� �{�i����Py"�FjX���q�d�YB��n�����f�|�����r�������@�$X^U?y�n8oL=2�������-��|���S>1K����N%��h3�.��-�u��Q|,��a6������|�Q71`?���8������e���Ur�*^�J*W}�XU�����q���Xi�p�U
-��d.c=�h$����)U��H���������L��~B�f�����`��"�K��P��By��?�J��b�/���E^���w~8O��!M��H�v^�������y�����=��L���������
+�iK�����i��o�Oc��b\�������o��"�����T���/����xEw���@���P��x�5�������}��x�%s���B�j�@����#u����/C����w����G�-"4��������Dz�C&�H�cB�E�,��5�������?T�C���b5z�Uy��~1������Z���t��q8�S��? ���~9�'��y���������Q%I����)��X[�3���/����$@@��Y����BR
-l��|��q��q���*Awn�Q�Kx�����������C�}�X"?������-�|5?_����chXyQ��Q����='1C"�*���*�C�Q����3Wx;�����K��'����/�����7Oq�MBCS��5���a���y�/�[�J���1�Tq���u�#[%v�b�{��^��r6.r�
��Y��@�n.re�2.r�{r����Z�nkx������F>�1��s3d�6�BS���&�n�*�&�Pd�P6���U��px\wO�����B�5+����X�G���X��&�
�$XzV>����V�f9
*�5E*��_ 	�e����Z�A��dS[	�C�ni525#$���	�ism��)���$08��t�g/zF"�h)��a��e.HD��H�D��+JI�U8�	g�s�]��X$0�	����� )V ���lo;�������#����Z�#j>C��=�H�3���1D������t�ZLe���A�F��[��K����S�~4�Cn	f�S����(l><5��c���zD�S��"IJ�af��W�%���R��J&A��-�w,�����g�������P����
[��D�����h��0��x8��!+���TF��)��@��.�R��I��G�������_��\_^�� |������1����x7Xx~;8�� hY!V�e���Z���`����\J�6��4.Y��-�G��)�s�0�� �����n3}�#��@�����`
{S%J8��I��,g1�L�aV��K�U�1�]G�p�c�He�R���<��dZ���n&���-���������B�+��m��Z�dL���8����=%�bZP;�ttUYT�M�7MyZW��(j��\	^og���Z�x)��&drl�]���%i*	�o��n�hFX,��FQ�p�+Rh��P��4�K�a:=�8��*����b���@=W"�V�5��y�Y���r���'V`fk������[�,�j�2��jvn%IA�!��$!����7:�� �\��\�Va��h��PpY��W�
����W P�$b�J>�Zh����ik�B+w��WE�y���|�)��%�l#$	5� ���T>x���	����	��^J����@-Q�7���h8�L�j1��,��S��{'Q�G��Cct��������#�t��.2�V|��o0��$C�bF��f���S����EVv`s�������M4��Q�[��Y�X�r���>4X%-��K�UU�R$�`..��C� ���� %pp�n>������6p%���U�@�U��!�����
�h����I��,���Q�~D�u�/f��u�Xz
��O����?^��]���[���������ZG'{��\^�]]_�'�~v~�wU���]�^��Lg����;�b�N���/�Lm �h�
�Gc���]��?H��[Fv�����(�E���`	&���!J��'�Z]E�N9[�`/Q�j�i��x�%*Xwp&"��(�5N�(�E�.^�K*���*��fq���
�6�yQ�`�9{�=}H�?R(�UB���qe�C��F?
:��n��#C~��|��_�al�K��E-�P�}5�<���'�C��+�Hh�F��A�Vd��#i|fE�ra��{��1���?��]�
���
����S��*���P������i�^���d�RAVl�E��#����~4Y��g1�1VJ��X�y�t6���~�����x-x+!w�;$����C%�Y�rC!-����'���q���)�@��������h���40N��0s4Y�#*� z+ ��@/]���BW��zT����Dh�C1���.*�>�wKh5n�H���\�4��/N@�'�?��@���<�U��~�����F�#��~4*etv���?FAFmM��"0���A_]�Z�
k����'}u�k��1�����=��t�GF3j�J���D��Q�T���&�����"q�zu�G�R�~S�R��c���G���4]�����ZW������������7,j�	�1���.�~VZ]+�A��ku����N���d���U�Xs���=+b?�l1l4*e��.��	e��X�X�GB�h�RF��mr6RLH��b�MN�����s��]���kB�9�e�#Y����}�������l������Fw]�c!o����Bw���i�H�q��R7u�q�'��}���~{����������o�5l4��/���g��y�GGg����~����������,���_��i����)��_������<�w��lqE5��`ivxL�/����{�Z%Xr�d�YQ�,���=�U��J�e�$���)����(Q�@�TK��7��&�y&�)��3(��
*5��E���1���+�(�V�#��!r��pRD%�b���VP-3ta���Q��U�kk�M��!V+����ElU�G!�V�O(K��b�7C,e6�|RJ3@�q���}��_P���b�p�H.3�#0������p?nD��Q�jt��Xt9���JPk�:���I��)�V��������'����|���jy>�z?��r�:�6?�E���Qq<|���:��I���(�H�&!�E�����+&$K[7=��U?I���Y��R���Ue�N��DQD����@������r+��J�qz���Y;�VT-���`g"'�A"oR^k���S�����-��RpLXaebm�,�u`��
#<<��$������=b��f.W��R������9Iw���iE��G�=a���P������Bw{��=�����O���&]�����1}�qZ�
���i��l�{�%�t�|��d������$�n"�����C�����L8d$�`������|�T��@�kQ��E~�9RT�q�>�E��"3�1��Ir0����1	�o����F���V�ZG��P��Z��
��
6�v�����)J6���qI�B�D�4���i����Ya��W5��$�"	O�&Ns����Z���������� �JR\=����c�V?nB���K��v)��
�&�DWo|����;��Z���y�����;j���Q)
�i�A
7vl:����\/��'QIe�V��2c���z��i*���^�L���s6|bva
��\L?M����m-Y�����&[�/���9�p�Y��G�T���s�}����T�@�Q���3A�����������F�;s�qz5�;7V�t,��������,;-��9�B	�h�~Hxl�X|����c�?]+�{/7'om��� 8jr������?����������������������?&�����`8�|
?�)�ss�>����j���E9Cu'p��Har�~*�tA:4�k&�b���U�q�tP���XN�l�#^�L
S*q)EL�M"i�IEC���jk=A�
�G1t�m��� �
��Z��S�{�o<�W�������[�|�[��X��(�v(�a�`���h��V�}�F&����g8��������8vs�����&DP���Z�����������5]�$��w+v}o�fN���RR42�`��b���Zo��*�kT�s7��^ea���" @pp�$�����mcw��3S�8������-<�'���C������I6�O������%=Y�dM2�����jO��tP��o�j[	����
B��C"�;hu��h�y
�
Y�%�����,a������	�i�C�����g?�%�mB����M��6�����G���������R��?S\{������M�����[Jo�;%n��DJ�4�Pp�$}���z�ju�&V&\�����(I9s
������r��w��9���iK=)�`��&y�������������������n�S�U�A1�(����&���'�4}�������E��gX�/�d!��n_��
��|�`<��}���tR���;�w�W��-�������t������w����{�%�[q����WVh�J�W�e<�==���ov����ac]mx����3� �����r���W�9��3X�&m�sh�z�+��D�7����� ���uq������9z����V�,��~�/��51�q��MV��V��La�)I
�y�\�����W��Db�z�;]'���$�=�(7����0c�k�����E�1�1I�b]c+\zvZaT����;�2���tW�.�W�\�*����e��5&�[Yy�i�����~J�4����zSR�TUi-|�>��,1~Eg$�����=<�!���{kr:�8�zv�C�W���u�M�H����9JE���QzP��JuR���f��A
&�*cXB��Jj)�m�J�DBMUq�(o>�$UU�b��e�0��*QPU]�-��&p�H�!PbvP��J����v��W��Oab����Y�:N:���Q��k"�G#5*��-���XV�5/��)�hM���i�M4xZ1�!	�|��h�0���������'b5��n^2����v�����	�;�P��J����Vh�/�Z�x�0p���<���N\�b:�y�N��&��}]���u�i�0sA���g�[*y�Mr���	�a����d��f9�5A����=4��^��_������2��k�#���	�t�o�b��mh�7�� Q��� �-WW������T,��jj�u��l`v3�E�4���z�V����F`�7��C�Z�3l�.�7����AV��i���r�yG0��:Q�\����)6N���]O��� ��k��$kl 7��Uz�(.x���*���q}������ [���*V�K�]�T�G �`M��"j���oGG�R�M�)����,����B�Q_��N����c`"-�U����&�gQ:�;��P0�:p����_L;FN#�U�\~�#i���Pj{��bA��Q�%\�l��Pe���*���t��6�"�8��+Vw,�r�Z&�k.�
WK<DG�46uc`A`��aY�Q��������N�FR��rCM�_�>��y&���xXT����x�5�d@���D���3��F��-1��I;���6#��HPg����������� ���_������������R�����?7X����`�ODh�����g��B�?=K~���V��({��g����?����.��dm��o����?#%�����?�I�������'��������)'����+>9F(K�*�=��<I�g ��D�m�=�������lz�\��h/�^L//��Y	&N�dp�R���/�����Nm1At9Bi��e&=��#���|�����y5?_���{�R0!����K�Gg�gt�M="�p�oXaJ5/�:����2��������~�Igwa������\��:k�z�o�� �v=���eK�9e�ot���
��RfD�/laa8x���ms��Z���/m�Q��\��������*����}R��yw�;��JH-��0��l�]�z��7�<�R�{����g��7v�5l�aj�~����-A�*�v���d�$�4�2C!��A�������/�o���������������k=���#�Wz���������X�XS�Z&3��A�Q.��`<d��~��k#���e4q�U5�k��k�����*4Z�Q�N�c���Dkx����C�����v�^1?<Y!tT��<�[�;��������Vj�GH�K�>��N>^}�G�������"��T��G�=@�������=���.dC\:#
}���2Kc�@�"I,/ ��W���	Z�g����v��M����h��7���+c#����!Zkc�2�U.�<��$�U�a���&�%�zV0��z�p�?�*a���CB���niB9a�&Y+�����csE&�����������W�����]���9o�)e���P�����z[h8���O�^��>8x�}�C�������c&�Q>����,^��1�v�yz���/�;M�<���m�ol��Pj�����$�X�f����N����[v���dy��	���+�8%$�f)O�G0���J��k*�f�X�p���K�0��+��k���U,m3'�)PQ��	��������UZ:����b���;8i���k�3kY������HV���s���MGzJ��%�����������i�$Jc�C2
{.��Q�fQQ->��H�p�}BX���P�"�lb�D�TZ�L�=�����Q�g�
�
T��c����;�/�eSZtj��N�v��kl�A�'��Rn��/��+��w�8�~zQ�}d��Q�L*�lAp���\K�`&���)O�[��b]�$��.���1��n�������o��3�������nD^��2���V&Cx����LIJlb�E�L�D�i���_�W��q��c@�����m�>37��#Qp�A�/d�F�Y�"%��bm����"��L�#��Yp��j}J���Ja��?�'�~�������G��\mj9m�$�i#BJ������JIm~\F�Z�=��*�<��UCb���:�?|��"���]I�=?y7�z2}p�l���������_������;����1��;i��]a�_G����3F]����g��M����m���p��c�/~��Z��B\�I�y"e�pR��7���i1O>{l������A����w~�:��H��k1��t�#k��uzg��8x�v���u�
-���K+�+�p��Jfu
��E�G�*E�7��)��Z��o��C�t�J��R���+��_����B0h�����.,%~T#B����d�� !��,V�DDk�H���6�y�!
��Ev#�2���)��#&���VH�����{�_�B��J���_R�r��U��tpi0=7��I�h�hca�p�����X]0�e}�J+��17)h~4�/��(Y������r�X�9iIZ�d����7LK��	�����j�����n���Ea��( X�<������O�[4���STD.���a&LMZ��[��n���b%>|5�H���*~e-]X�p�3&,�6�f�Y0A/���w�����!�����~����]|��Y�0%��,�����1D#��w�
e�p��g,Cj��
"�)%( �A:s����w�i}g�x.�Yh
�a�Q2���D�Z��&������>���	�Mi���h����Q����������A%�EC�QD��x�� �<$���B����-���
l���PX�J��T�pT�f����m��D������T�b1aL���MK����
�
���Q)}{jY�����R,=���cST���:��R�
em��LIU��d4��'��%�'1�X��ep$[8�����[?V�u�������$$b����\\��1��qn�rZ��f��`7��T��ss��f��X/��������[d)F�[? �9�	!|yu�2���5O�~`{��f	��6��9*j���e���K�<���<��D���mj*-4��8=:������;;��*A�\��������u�?y1�B��]$�Gg���/O��K�G��X$����Rx�XE��������2�h�A���d�A�������F<m��S�]�]��@'	T���H@����I�EJ��e}�;��k�c�|��^r�f��
��ZmET���)�J7���T��,�(�U]G��kBv�j!���F=��iT�c���Yj���EQ�A��2E�X�&'�|E�Fg���5k�
�R�����[���*��R���2��CJ�5,�@][�L1Y��YhUwb����%��gY�'_M�����rJ���A?Y��s-�VB��3�����1b�y%�]�-��-E�8��������!��b�)f���@�]5]*S���JW-p)��b�a�t!�������*1�l8}�^�	���.�h ��X�1����~t��Unu�a��xB�LNY���j�Pr3���*L7��2�>�+%�p�
�AO�O�e��i4a�N�`�����d��fh������
�i����7���B$����U���<�RX��${-;1%������������w�t6�T����z��6����T>�f1���7o��fcIg9�T	2E*:�\�H�ld
'���0����hEZ�0�`�1"�$�c*�K�H��Z����$�u�
��4�n�����������_"��qKgWdUoP����A/���cD���

K����9�s��1xbL���(���V�:b���J�����������g�*�/�A�_�����_�p���o���7w�6"��O`y6<�iJ�4\%JY��Hx������������(�r)|�O������N��]��8?+�o����8���'��Vj�E���D1�(�Na�m[����(�N���+t�~�����-�Myg��������Jg��Fb��Rr*����������K��w��.���&�s���GK��X�T�G��o�`y	��V�\�(�'�R[QQD���H#��K��&:�IB���9��j�F�y��jxY��Pd��zv����I���xq9��!��.9����b_�i"M�LL%U.3�H�u����uW]����c�����e[�fbM��M2����\��fE9��:+}(�A�/���N����>rP�����x�����tC5��ebM<�K���s�!!���"�[9����r�"!h)��#�/���V.P���Ify��;��ILu�K���q���8B�$Fk�o����"�r��z�Pa"���PH�C��~���7��j_!�)�+��\�Ek�0�.��J'4�UZ�U3o�pv�������6&g��-9	�~n�f�5ej��BA�Me�W*D�o� <$4h&G)�%	c~�,��m�4|���|��.6�z��5���a�
3ji2�����pIG�k�U�:]u"�~WT�*0V�������Y���������<a����Rv�����P�V"�s	%b����|��������;[/m���l�{��t5�t4_,c�&����p������T�����2���J�ds�~N����~�j��4��7����B�MN��9�~�pL�Z����T�1�����m��K���yF��$��\�d�:�a.���r���s����}�L>������������dq[��mB��z��_���U��/Xz�io��f�-$,Z��aw�(����j���t�$��0&	��m�e�7&?s�^��*�t���7�7�[1�Nb�
�3�m���;�6�|f%X�,I�S��@.�����R��z5%��z>xz��G��+Y0G�&�3���y�
o��c*�{JQ� ��Q�U� |�VD&<����+����pX��$��/9����2p[PM�T�f&6��
�f��Cu��B&���
�u��������b`<������������"9/�fZ0��2���<�_4�0�d�����	�������'��d��2G{�:a��	PB(�����Z�D�4$k�������1��#7C�%2+"%��H&S�C�
���B�0i�[nX�zGp�S	��)�C���,�2H�[=\���(~�&�USnyv]����c��*��*��8��:����\�����Z�l�go_�S���HD4~��������l��4�[2�BC�w+��Qr��u%J�������F�M�koI��B���,��Q��4�|8�K(��lI��B���-Ru�Q�bl=�>���d7\;0�Z�����}Hd�bc(�����.m��D�=�v��Sc�j@8�Y���KRlt�"l,g�t|rO>�_-.1���
�`���J���Vq�>����4��+��ng���@|��M�i���X>&�Q|>H���&T[�GVE~�B!�Qt�
�^J`�;%��^�i�i�f�'�w]������������Gr�a�hB���PWQE#�W�!���!��
2������������~�n�D�]����<cP�<AY,^^�����&%-�������7ei��W����*�=8���;)�v���rz��j����,�&}E��W~4�!��Q��z9���]�#��B�c��G�u������R�ctD=�G+`��&aL�8�����:����Dq���2�8�=l����MU6b��=��\vu�b��+�=���Sf�J�s)��|4j�1����#M�(P.�{�)��Q������V�C�X�&�����_��g^�N.�G�n����F�"�g����+�^?Z[��a?}u�#R����������wF;����R��W��W3��Ok�o��WR*���:K�M'�g��&Vdg+�L?�q�?�Y�'��x�F<'����!�����8WB�P���oo�d�
����
�q��U���Z���F����S��������p�������������C��~�����j~d9J�O����[�6r��U��)i:��R�~4���d4�t��qC���.���;b$�~�G�a�9�]`+���2^���'j��"��=e��RH_I��Y�'~��3J�b���k{��������z�sS	�D���6p�B�0�
(^et��/�;["Bl��
����������P���"�FF��)Q�$����m�]���q���w�#��3<��vm�`�%|&[a(�?A��+�~�k��#���%�o6�o��M�H���	����'��M�N�u����������
�7�� ������*�j�r�	�oX�c+�w��(�1��o#h=�G\n@�Y\X�)X	J�R�w�����U�7�mW�^a�8����t,�0����3���&���|������#
�U4����X������:�3�Dt8x�,a�������@F���Q�~< f��/�`�A\�a���]~��DX���f/%1��D����y�d�����Cv(|�$�J8�%�������f��2"SF������}�NF?�n�no��+�p�����1y�;�|W�MA��j?�T=��MEC��A��le��(4��M��u$a�H\��H(4������T�e��E�9�P2	�!C�
�����E��"��E���f1%e �pf�b���j���A<��P�k8�R�q8�8kd��k����n�1��vp������g��E�����_\��M���Rq,���1C�4��^���7~��[P��^
0�E�����e'IEzfB�>p��;!0�8�+	��L��^I��
���{O��s�����,%��dG����L:��t/��.0�:~��n�N/�9���%�,A7�u�-�_*�H��L�'��mA������[Yb�_��)��pI���T���w�|3�
��S�X"L�`�Q��G���/���Y����$yW���k��<x��R����9�J������U���V�������R?O�h*C�F_�00>������SK2}�zL���N4"�r]�C��f`A��z�|��%MM����R)"m���5bRk������d�Q������
f8Tz�f O,�����=?�\����LA�X��h�G�����pl(�k� �$���&X�
���������3Q'f�&5�h?�>bSG�eQ��~���R�m���Y��\������(��Rk��B��4�H�-���pO��y!-R�t�����<�@�P�2�2Q�Ue)o��4��xk/��!��^M�O��
�'�����k�@bQ
�Gk��A�F�e��/�#%�8�����*�h�������DA$���	~��(`Md��*�PWX��(M��+��^����<��ih�E%U}��(Xu	^���IA!�A�
��Q�
�u��%�-�o����q����7W���0w��z���d��z�c%J��<J�qc�7����@�^k4=�Gs��;|�n��_�lh�"`>��\��#&V:w}��^���W�f�ox�5,p&�	�����ch���0�d�	��d�1��[kDv�+�$�� K�i��H-&%%�rn2�Z���i7�b��	=���T����U�z�*��&����-���->8"�.,-0kMf~����~,��h��EK��X���d4��M`�$Bp���c�i4
�C�(�	��hZ�@��q�G j�P�5�'.C��H|��_���gS)�h�A�S+h�Ek34�qr]�1��-6D���6A �Q�xdH�t$c������
��#e�� ���[�*��$��1I�Z�(�qH�E����@�XR�^V�E�q�SS�mt6j���$�h2gS���
�Z*B&\�pmC�����@�-�"����YE�����@����FT�����p���c��vi#�����u��MJ��u
lA���0�P�S�b���l���N�c��uU#�M�v��
J��;|X�auT���3~,��t8�������#-cE�������A����7���"Ib������%�@'J�L�H������(9�^���g��P5���#�V1�^n���C��z��$6��@����J��k,����(U��_�A��w�&3��FX:Ef�W���`��E�	���]�&���!���P��Z�+F�q@����VM��Y��km��#.fp*��������*���p����������Fp9���&�����*���c���#
W3Q���[�y�$�#��Ie�����qF3l��0}�������:������%������|��/�s���0K���p���f������d����P��>[�y(L!��r~����(�tU��4��B�E�0���h���Gz�T���6�w���!�%�����q�V�|����h1�EDb���/�7H��LN7ax)`1:#��o'���������jlk�n,�[�M��[4�0N�M^��.�b)I��O �i��������q������Q������B�]������D��u��I�Q���=����*`A2]��Nt�%F��G�bAF� v������7�6w�Z/_��_���+�t�����}����b�7��?�z�����
�N6��������>����G���%Z>�k�Nq�&fI�{Q���e�-A������3��G��1<DNu�Y�pc��l����{+�y�R���b��M@��SF�@��5B���1�L3�`�Fx���jk��_-#�>�Uk����2[e����m��zv���1�"�9�]R�,m�����A���1�N'���"uT���L1������)|��}��{�6��X\�r�{�wq5	j<yA%G�B�y$R��mx�L�$	�(�j�["��'r#�f}�=F{������- w~4�Wj4H�J�&�+���i5	��J�r��y���6ZK� ����sf%i��3��2������9���[c�
��*�H��F�����JL�sFM���2EEy��0�>��ksF\�0g�C�-3g84ezzs��e�����b���X����|TXM�j�!�b	���$V%L�u'�	a{�P?Z5i\�t� #���5HqozyDqF�Y�O����&=E�H�t��:!a�+&��R��j�
5�E�o�X����$�A�QhE��h��1�,�K�f�,T[)N���T��h��5"e�H���.&�"W���a,
������|�S�X~S.8*:m\������#Z��������J������5uz����Y�z�$�agZ
�1U��I��q;*^T)�U������JxY�9�'LYQ,W}F-�����[PM�MXg��^Y�p�x��tO�D[��Nh�0jR�CR����5'x�E�@m];`�5B��������?!�od��������c*��_�������gw��������s�&��@_�u|���(|]_h��G�;�v���O��l��>�R�(��������R%���:e�������4x�b��/>����>�fd������'�88�%���:�g��6K�K_��k��f��Y]�q��R��D���6c)��=c<��@�����%f�y�w����LMd�^�E��K�|^j�����L������U[9.��W�����+��� qS��C'����Q�������R�Q/4����#T�;g ��f<]L�DP=�R���Y�E����3�XS�y�vk��5�������u�8��=!��(���t����NS�2��� ����S�����Z������)�&oe^����_9~�����1������J�R��H6,����?��2[|�rM�����?U)��������s-C�o�����~����7���C�PV�l��?J�������$a��b�r����V�B��Gr�������Z�����4��r�
3A�*�s�����{��?b`$9�;�5��1�R4U��uC��y���m�F��
����fE���X�/��-�+[?���^���|yo��d*�n�����;�������qx�Gs�o^3���!��,��xH�@w�:
��0�PB]iu}���'��(�XeS�������_�9�U�V���]ou���a6�N��PGG^Y���,w���]���GvI���������8��Z�q9���,J=�LV"90�0fdO&RD��!��!xh��p����S�z���axp��_���:��|16��?�yP�X�9�+TB���S�Tc<���'�E��i���:�1��I�X�����J����B���Xmf��$\����VL
�@����2��,I$����U3��Ep|����x�8��9�=$�FN3�XMo 9�id�k�)�Jt����8�r�������Z��\N�]|���i�w
3��?�y[v����L~�����z{�������6�������
S���]�tM�5f�8[�zM�5F�ovv5=��K��.�� R�4�$�~�6�UV������3��%�,G���0��IT���s9.jf>��`E�������NG^c2Z�U�F3,\�t�hX(au����������{�`jUo���Y�Z�q	6{aJa��
�~���,�SU[���a���5T����JI�Y�~ui�4l|��i��g�&w��]��6yy��'����UM�I�����G��OLSD����',���>W��}oXA����`8d�Y>�����;#����Gs���������D����R�������X��ev*���&y�$h��Fk���"W5���U�0Z���xU#�'�~(^�xK!���Ij����������?T����7W.����������ww�)2����d�i:)J��:x*���:F�����Q�����P���y�s��>Yv�(?�EP6���Q!�2��s��hy1�9C(mD��m�B��D-���������	3Y!���3�?���0�lH�$`�C��}���(?j�5����4�{���1M+�G����Ee�������]xI����m��B�\�4}����['��N�j�/�Hb|��e�',���"'���F�L���T*���$���s�X���W}0_F�:=�*�V����@�dRa�Q���-���#�[Z��D������tS[��B������_������W:_����W���������U<��/w�C	�*-mMbB!?������C�#%�_M��~I�a3��������PbYT�c�����H+�n�g���������������?����~8=�_�����
��E��r��r�p���r�L�+OO_!<(i
-�n�A�,�,=�]���*��~�K���7�7����/�������������V4nh��h�����D�0�����u���8�0�.}l���C]���?k�H�Kf<���>�~��,;��;M����w~����k0%g��XA-��%�Rx�����/7����-����T�k]I[ O�2�����K0���Y<���G���u�X���hoXtb�J�SDh�y��IE�3�y��{G�/�}�������	����J��W�����^�&��*��3�p��wgW��:��t�EQ�1���212��Z���EWJ�����&�^�_�������>�S��>�D����t�N[��������E�U����������h.�uS���>\b� ��_�b�������q��h����A`�sSm`A�
�>:�a��5*�X���YD�����H,���$��RM��cw��3�ky���V�XC�����
�Ht�H1Y=���t�[�S�o�%�����{����B�*%�����f]�g�>�{�7�p���A��ve��H	��K.p����NdSB@<Mh`�n.�������,q/�o�`���UDc���������q�����6L���q��e�}/�����A�[9���A�dx������n�����������*C|e�?�E�p�m����/��Q?����J�6�P��'?��[��h��\�������l���n�������G�������(��n��V�hV�F��������������:��9%F��5�,<�x@�SP���4k�
H���z��I�x���	�N��K��Q�
S�v��s�����Z����9+�p��,$��������[i[��a��x�1g^_g�
�������a��&��3m���9�B����9J�j�E����9|��m���V�������IOqPN�;=��L`�����(5��w��DCMo1����1���6^	���a4
V����@�$6����L1l��|�q4Z���^#<a��
���]$�BX���c���;+�aO���?@�)\��p�����2i�2�ZT)'\@��{��ug��h�����?N����������j�e��,�,*����Ep�'&q�f�Nh�0j�������H��0���6�T/������g����?ni��
Lk��+eK�~�������������3�fC�����K2�UWi����w,����a�R�_����� �3�ng�:�f��0���g���TX?^�����������_�	��}Z�F����j�1�]t
���cp����g�y<!]����y��GNrGA�	�J�S���VbhI���?��������������	��������������@������1E��E��%���"{j:�����n�Z�����f���#���;#��uXzE�1g�2rd�RV��e�����6���k���Ze9�e,����j�\S��ip(��1�qe�(@R�4���>���vbX�.�E���f�&������A�`��eS]r��e��Z�t���=.8��C2�r	�3�������7���Jek���V/��qI��KY���H@q�V����*;�tZ�{�������q���
�R����!
�@��"�4���G���&$��k���o���i���ur����
641���RcM��D�����+�3.������F�f&sA������Yr��K��k9�G�����+�����6���������G<��(%:Q�5uSx-Cd�>?qQ���M����d5����6r�\�������F',���T
jW�uw=��PZ�_��bF��v7*10k *)��p#"K�V/�
��N�vR��%np8�snp�{���!1*)�����(-��P���G������27��n�>-���PZ%�p8��5�9Q���p�p��x��-�Xl{�Y��i�N����W]��c�H?�K
7}!U�#.� ��Nk#II_
�2���
����!��`J<�5X�*�p��d���M�td9O���7�M��Z�)Q���~&���
���#��HP
i�[V@J���I��	����jr�Am���@�g��x��f��/
���<&^>�l!�U���+�T�UT4e<Q24F�s�.�v'��Th_!	��D�����T�[�/��;L�5����4rP�-���z�hP�`����$�cV��4��#��������Nt�~'UE
K���o��9*}��G�},��5��B��Yd+�LlI�`�G�.�`
!q,mfY��8��a���Zz)����'����{�9�C��-6��,u��(�J#s;�`y�����{_�V�L�a�9���P��.Q�I�QK����_��}�
���lK�5cn��h�u��
�����S�R����(�S�r���U�'
'�(z�t��d��c�(���+$s.�Tq8�& 6Z������j������F���U��Dt91�W�)�I�)8pF#*R�g�k?WF�!n���*��D5`�U3�P����
���(����M��
:���~�O%M�^�|�����Z�90��~{�on������}���LJUI/AH��t%��j���P�HHU1/AH�u�B� ��^9G*�x	B�����&�2��]!U< ���lW5�B`���������`�T�������e����ImJ�)��`T��B���I[t��O��-C�z�1���l2!%v��,��a,����U"��`\�bD(bQ;�O��@�
y���#��P5%��pQ��i4HNC�08!Ucw8
U{P�a�]�$��
Y:;�s�hh0��m����\V?�=V�c�.���X^g���l�/:�pu9=�����z������9,�hjV��	�}��X�x�(J���1l����*���3�)0jUS�*��b���aP�]RS�L���F*�`�u6���+�zU��*3�����L��S(W�Ai�X�pU��9H��������"����}�2�|-��.��8�-��������,�r�p��P�L���A������?8�R��/"@��&��_d
Ji��An
�cx��}��3����wd�p0D�b�_��a��K�s�m�Qi�2,N\�r�����J~�*n�]�� E4K����_l� ���P��A�m��?H���=+��*"���b��CT��whVS�W�X������������Z�C������1���)�Y�sc��q����ceP���w��D�8�������j�/j]��#��#(R������d���{W�{'k����N�k��=�����r����������/�����������W
U���jmy�A���R��U��A���-��$,'�
����\T?�"��������jn����,H������G����Dv x��s��s����5��D����-�on1��(u2/�b��
�#9�l����6�?�~�+�9|������~Lh������l��7�$(���%�zT��(wQr�v�Y^@��2$����v�K�!�A����Q���� %��2����wv�m�t9���q0?��o��b
/H��]-�9lSSk�����x�2k
��
�A��o�LB�e��:������hP�ZH���`/f�!DQi�����4���cH���].h���`�!]�&!-���>MO��R!�����V/����������m��WNN:p��,*��E�(��!p��7��TN���bo�k��[����`{� ��:r�V('��D@I��3�
"(�~�B��R�O��1�N�:A?�k�ig��E�����p�&��%-�i�6��3��b$��$,$�'d����`N��@r �(�_�W\L����q�9��_���������j}�$F����x�
�n��Jv~j*h�U�>bjJG��������k�k^f�?�C�,}+��mlo�n����m�K�A�y�n�N�E9))*a�����	�9�����-:��g&_�()���f~�i��/^
�~D����t?�{�{y�s,�*��
BR��O���ub�AF��HJ���������Dq!��G��b�+�p��]�&Y��P��#�^7��w�4���r/������#S�v���E�
�XB�G�&��P=T{��&�QX���3��W9M�����X�W���$�e`W	�3^
Pf����s��8;x����{����iov�����(�>PK�@\#S����w���v.
���-�YCYX��3(��i�A�q��e����J1������y3KOgg�2�N��h�pt���f������C�+�Rs��#��r��V��d7�������#f�������vG��2�����F��%�l�-��H�&�?C�(������|�d-|G�������Q�����}�����<DiD���h�8b����6��_�����^l[6�2��e;������5��EV0&gC���=h���K����@���������7�Y|?*��e#�OF��,���k+8q��Z�$C���G�go���T�~9�]Gp����Eu��,k�'a�Z��.ea�L���^���XN5�b��]�|�����<qjr"���I�5x���H�l�o[�����#�r���s��/ejnO4G�D�������o{��#*%� W|�,��Kz.Xhi^��b���<����I����Q�|�8^�%��<�R-�W����+�<�
���{u��QT���s1!�L��h.��� h0��5�jE	� X�� �j��o���d
�vV��|��g��~%en��?�w���}d���R�k����[�;:�:���Z��?'/Sy��n��60���������:�1,�2�����#T7�=���7;�����W���=�u�H�����	��R�uE�:C��?'G3L�8�;�f�%�X�����������k�h��]&�[.��&����O�-���C�I����hdn����"q�!%������$�x;�U�Y�}��>���M���x�w�C�jO��|� ��U��|z���`~
�6��O����.'�S*H���D�%���k_�l���?y�j����:D�D�
Y~�����}������A0��w�����/>mm<|������?wo���
0P*W�Xt1�x
F4����4M.O��s�����}8��?��)Gs&"���O���+`�_c1"Ue����L3x�WY^~��|������������g�������-�;}+,#r�J�.������|����r~:u�'{�{���uI)���T)aD��!�THii�H�98];h���.���y����7C7�������l�����<I#���b�p���q9�4�|��M�<c�q�]��J|���A4����}9-l������|-D"ym-�%4�����o[O��X���Z��Z�>yk�^<}&7^�]���}Ab��k��� �R�u.4�����P*�{���;�HK	C$�*a�T��_��}���jk��gG6>c]���]n?m��7�����x9��r��=t��{������uC�:��u*?|��L�~���b���/��>��]�'�����/���'{����2H����)��������&�\��`,�����s�x��o>|����������<���������d�N5�]g\ �i�(�`H"�%�����A���|M����AJ#It�x����A6�2.;h�:�����/G�|��O����.���S�*��k����
���6���������B���JrBDo
'�^�bE���zA����oM�(���3�����r���������y�SS����3�e�Y�����El��_?=}�_�o����p����������=���V=|��2P�q��lR��s����?B@|�w4�2��F�nJE��
m��!��-��*�6y�To6�?�?�<��}{�������O����}xV���b�2����/������u~�����N����N���L��R5��y�������5=�����o��n!�!���,���G�����?�f�������V$�����/�
�w1M���`9�ed8�l�X��{;�s�!5vU6������������#_����\��m������|����~%�d6�P�T�5F%���kx����b~6;��`��������%��FFeG���fG_����{������~xuo�I�+Q��qn(C�iA��l/9A���zz~5Gs�`���I��2:+�Kp��T����'�e#��y ��T���v���W/~;?�����=�}r�����O�>�����:�%2z�x��;G�q�R��	��-x$N���w�E/���;�N����#�m�r���w&�RF��;q�b�����������)�}�T��m��:7������N��W�8/���_<\�eP�S�3t�~�������ZH�}(8�����hRS�}*k�b�_����?<����������'������\�u��,m]����S>5}�)YIj�K��w��7ET!�]-jP���>>z�v�v���G��AVr����o%�i
����������IZ�X�M)�"��J"D�����p�wy�/��8�z������x��������u��Hq��"{+���P0Y��S�~�'hNP�v
��S)t�j�IZ"
FP�`r|{����p&�=yt�����f�q�?>�_?����=����
l��0�a�dN�Zz�j�����4����e������A�/�A=����%s���|{���z�>|<}����,v����s(���6��\��D����A�����]TS6����#*��;z`��F�&!�GH-XK\�j9
b;0i�v�"�;x{m+V,�#����\����/t�:)
�5G��[��}5{��|����������|�0��j���][2
��D|%k�������8l�yI�d~q0|�����	o��N�]��jk%����e+��������H����n;]m��l?}�qo��y�����Um���X�����:7��R3��&�l������/�oJ$Kt��*y���C�KX���7w?o���<���M}�H�f.J���vWm��F.��2{)�O���LKg�_%�tV�Vy���c�p�����������{���Cr�w����������O^>{�l��a��^�� ���R�������9BP[Tin�"�2�t���r��)�$�v�p)��@�~|�������\J�����������L��v��@��D��O�`k9�m�St�?^���j�����m��R�����w������3��~~������*����%���u0q���d���[����� `�t���?M�����1�7��tHbkQvj%/�C���];Y�0�h�������[����?��f/>���������*���������v�nz����@�$����]vZRWm,B���t�n��~Q(�(DW�)-1����~���w[�g��/_�m�����������W��	
�
J^����
!DR���
i�`��jzQ�YR��LmBjv3���6�|{�ol]�}1��W�����}x2���`��#}wc���2��O��:8��i���w�����R����X�,�+��(�<Jq������E�������^����TQ����~�������G��>\�+�������j����_��&}>	�3m��5�F������*E�'�|gxqT|R�'�?�K�7	��rXQ���sa�2�'�����N��_�������������op��}�y�9�~/�F�u�1r���L'����C*k�C:��f�K�B&�vv3������(0�kQ�5�/��?\��{:{�j�����?�������O���vg�JL��{��)�s.��r+-G�68%�%�F�0�f3���p�t]YW�������~~���:����
f��b�+ E;%s2�t�1���������TF������@ONQ&0������'k�~}�����o���l��G
���h��r]�+��g2C18)���L����pR�o��0�B 02�:�n�B���]�9�o����kEE���/��w��O�����-���E+����B�Z����j���N�R���2�B6]�:����T��'�����T|�����\MN^\L#_	�����[W������S��1����������.Q��a5RzyBj������=9��.���x�����)��������O�����C��pJ�s�o��������'��Y������lR:-��4V%��&�����v�9�����yx������������3q�{�$)�k��`Z��]�^��	����|�-
����p���#�\��!�aA�VS�����|���&����^o��{S����o�/�7������=�t�����x�,^��=��yF���D!��:����<���~v��������]��X�t4��l}4���g�G�?O��~y�:��5�Y�����=MRM�(�Y�/FXx%��Ft���[�<;�Y�a�t��'O��������������|B�^����r������^\�o��
Y��W�q�h5�� ���H�%������I6:��jU�P�o���<��fO������x�gx+����`;�2DD��k��,C2De�9p��y>��9;u{���t!T�kA�����k!8#�C��L��������'������GW�������`S�l4a��eY���p��c%e��N�O�f������_;���qq�4�l03�������u��!k�\<z�t�����}�v�x���u�>g�o�/�;^M�?�}s�L�mO���o]�(r�T!F��m��6��e�@�����k�m"�9�������`��n�*����������d�Zm�zrt�e������{���=�}x����D�K8�-3>�-K�'s����e$%�%����9��D:3�]���C������Go�>?Y{�?7}%�6���&�����}IJ��J���a,g*�z7T�vNM����_�H�������Tw9_�\\�zv�����o;/�?����]��`�O����q$���?�ejiq�8��IZ,*�}d{��G�W,�E�a�4W�H��Z�Gkm�G?������N?����������w�����uo����H��W�(
�N9������d)q8'��nD$%O'������T�&[i}�������/����w���k��
o�B�������hK��H�F�h�	�%l� �|��A����t�=��Op���a�o�����g_>�@/��?��������r6��9��d�����g(�Hw^����6�F����;;.n��"w��'T�����j��j���kVw�P��>�$�~�|J�x������|����u����-m��o���������R�(�u�^���/�����B�������?��Bl�����S�v������7�%{�����:�_��8�F)d��+�{����hdp��u�n������#�R��������W��O��=�u������g���]|e�"�)����O�/����$M� �������+5��d�����1�g>~�y�����@��j�I0������>I�����Z"x���;����2�4s�Hn��l��F���UtD�t�~�P�3�|�h�����2������������QF3_� �0�+�*�\:G�>�U��MZ�W<�
'@nJ����78M���
J4�J����:==���l��y�������w�w�K>{j��x�6�;���[z���.�J�lkW4S=����`:a5�)����3e$a]�G�^o|����x����X8���v��up������/���F�����:�)�@-5�
\S-
yK���2p���vW|vM=��"�;o������0��=������8��P���s���8�]fYX�:�,�P���Tf����U{a��i,�n���aL��"��7�����������_�\]��N�����Wb8�U�g6�?$������\�.[5	&���7��.'y���/j���+���Sk�>���o�J��`���W�W2������,�E+1��pA1��"{cJ;Kx��������mcHg�H���'��~~��f���G�k��m��`������^?�v�*\%�%�4g�Z�_Ld�0y��;����.]1CJ�����>����|������$�V?�l���*K>�C�|���d��|H�,9�kj���}w�:�dw���Cn�[
���LT&����7�Os#�W[�+�_t����_m7�'�uY�0_���M5���'�d&m������3J������(��Q��s,����'��'y�i��btA9�x~�v���gU�t��
���_��l����3�S�*B����$M�p~�-��3�#�wS]R��H9{��bw2-$v������5��=��sf�Kj�6��6�����]�)�\L3$�hiTt�4��F����m��\!(�E�#'�g���n�6�jW/"8j�?w����m�=�����.�P�Ta�����f�g6;B��{;!q�7�-Ha�qS�*J��z
����G�����S2�z���n�}t��j�e[����]5�+J�vac��H���A�K��7K�G���QND=��]~�D3��i6�?������w���}��8�3���vn�w$�Hk�r��T���R�yf��_R�����4LC)�i�[��wM3���tZ@�x����l�����-;
�my�q�������LW�#�������^��bH<�s�-Q������B]r@������������_^��_���������������D��_.k�������8��S���J�9<�^�X~r�}�6�w��v��E�z�-��Ztu���d�b�T�F+S8SR{�_�x.����k0���)��$z��*158w�xQ��a���v\�z������$��
$[�!�\t�|{K���Tzq��.�qmI�����R\���A �������am@��6���n�v�N�Q�iaW�Q�A���U,��)���
��
aR&����Q��6�V�y{/����O/k�o��������������������~���n.[��j���c�u�j`:���6�9���23i��p�5
�'�>�����/��u�t���n��5���1���8����&�Q�����)O�ys����L�L�yJK�	�g���48&S�����^�Ok[�����h4���Ik��H�3w�cl�Y�"IA��r��V&�"n/�NG���d��OSvZX�O[7O���zk�v�^{R����a��<����#���n'���kb�s�3�:��	���JGp6O<�p:���^������r����@������2�[�xV��M��L��+��Q����
�����x
EZ��������\��Fi=->��7�A��������R������^��Z�����S�b;��*�I�l�X�|Ord4��E�jS�L��oy�cVX�+|��I^z���RQ>���Sko������XZ-���F�;����a��������c�d�p���%i�JxIf1 ������4)���Y>��9����������h�^v��	[��x�-��G�z�$i���|�d�(��e�Gv�����"O���#j��c�R��s�i��bz��v���v������l�� ���z=jm
:��'/2��K���(+��J	�C�H�E��p�T
���2j�4����������'���N���{6��=�2�rf�R��'�C�'WL���,��%VV�L8
eG15�t�C����NK$x��;q{w���Q���c�Yj��T�5���Y��*6�X��~�HC��O����������w�?����>�P�0%~����z.��|I��������������G����y��}��;�Vgb�i����)R��vx����vWK6IT�ez��?q����>�2�b������������3��{0��[��Dh��3-�m�p1>Zw�cx�F|�w�#>�D���'{G)��&�X��3���nd����w�1	��4r�m���(T���}�������zk�G��we;����u����QP��:��������1���gD]����X�vPYj��/f`��x��� ��D��Y�9������x�{���[&��=��S2Z��
�E|�����
EJc$�M�xw1�Z
�N�eo
(Z4dW�@��C��w�hAdf��V~����^{x��v�7��d8�`nA���8��|�#7�%��
���x�F_r^1�	]���,�{r���:��y�L^z���o���vH?�X~)������K��,���
���M�`3��#��qe��[�,��v�8�Y�P��Dk��a�9^��OF��"��&AV*�e�V�b@�!��6�{��8$}� �����h���
nd�a0�^��]�Aa��A���;�|�|u���NwP�mV����c��O�?���#�o����������Q8�����g�I��T���D��r��r�p�� ��~��"����+bd�8i��s�����]p�C�i��5b�W����{��q���_�E�?�����	-b��
i�%�����6Uu-��;"�@�`�r&������p<2��9SCH6Q�
��T(uK�k��������n������a��S���|���}���"��M�+������e��O��s�<:�t��1vy�K����bm��;10��g#8Vju�3Xn���m^
�f`���4�����PF�#�pe"S�	L�����E)W\�k���T�eI/�H8��i��E�����	�B��#2���R�������"�h")7���Q���i9\�I���������b�����>������]b='�K�N�r�3������F���l��d���a����L_���cnV���ihZl4�[�>���F�dAZC��I���h���y�3���LB})f�*%3>� ;\q�|�8���%�5;���_����3MYxq� ��6z�\0�"d"�����*���uA���~P�Y/��N�����Z�/�e� ��r��1B�\�7C�2%0�XK���l����J��Dp6��N*�$`Z�Z��!3�b�"��"s�����F��q��^Xh\+D5c�Y2];��1�kCtm��������w����������0L�YH�x!���?����	Nt�q�)!"���<���������>�,oR�.gf�k?�����q�aQ=��3e;nR��i"Y��lI�v�	�G>{Y���O���J�o�&r;���Du��*��J��_bJ�c��Z��Dii� �4O�!��Y�@�])�Q0��J w,-�Z�,��r�1�JQ��,O ���Z��c&
�y�-�@�y�$$����75����B�<�`	��*K�a>1�jaA��I�`X����jc�XJD�6s�(��������~���On�1�I��M��H0YG&`=�!�\�
5I�V"n�S&1����zA=�Z�����Kt�"�����7l�����C���o��>n
����7i��p	>�"�,���J��s���U9n��}O�`���`N��_t���������������{%%� �2��7������w�?���`	i�2�([��G�?1�{����?��Qf 3�c9�1��P>���Y23���b���>�
xH`�NH�s�k��������^����n�������X)\z0��B3L��*
_{'��[�)�JL
RM��2�a����1�c�:�K"��K�k��S�3e���%\Z�P�&�����&�������r�������`h��Q��r�+Z�W���>�X��>�&����\�JS��4���?�>����`�P������9p�v9�4��F3XplN�����8X�2�esy1������5��k������bZ��`��**�T����O����W��mo	]��G�g���������:��\������������C�`�@���f%���������
!d���7������meZ���k�2�'��X�Z�%���t�����*���U6��
��;T����&���b�ry��)��������d�nQ����%
Y�Eg�oH�3��~/�`�h�l�e����&Q`1rd�a���k������ke��:��%�B��u��*�r�a��<�:��p�l1W����RU��%P����H�u�l�rR�����U+�+X��?�D���:_(lVB\^�M`�8�L��S��z�PH>�!A3tF�J��B����
?-z��������7\�H
gK $���ER��GRJ��pfj�x�m����������A���[.�����]f�\g'�?�h�����p"����2�oX�!x���q���lrYW�T�Y�O�P���������d��Q��f�z%�����
/q����3��-q����1��@��-)-�="F�����X���G�]n��?��!V���O����C�������xv(�#�����7���������!�(;]���?�v��wS��q����H�\���k���"8���&�,�`90�R������k���������exU�����5�T��V�	*�(�f����)�-��-/��8lWq}�,�qR%1����R�����&7�PC�G����C�8����z9
��'�-��s������uz=��o;��������c��8��;$WU�	�K[�>�����B�K�d���?qe�.eT�3�^f�:E�����.j��XA���j�vv�u����r���=�j���jY��lW,�c!�����/0Unj�����s�AHhT+AZp��X�SF4���\��J����t����`|�7����!<��N����
���b�����_��[;��m��u�+��� �:��������L���a�Z�i�[�_��j�����yR��lJ%���C0�����1�d��j}{}��B��6o�����dW����Pe�H�*��
Ue�����H�eQRp�cB|^^bWW7H��;���^~�����~���U�4��c��VV���EF������H�C91
�=��6R2#]��3.����2(��YP�4����+t� 
�Z����{e��.�����u���G{�V�7~�V��~���5gqnIT���q
�=[C�����\��Z�/j�����P���0=�-�k��u�u�0�o���#~t�m�����}��������[��@����X���B��d�g*�e��}n���`%5��E5'c&���������7��Z��!�;{�S�������������~��+��$"�Vx A�le����tV��p�"re�;�:�`�w^#D55L��GPb��m��z8I�V7nZ����e�|g���t,��
�������,}��Y	������*<~�a�}�}��K��;�T�����/j���~@m�j�2����������e���U��U�u���L6LlWZN�$�e���W+�DJ8��-�������T��D?W����0��=g�X{Y� �.j�2	u���}>me>�����I�|��;��p�msp{��<d�]�T��"��nH�nY�,bl����d���_
��Z�"c����&�N,��;,x���z|5X�Vk��ac�Q�]~]���x��|���d�r��h�V��&0�$���������.UN����m]2�����"\���m5������_��o~�R���@��G�\W����gZk)2���L&m�bx�����s��E���p������9t^��>�L�v�c�"N_�W��A��|�����u�G�����]��:��S���JV�<�B5)(���SuB3�=YL5K4������/*��MM�"A�����eR���G�����h����|^�k���;���z�d��5��V�,b&*{�hx�z[k����YS�EP�i��R��:;�����%��K��P�������v}���QA���������K�5q��M	Y�O.��!aA�wO�� z�x<>�4�}k��om�����u�[=;~{����������e�V}*�3��X|,��-�2S�u����l�#ID���e�{�� _h1oz�@����8;k�(�hB��z��?���T����$�q{��BVVx���pq~r�!��������PaM����/I0���S�wq�+��U�����ao�;������:����s��3���+[���F�2�Z�� ����B�mb�z�rD��k�;\2�p�DZ|����$:�����-�������������_O_�?����=����;f[����p�up�P�a2��r�v&f` ��e�k�Z>��������a��Fw��4����A��?�G���7�n���j���/1[�U����+�4��p�������sPy�J��]i&f+�m_�����b��y����7����g��G�Aq�V�,�l��,K[9��2�-����r�ATW��(���y|p��%3:�����g�I��B���*J$g��
sZ�we�b2����|�^t�����O�����s�����~�m�x	��a�9�������?�~�F+�����J
UDA3�v�Xf\�,��UT�R���PF�V	E5 C#��`�Es3��J�������+�cR?ar������cAti�|[�>4��OI8����/����K
C���[�67�O�����_������*/Vj�~��"����c4iil���������A����JXa�[������
�~���0�6�xcG�2K�����<!�}1-lVU"�Ei��n6��[����>.	��A?���qq�����C�4�#E/g��^�z�hIn�#�I��\�4��b��Q���z�x���%����Z�`����3-YlC,��8@�`W�������������zF��TPfP:rK������q����5�a]fBe�0���8������4�Cb�.���
aNp����x�+D[/Q�1,�=�ij,�`���A�������I�Q�<+VH�] �����s��4s���)5-��VsP;�&|E���.�)\�"��L�$"4��B�����J�R"P�1� ,A�P����PI����[�"�����R�ajr�Z*�e�����8
p�������1�M�{�5i���kx;�/�w���=�K�d?�4��G���������/�Nx����F��V�����J������4�r�`|���7&/�3�\-���!�&��D�OF����1���I��zY���~�
f��]t�q�,J	6]�,�������q��/����
��n���xp�.��;��Q����S����\�Bp�y�9���X�[�~�
�G)������{?j�*O�+���X~���]&��^�= M���p(��2d�WF!�3��,�v]T1��T���������� �f-�������������'�vL%1�=�P
e���9�@%T��qA���
\-!�4s���/*F�VkO�uQc����`c�!!�Ib��4�L���E�?�fk+��S��YlQ0R3L
��	�EQl���x&v��)����#b�w}�8��Y���cc���j�=j8�)��0A�
�B���RL�N0Y���b_�gk}�B�/����k8���,$"9z�����g�)��z��� :PbLU7O�nC�1f�����H����B��@������Q��^���=�
V�b+f���<��g-�Z�L�I�*��&�PA�:T�4TSd���z��G����
�UrB�[�M�)���VK]���B>ko�g���6�z��1{�o��h�K�����O��W�^�����|����A���t]������R��R%���.���j�t����/��r%C��9y�?B~��]����_��Sz	N9��h�ow4�,A+�Fj�iqs����������|���j1�����%��}���3�{�}|qp�����4����Rft�G����/F�;��b�S�����U���U���7���c��j�t�SE��� F��]`����1�C���j���;�[3�����������As�
L����I;�j�T~\nX��.E
~D>��`nC��>
p-%�yA7�n���]��j�����S��k�����W'�@�O�P%�B�S�=TG�5��*o	�)���h�)c��8�,����H�CZ�'7C�S��\?���i��*�3�������cP�$�u'	���J�YJn�
]��&7�IDP�`BJ���B�-���CSV-C��oO)i[H���6h"	�$���^Z6����l������,?g���h��e1A8�,����VB�-VE�XjU�z��n��Dk�[��[��)�����b[��_���z!��[�:�h�ZT�����&��&]��������0�4������+�j'-d,�����U���;���N�P�|����a8�x��"���c9\Y�j)%h���-1
K�
����2=*�|A�S[\�#�_�d�X�`��Qq?�������rYLA5�v\��N~��Q��K�7 �
%�*w�
,�YkW�G�������_|����l���
H�+2��Q�������~~_�Q@s�/�|���oWu���g�������M�����9��e��e���9�?�eV+"���v��\k�$��]�RDI,tfu��EH�������
eHi���bbx;c\�H&�:#^F8����0�@���`�� a-e�%!�<bD��tf �� Z�7>�����?��p�������������*��5��??�bc����}�r���g��]��}����I���)~�������5�1����:����9�q���y�����V�,�s��$I�!r��e���?_��q5�?��W	�����$E���,an�����?��� )��.d���In6(��~���\��|=0f��(n��a[��JQ����b�8VS_�����KV�GLr�#���_�������r�������O�������#���*��A��+}��������]�U:h��z��j���>�S[b|��\�8E���X�4��+�� �-.��!���&��F���BNrsf�>l4�����=���\�$|r?���������.�����S�����r���H����*����o�.�:~�}e#�/UT�9�3��/�M/�5�b��?��,���G'{�������f/��z8�v������e��'g`-�P�_�����s����f��f_`�w7��-��������Gf��,n�,���J��e�w|e�*���(aX�%7Qmj"h�E��k,�O��i�>�����A?:��'c}�V������Y=5!?�+���LQ(R�����������9�����CTG�tMD3��;.�����%�������`8d���YpBvQ��=��d(�N_� �����+��$''���z��[�+�(�t�B����th\e,k�������q�D)b�Jhq�B�Dh�c�Lw�rS�;����rd�%g^��	�������%����pVH���'��a������Z`����J,�f�5�B�f�����l����7J�f��vk��L�fE�$d�P)�k�5�k�:c�~Q/�@�Y����*�6�3�}���[������X���h���.4���T��/P`�s��_SL���S����N�KG)Y-_��z�+o���KV��9�m|JY����}IYQ��Q�dW��j0D|K`��@��T�z�X��8X
���)�3M������P>��R�S�r�_SRd�]dY-��1zy��p~ �{8?��������.K�j�Z���B/�ZeZ-���X	�k�S�g(�����k��v10�[P �M�hf��dKF��,e����k���.�� �9�����*����"�YX0/"������ Uf1r���63��IF�Y�'t,8�TU�0�}��H���c�@m����6�T�X�;����!4����M��,U����G;�����h�1�{k^lZ����|�S��~\D��w�������}v�{������5E�G�C��z��7��]���nln��~t��+�����hK�f�l���jm�U�7*�����]�q�_��q�����|��bY.tw��xL�U�AZ�/2�mai���_z����m�Xq$����=����_����z$�Z��~����j��s1�(I�8��w�([%,(�V�,��e����,2�?���*��i[Iy8
�d��Q�4t���^]
Attachment: with_gm.tar.gz (application/x-gzip)
�U��*��(���[��E�R�106��� ��4�A$o�Z�s�"!0�;Z�h�f������ep��/�����)5�g'$ v|��mpb��Z9�a
�=�T�u�8�s5P�5�D��@�T�t�s���q�]���nC<��Bbbn6&��I�tmZ�a��lPhT��11r.-�k�RH�Z���T2Q��dn^��kST���V�����1�n1E�J�>x7��=������q�������%0
kEQ��b(-���y�"�V�U��v�������{�=�<���5����d�����_��f�g����;#8����<o6��o��A����f������������������N�������+�[;d���R��8������yr����Yup	�\N#"0�Q'���r�+���,��E�Z^G�bz��u�}v����	�����z'H���������O��1���p��B��y���6�~���� �V$#5������y���@��y����U����s!t+M�-%��{ ���7��
#�W�:���Do!#��"	2#G���my�|����{����(���3�O��7�������?�,�`Q�����'�c�O�Ag�<��l7;�����H7O�o���*�W��ahL����i��a���>�4�
g9|����	8��Cwf�\u�i���f�.�a��H'�C��R=��hi��?#x!��4�����B+J��T��FTh�iy�!eO���z���6����6�igaUW�X�+eJ���w+�i`'O��A{W��fd,G���{�k�����j8��V1-HeYq���� �@j�c`��U����W���mz���b�0��s+U�N�T�!������������6������_�f�'�K����+��W\zn��z�c5���j-�^�b�J4V�����P>L~s�j��<�m��\������]��^��?��Z��M������|�_c�Xo�cyc��j���m!t������������]k{x������gU�Y��y��v�>���r�D�J���$6F��<�F~�}�)0�N4�\ul�
mF���[Q�_R+�e�����yE��wt\K�Y�xQW2�
���i�p*Y�J��l)��;�P�j�U�������U�g��U+2��7
4��BYiwCK(�b� ��Q417U�,2��Y���y.�U�=��bU���+]>h��hA��b�C�g����e<��dl��p��a�����l��
��0�>��X[3x��`5u��('���J�j�x{Y�9�7����,�Y�XD�a������=o��n�E�f�(���AiM�m�[2L�����T�_��(���������������njd:� �1)����C:)7l~9��������i��:��k|XG��������,�i����0�W��b�N�i&C�FPL�sV=�X�8z��������[�������Y����������
�������D��Rh"��WSV����.l[�l`�(��T#��m�d����8G�2W���
��xt���6�e���*�y%0��\�T�����7|�{�W��
Rz�3.e��1�s�4�W�
Rz`���:���|��JW��
R��*��k�W����M����[*
�H9dK9���+�T���(K\vI��t����}c�on��iu��o�_����4��tl���:�A�������������.PE��Ym5`��J)`�U&�XF��0%�
�x`� ��>���	Ry��A��X5Xd���X�BU
�$�SY^�����Z�x �[j��lQUR3���bm�&�2��ZkTa�-kH��15���Z���1,��n����[����
yH��P��V.{+��a=(%��nb-�dV�m�6����q���VT��8���:��O�!\����F�h^��g��eo�����$Dy
�bIU�V�~
5����*��6)�R����kx(����B�kk��<
5Mq.@��+�F��)��T�Y���Qr�+S'8�Bi
#0s��	��3!�^3|u���h�ikYee�6��i��W���p����7��x/^9%��/����Z��t�-������<]dTO�����00�0��J)9�&���t"D�eER���k�K���>��G�^9y�B�k����+��M��|��x�,9O*�XY��#DE�(����HRSN`�N�J�~�1�0�w�L���e�]�k���_�C�����z�z���avlQRr��C\3������<����8�������&���q5L[�vD-���� 
�&������O��[��@j���6�0+���J���Kj[����
&I����������.�as���K���@��\��
�J��$��i�7Gj[�TF`��H���H�F��Q-"n�h-��QD+1���T��;o�Kt2
������oE_l�%(�������o=�/�2��3R�,�k|$��'�J��Av[�A
�6��N���Nx��d_�h���B`-@N��0j$i0����	
���AP�]U5/��j�����[O����7���)�xU;wS���>.�.�Z~���[��u��o��������G����l(�[�GC�z����U8[�����'Q���MK9�8f�]O�r��RR�T��J��&�]�Gz= 
tn�1�L,jt5�qV3�,�C����v�y\P���J	��>E�J�7�:�bV���V�~����6��Z<hQD��J���?�R�[1g����o��A9H+�9>L��L����:��1I� ����yy�U|%��
)���S'�����]�V�:k�AcW�e�V��{�54j������GI�4vUZzq��S��h��`*�p���W�Y@cW#	TW�
1����Z���@ �kh���K����h�@�D�g.�Lk�p&&�E����m}}���0���,
���fZ��?'?p����/�f�^�4���0��U�����,��$�R�z��K��9�W%c|��%N(�,���9������o=����2HU����Q�	y���[Y�0�(Vz������
�I�]��mq����g�����b&&i�Q��y����eFC�����W-�a��d}�[�v���wK�b��vO`��n`�v#�{��8���
�����]�~��8=��8���8���LJ���
;�z�VX�V`A,-.������}����\-$,�\	]��W@���o��-	�]qU����w�}Cb��^7��&�����L2K/��^)	!k�1���0���R&�U���sX6��'�����LL����'�������iP<����m[8�PI��S%��I!����G��L���IZ/�^����^�+��1��Q���c��X� �	E*�Q_���Hp�cp��f,��V�K�����E��XG
�N�6E=�����dYTtf�;%!���Q�(GF�����xC����r�	i�����@������n?-�7x�qC�J�l\��F�����t��b�t�P��@V�r���>��Q�%���|H�N&~s��M���M����MU�*�(�f�)g%�GO����*����f�� �2�
e.�m��YBa��6�+,SJ�LM��&����^r�~�<���?/�s����������K
>����@U�X����3�(j_�ylJ����<a���������a�i��}��r�c��)�SN����o���?���7��d*N�L��Yz.~�=������0u_(0��^1e&>�3�"nK�#�R������J�$z�8>�	V��%z�B�8����\Hb���yrJ��������������y~�}����9��y���5�h-���������%�%����N�3�R��������l�q�L&��,~����_=>fmedi=c1=���}`-�����"��Ba�������9�n���/�%l�||��67��/���������
�d������y���4%~0��J���>,En8�q|���G,���_m��K������ED0�����,���0>�S����X[8��\/D�
���^�,�&�_��\*Ke�/*�?"]�?-���A_���",��N�M�����1�_�0/�a�`tk����=��G4�/���
�W�eZ(������<,�4X��XL��?����0��^���@'���j��x��������2�'��C��FH+�G�|vG���^/`oH%�b,=�V��d}�Z,�i���y�����G��&����o$"����y��II���p0�����P2�SZ��/��r3�9m�7@v�[�3n�b��2|�K�*�K�v�!Ae�	t�'��73e�~Q�ha�P	y�YF ]u��z��h�����/������>���u&r���<�R�>c�Ce�����l��t�&~�I�[�^�@)�������/�W`_[�cv���s�n4��Gkz����P@@���a����y��{x���%|�LK�q�o��V4c
��7�O�+*J�j
L
�-��[[��/�5���_XI�������K*%&����y�1��q']��e}�xOM���M�#�����H	f�~���A]����Ni=>�*@r���~@4��
<�����e�WOY�`V�@$�EzA�� ���X���^��<�+����"�a�e}�aI���x��Ok��l���������MT��������%l�x ��*�_���k%�x/�����	n������&��h�s�J���\/������~|-�� ofL�M�����E�G�/�
��-�7�����%���@S�hle��h����g`)��9����/��G���Z�,�Gm����i��>l]�fs�o:[/��0����9�4���ae8��K�<P�Y���Xv�y��s<����X���DT���/����>C�e���L��.�l��;�CYx^��I'L��j~�O��k���2a��}%`�J�����hO��:���A�������Oi��J`�]�_@u������Y����K��,���Q��<�$�����+�����E�6�\	���~��m��F�F�z�?$h� ?di=S�}�W�>�����d���p|���[�L��Y_�� H�x}W���`O�����a�Oc��a�i���y��J����y����T����M��y������
VY���>�h�g�+T�����q%D�yD2t�yT��s���q���8����6���7a��}�`|���zC�FDk|�[�><w_�o�l���LX�]�)��R�0��xK&�W�J������6���+�V�s���������(���������%����������|3T�w���&��*�����y�J�e�F�d��X�L���x �C�~��t�Eb���D%<�^	����-�s��"�������y����?��_�����Q�a)��$��f�S�������`��eL�1&u���Z���G-���^�E��^�t��A�� ?u��$���<�%m��A��=����A7�9{Lt�J��.���e}a��-�i=���#���>k�S�G�}�v��<�g�=��_�_���x_��(dZf���� T���-
z�L���[F��&��g�w6��.�s%`�����Z���"�^r��kP��?��o"���QY�9G������U.a7V�Y���	N���K}���*=/��8���o���X�"����a��A�����+�(+�J�t J�e}�e;Lm4�v"�|��A�F������������O��S�-�=6��`SY�ez^�ob������Fd�7e}0]����'y��V�<[�����i���>��#�8�����S���~�^�xP�1���z�������I�A�7��>(�2������c=�$L*���>|EkZ��&��LZ�My�� ��p�1i��\!�������g��U���x��o9��&"�"^�����Qq|��75�������3h���>�n����/�E�\����An�o4��������/��Q�����4~
�����k!-��uYC�O�(����
oC���9����	24:W���Y��J����}�{�Y���i���`LJ�+0�+��8�����F��`��(��J�6�J�_s������&&9mb�a-�i��G������1��b�\�k"�����D.)���"�#��7���/�e�|��V���X����"��Rt��0=,��R��6I��`c��mq��5G��zR�W��M/���Rc���Oi�gK����`5�=*;/��������\4��*�c�c�R�z�d����]W�vVJ�"a���7VT[X��4#1g0���7���,��=������J�s���!���5F	65���]ts����A��;�V�����)w��;��\�Qacz/CX�X	g&�O=w
���J0a��Q*[i�����ea�_���l��k�t��3|�
Ac�W�:�s0��*B����JM�n����}�P>y���Jux	�����K�[�0��V��|m4|����n��Z�b��Y� �(�vK�z���	aO	0$b�Q��*�Q�����RF�MS�[��0+��0�vl���a�J�A�q���s�a��;�!�<��_�s�`&�gG���>� �u��yHD*�����@]<����\Q�'�A�9$,~P�%)<�"���,���G��_�|�d��������^~����mv�o��c��o��d�{���7�����!�o����^^���U����o�y�4)���������M)�,�V�rCrKe*S�6��"=laf0mR��I���l�n�L��<d�2?�l)m�b��~:{��AJ[.3H��u�;��?�k��?����~�������[�l����?��B~t����10!x�����������Y���k})�:��iI�7 e�t�Z����%��E;���
�����D�ekv@�2�c5�
<`���%
�J���5A�A�N8��	k)�b��Q�d�!��"l��v�]�70�L�C%|��E�Xa�^?rW�O�P��F���e_&E������t�i��:��Z��X7�T)�T)�TOZ���e=)��	����z�I���*&��I#��D�7�T1��������P9�$����P)S����	��q�X*���P�L�]#[�v';�C��E4���������Y?Z����
t�c�i������@��c��M]������,�����D���5$`\�nzP��)��x!����B-d�-��Y'�y4�����!�?Z'st������^����#�N���5g��&�o�B�c�6�o,��`����8w����qm0@���&6�� n�o��T��1��	�h5Z��)^m��z��
�����P�����kY����he�
*���nM�����!l����7�ww/v��#��s�?3��*u���E��N�y��|�?IP�1
�bm9��*���97��sL�JbA���N��h��~��_4�4�p1c�
�!�D5���-LEZ��'��h�ZZ�m��4���x��KZ����,;U���h��:Pi�|O/\�)��Y�oY��VB������j������=eD���(q�������[4d�����'C"�:qL���PB���b=�8�������1���9u�)���`i�KG�L����E��M�^���l�S��'�#�t����E�u�+kK	���*og�w��o�~��w'�� -��\m���6�/������_����bzK�-N���������������o��/*��28���"�n?
sc#���V\)��z�������5���"l4Up�0���3�0S�0�1��a�*���hL�)��X�"�����`��`Y���"�i���UP�T�9j���h�C�&=Zh1ft��JZVf}k�7�sH�����!o��FP`f�C0�fKz5�U'�3}���N+Na������`����&�����.�_v�k&�Q��/�QX��X%Q�^��fk��#�nc}(U��,�/]b���e��hk��O�T���T+ho��������*���4@���K�0DXtP�����u3
o�M;���w6;?o}�����Y/�s�����}<�hf7^|�UV������lTM�=���0=�?�z����o���������9P�+���{VT�p7�����i	��
�����YW�����*�;������O�P`��@��g��5�_k
���Ew���x����$w��������=��{|y�T@�S��n���Q�y^]���u����UT������wi��6k������|�?,)������?��������9�=�?����S�?fk��4�Y�+"@��}���L%�����U���d�5���c��6�_���Cicy-.�b��J�zP�����GR������E/��e��!D���W����-��k*�@�0\T��i-���+1f�wSv��1�r�u�7
��;c5��V���5�(��l�_y����`sk:�%�3�t�F����v�h_��s�w�*L�aY�Si+�@��97}%$�]#��������M�����2"���ckb���6�
�G����:�i)�&f���D��g��j�G��!
|�z0�1c�%�%�3��DY���4���g�����-&b��'V��x�+N1&ob�eg������!g�}��{��n� 21p�}[t��T��������$\�q���e�*#�Y�`��l��v����m���p;�������.�`p����_�}k>��������o��];��[����s]��������Ux��	�ND�'��7"�u��(���"+��l�2R]:��s��0�g���!���������^�4�O����S���2������(%-A<~Z2	rTG?�6��di�#����x�k�� *)4�J����X#{M�<K�^`S������ �Ls�a��A��`�*���v\2@���A��p��Bv�X���$�x��9VJp�`����0��M�r�	K�yn<�0Si�A9%Yx��f	&���Uy@����m%����T%�����-N�l�v�,���L�a�Sp����ATZ@�x	�xZ�����|�/f�r�����[��V��.�}�\�t��N�D���K}+�R�cB	#�N�Q�_���x��R��Q(b-Jn�V9}�H�y�,]'���a�)^�	����#��: xT�l��1�����:����:���hSa��>mQ�5ufUiuuQ�)d�� �uV��x�;��h�+���%j�IN6�21�p���DV�Z�����	��|��VM���$�"���7,�sl�$��\)�|R�Q���E]�gj'��P�=qA�$m�]�A���x��<����S����A
7�|�Z���EW�����M����V0X��9$o�Z�s�T(����]�u��*Do����]ZG�Px��[c	�(��x�y'S&����J�A4��������62,�$����$�oW$|��4����o�s7}�2��	�e�2]��n�<�����J���eZrk���uC����fX���1��LQ��k�R��O�JX�b�\Z�J������wyOfq��J��)�5��)�b�Q������q�L�r�)[���t�]J�g|���|�=O��R�C���O`	I�+���xXJ%Mr�"b-O��=��:�>;��_��s�0c��J��}O��1���n�4V���?|��g��+i���o�����K�?�J��Z;/3Vr�/����^�����1�E��3���X�%����15�����yw�z���je+����cC�'v%a�^>K�����i�k�2_r����w�5�V���A��.�u�$[8�4����;[s��w2��2��������f�[��w�����1����(
���<�+����	[����B%��xZ�������eg���0$��q��������&���2X.�����s��)},#3��&����o�x^=`��mh`��0�M�����@���x��9��*�i�`V|����Tx R�>�hL��� w���{��
r�����L������v6���Q���5�k����)��X,���>��.���5��1��bRk�\���^�\������������Y�C�G-�<z/��K��j*.�i�@1Q,�~����'��H0���t��/�NY�f�\P�RP�Z@���B*��2��DJ0b�+M��J1�t������"
�8�s���o�(�3�jA���j��6.Q�U1cD�:wb�i���EH�*E6�pT[<�Z]���SI����+AI��b%�<a%���f�!c#f��2n�F�o�Z"k�2.|6(���n8@�q<��K��X� ����Q��:p����V�^�n��o�9���@���P�py0b,,S)�0�]OZc}����77�e(�[djF��L����Y,W��.z�$1��r�9�,���c�g��.�r��2D1�U��s�(%rd(mP�TfVW�0@$���S��
Vdu�(��:�+Mi���~�bi*�7��VZ�&�^��X��*^�k-�m�5�0RPZ�7�	YI����
[���[�k
l�-G�����V��aTN�$��'	Q��$�
`�
��F������5�v��,��������
^II�d�8w
�@�(�\�RX��k@�
A���,5D6����s2
�Q=WP��ac��4~{jb��k�p?� BH�!�����d*c��p�� �
��0�+���($��Z��I�����Fs��-��R�)F)����Q�����-!�a���f����H�bT�p��������z�aQ*�r�O��]�y�������Q��fY�	F4gKTM��
+���\����4��m�����Vf�RU:
y��!F��C��������
A�e
Cu�F��eC��t� �@��b�Z��	�1n^���<49(e��8#��~��K@$�Y�4FW0���o���@��]@�z���avlQ'7�C
P&UtdN^�o��~�����LZ,@D@.��F�a�/�J��=��+P�%�J��i��K]	B���a��2b`+��^-��?���V�'�*��#��AZ�R.h��@y��m�R<5�;+U���`|�X��p����m=SS������H��.7��H0�7���?�e�r�Y��x|�
���d���Mh��;x�����MP��5&�wE|�zp_hLb������\e�M3��+A��m ,7��d�X3i$��C��H��my �z.������?:�W"d�e��PO	d��I��'�j$�o=�$���]�>�C���[ $�(k��5��,9����[�F��mBn�;��'#��k��Z�[�uY\�0��=���o/H�	�f,��b
��V$&��"��	k\unX�����Z�,'�����G�!B�g�Z@_�9"��"���oA_	�L��@��/�����ba�i��S?V���� S��`�t���W�VL��>�[r~P�G�����}�ms-���Z,dT�P��5��[���`�
��P��T
�P�g�)��J�6���brG�����I���YJ�z
��AdW��BU��I��!�0R[qR[�F���]�Ve\�������`)���i�/x�sp�5��}�s��������tt��m���[_��q%e�/E4�L�9��SL�}|y4
{�9�-~&,�E�C����-���3��9�
�0>Y�e$j�L����?L!�^�Z�0�u���+�7��������6���]����,�F8�2HVqs�ky|���=�/<0��/������Z�0�U��/I�\�	6��0Jw\!��\Q�W���n'��{����!XZo�$r��ro7��wJi��h�����;�����d@�
�����	����)���,�]�l�Zq�[8���6���<T�v�����!<�_E	�����(������Z$�#%�����M	F�z�ah#F�����}u1�v�)J3Q�W/�t������!���^�1�f�����t��p��yqb��>��
V�BR&I��O�0�����EFl��Q\���Q"o�����s��Hk�Z(q]]!�����1�0[b�������d2Fu�	���� E���o�������5���jL��9��!�d-[���C��1�c��P����<�j�}�0�6�F
��t��=���1���Y��q�ku�c]-�a2�H5j���<�iv��i�0n\U3�&�|st�P���TT��r�S��������j��u:��;��m��6������U�*�,���)�'�GO����*����f�� �z0<�Q�}����/�L)�����4����)���S�|��9�����RJS����������������yDj����>�.BK���)�SX.�c1+�?O7��0��8�������!Sb����1
:���a��	v�f}�L�	��^"K�#B���<��?L�*���}��B�D��2���b]x�o���}4�,#$�����yL�YV	Y�(���8����$Vj��'��L���<�%�����Qf�~�}�8���9��y�����^*���������%�%����N�3���Z�<�!�)��LR���l���������L*#K��(���W���`~������eJ+�c�������@o���fP��/�����H��z`��=Q����:m������yD��_��r�W�a)r����~�4>b9��������q�����`����TX/fJt�b�b�����p����%z�H���^��%�[k��Y�M�����T��_T6D"�?-����}&?��/,	EX���R��'(�7cK��)d�����#$�C�<������+�����D�����2-�M�GLaz��-�k��=�?����0��^����5��SM~��cO��� Y��X���`�����FH+�G�|v����^/`oH��,/=c�FM���
?����h�<�jz[��JI�G��7���KE�<�%�������0����P2�SZ��/F
������ ���-��D1]���%S��%�#`�� ���:�����2^?��R��^�0���#���l���{�R�K_@I����}F)w��3���<
��rH��P��0u�9H����M���E���� ���g"d>��^\�a|my���
���s�n4��Gkz�����) Hx^���C�����y�6+]�g�@�p���[�������7�c���-��5V^������u��/�5���_XI�s��|��X$�Sb������q�-��N��9�����D/�?��G�1a������ �K��&����z|�U`R���~@4��
<����)3�z��~���!/�2W����<!�N������[N��<�Hd�oY~X�z�"������y����'{��K,~HT�������fJ&�
���a�Z,������#x��6�-<	l"��>�X�1��t��~��"������������b�q�yB��������a��%��1����^R��u��?������&{����R�s����O�L~��Mc��`��K��_���[��������j�$����h�5���� �y��|���e�U�h�egX���>�41�z��y��J����Ay_�g`h����?��7�E�
�yg|�(���a��VK�W5?��'q����+EM�oy+�j�|x|��
�nt��[���D� �K�e�����4�_%�;�K����E�#��z�h1$��l=K�e�G�-�8��7����j���,3|��XHOok�an����X_/���C���C��3�G{5�������_V�3�<�*��V2�o��*�^����4��a|��o����d}��c�i����	F*]\�V��M��S�;�_4���qL������`�E���3�F}��B+=oWB�G$C�\�Gu>gk�w�W�����k���~�o�w �g�?��0Dm�G�����0 ���p������/>�����7����)��CL��dAux��_�������a��j|��C�0���������Q�9U����?V��Kx�����Pl�I���
����
��z|�`�v�>��l�������������}��o��N��O��t�����x���9�X�X�QB����������|����g�/���K,$���4���ua���_-{�(c��1��lfe��My`�*/�X�:����
�����$����� /i��
2<����0������`�+U��PtD}-�P�5g6�g���`�I��>k�S�G�}�v��<�g�=��_�_���x_��(dZf���� T���-�G'�����Q��I�������+�K�\	�ea�d}<��Y��Kn�q
�z����M�0*�3���������%�>w���?Lp�U_���`�T�y�~���}�����q��}{
���T�_qn�f�*�_��(	���9W`��G_`'r����Di��O�>�1��j{�a/��*���<�R�c��6��\����&��a�:x��M��)���j-��>�3<�����%}
]������O-��4��y���T{�����&��}.%2O���4>��4�K�x/0�W��Yb�ai�d�?���}����u��5��g��c	(^"��:�  �A�KA����B6�����_O�{&�[u�k���)l"����?��7*�/��a=G{��ck���mI�y7�������}.��e� ��7���}���@c��X�(���yY�??����y���������?��V�SaB�X���r�?%����U����~����[V�l���h�0#W��zk)���pq���4��U����,5���Li�:Q��:h�;�q;g����h}!t2-���B��Go��^�lnM��1��:��(n���������?9n2��fz"��
�p\�Q�H3$A�)8T�Xs�?���Z����*�D_81Y�Kb�g�%��{2�����Xw%1[?k
]��NM���O[����+-����k���w�a�(���{/XK`+�����J�c6`b&e�
�i�U5o�hM��� _�W���[����uP*��d>�X@D\��*�WE3��`n@'�,\���q������'e�R�C����oWR�s�UR���%9a�(�j����#���2�5fp�0-UeS&G��H���\�x�];W�t|[Q�0(U����U�Q��H������CJL��L���sOy��Xv	Ly>0Sy�)��u��A~����6�.�!�+���qf5*M=x,^��
���Q��m�<��Ys���l�d~"��lP���c4c�J�1�w��K�2���V�f!i�o���+�[x.��������po�]�&,�:u��&u����YV[��e�~�S2����I�)�#5e|�X���S�r���z�K1Ra�E0���[% �:�m~M��P�=,��H�h��#�$�:8�"��Z�jU�����yK�u��C�t\<B��E��ii���c5b�IR,q�vn$�C���� ��x�BF��X�C����MG��! 6X�	��W�E�V�_qvL<����^P��\�'��.�i2S	�u%���!6��*������->����o�)��3*��kU��R�C-������J�-&*����JkB�*���������]bs���lK���)�x2�3�J�b��������`�)iF���������2�PU �'
�X�^8��g���4z�mJ�a`$��)<,�*(����y/4����~���+�b�5�T����������~�(���^Uu��R��1������Kb�r�n���o=�-�j�6�	&��k�1�v������A`����x%�]���1�mh� �����
�)����Z��O ";�@��E|wuP8��V�����(/�P���Q�`kW��D�*�6��m���a�0tF���5#@�o�"	�7���m�6�oEF�;:*`��f�]#�D*#�j%)z��L�n�8[a!^������b���bk��u�G����e�Q�%�x�������z��O,���X�����
�j%b��#�Y��g�����,��k�����L�X�
~��g�g�Y'3T�)f��|�Y��U�����Xxn���
 ��Y�����Y���h�z��M*�����;�=���v�K>&y%bn|PA�v,8��J���Tn����>{j����g?�����.0x
R�r�[�[�W�!�[{��aTX3z�?�O}(�W*zM�j>�
�@14���j��d
Z����+#�Ac�6�mw^S����r��c����0��	#_4��E2b��K�:������b�/X�\
<]���]i���2���b��
�;�j]%���`(��+1�\���	�������`����ZP�S��X�T���hR"�ng`�xIcld�q��|����+E:�Y�D������S��WWv�P��U��3��E7BkAU��;������xl���&�-��`�E��W�XT�#�V�qiE��qpq�}:�;���[����������!EL���h���]	���l���!:>�uK�N�	�q��K02������[�������>-C�?B~t���p�/����O./��N���������n_�%P��jb����V�������k����:����+�jt"��#�����Rp��Jx���
���4v��H$���V���DP��"W@bG��DbK�)�������O��;"m$[RLRI�i�yM��`c[���3���jM���BSd3�m�z�PE��!��~e��X��������.U|+��l��#��b�zY�
�T���U�[q?�t21*�mTO��*t~2�ec���0�n:x��p,A5F�:�?}��3��6.��X` P�m�z����ygp���L�'b���L������4��4���6T�W�����*P�,F���+IP�&�i�$�u�;�[%��N��a��F��4a���u��Tk�X]�Co��c.�9}�vu�u�k�+e+�Jg�����Z��"��:y��`�"9���sP�Z�Dq�������o��v���h�X��\��D�v���&W8�����4�Qu���OcO���6�FoB�+M��T��)�������#����b�$�Xt��x!����g��n���^:Y6�9:b�L�l+���Q�s�"��r���5��D���<���� �����,������-���r$���I���*D��!��m�=�[�FB�d�uY�?����n|�.�����)��q�-iu�d$���J*F�T12/�[�z���O�������vH�8����[��L�e���;����A-�9+}��b��t|��-����]���5�>1��+���v�4\Ii*f��w;�Z������w�8��M���?��>������wAI�k�M��b�V���)�g����+��������������..��j�sr�7�����\���&�������������.�MK������I���	n���\�Y��H�L��4��`k�T�x�������G7o�i����/;������\~<��e�B��v����N>�9;�1��	�������r���r�2���������p6�}��PM.�/�/�pp�>�����|�~{>��������������K��&��_�������1n�����zFZ�@����e��=��P��������e��6����������n����Ss!�*_/��l��rv���$o#}�/�R���C��D�=�a���&�N��8���i���M����CM�����u����g���P�s/�<�yw����q�k��h��������_���2>������2<�O�A9�%Z3r{��h���@�S�:�%�Zt��&^�N����������{/n��<q����-:��?4�����������?�w�����/��>���^K��`v��S��Tj6�e��Y�<;8�<G>��v1����X���������y�]���	�M��
�U���Xp�U3�7�;o��|������c����d�����)��Ll<l�4��_�q���m�g��lG����
h\��AFZ��`7(�75W\��XR�S�}Q�'�������7�/��-����Jz�~�{�n�F��(�S�{K"����	cNO&;���s����%���a1��,VZR@p�	z[~�%��|��xi;�r�9������;��~����fo����~	�V�p�~����!�-�����<��>/�`3����q�M���#M�g)����x�����7r��C���������O�~~����l���Q�I����b�Yx'G��Y
2�����z6�0���p_���MM-S�b��zFU����g���-��?��9�e�������^?��R����%�?��r�I��dN
pQZWX9AxmDw��������;9��d���W��D��.������:�����W�������?,�u]~��d�����~��}��)3J]�����V�H5�b�2�rP����ocbUC�SY�NO��'^C��W�k,��&0@&	:�+)�H2?��o�z����-~y��~{���;����	���V�)��T
W�^�����@�zw���Q��5xf�����m�uD�[�N�h� $��	��cL��b���0����[_on��O�=:�~�z������~}�����hTh��n�U5���p&�9}�9�
Ty�����ox�
�N�&CI�yd�|��{����;_�����F���������O����u��6���z���B	c���-;��/.�$�q^�}�9t{	�7���U���k$X����������L�3z��!���k~����7>����+�z��S����%v�����&���pc���dy}*�I��Z�D�uD3it������U
�cS�1�K������>��������'�.����o=�=y1��t����[�����r�4��1s��L�%xL��iS�0n�*mzS,uZa�'7fe9��H��~���x�����[�����������{��>|��S~1e�m5Z4�iS�S^m�)h*�{�����6O�7a�f*wq,TP�����w�=~R�V�M��O��z����z�����7����y����G�D�@�L�>�H�$n�=:�8�	v~r�i�]��>��k^$��
�hk%�M��$�����&��r���?�s���{��D�6!ol����m?{���:����Ku}g&��7�A#����8h}y�0�&�����h��#��3�r�d./_���M�UXu0LnXQ�f�����������������~�������+��<�<;|q���O�i@C�������\n�����2�&Z�r-8�F+�2���/��/|���_��r%U����b��!���V�����d�o�[���Z�^r9C�����*���1�W����hL��n��~�J)����Bl�+L�sqy�����������W�B�j$�H���P�n�*�����;���b�Kx�������[�Q���'o��C����u��y���Q��9���j�����2c�,1��!lmq5����X
e9��o����I�����h[���D>�s�U6(�Ei�`�'�~����[7W{{@?����+����M��Wq�,�{3w{�%*Z�����iV�X~�7�?����.�@�O�T`!��y�s�������$�NGo�p��P���e�8C�������_�)�g��D���R.�R��?W���J}��"AZ��K�^��r��������Ard&I��1�����C�f��l��-��Y��o�)i{�d�}�t$p{1����e��5�����������}�r�\V���I��d����kp$c����o�Z�
�|~����2~f������2f��j��	\#&��(1�x�]s�a����
���	�d7N�ex�R���X��0_"bu�����vf�<�!���,�*<p�/��{�kmS;�&���K\H)�b@J������0[�neI�2/�z�p���U\�Mc���y��h
�r���3����\7��q78I+���	��Z�7��+����	]��m�R
m���s��.K+%����� �q���.7��6�����<D��_/��vw��g���G�+�@|��a��1��~����8��e��{���G����I��C�;����G��aQ����%��
mm��/��Q�o���/�{���6�Q��������?[���a?*w�%�eZ)��`����@N&+I���N`�Y�gj�0�(�	������R�_�KB:q��g��G����5��l�`(���������1����I:�!Z����<���6WoWuJ��q��:���-X�������y2��lN��MG�6��L�_������T�;�.Y�y�Au��x �V6��S�c��b0�����{k��R`%����[?�V�jGc�k��,*���r����o8��@_�=���9:h���E�+��W,�^��if4�PV�����4����L������2r����@��h<�jn����>=,���dI�1
�$���;o{FNM���
�W���v�fC��0����2h��seA��Q���Z��q}���7�(d��7q5��~�t��S2�<��w>2�fc�R��X���2V���tgs�/q��-qi5���Ud>E�gP�jr�X�����2|�����7T�����P%�nL���7��e���
p�n���yQ6h���\����`�,�R$.�����ExZK��gHhu�Mm��bu,�I��|I��&���������a"h�0�����`�,���-�<U1kf�@�w�kx3d�,�~Y���)=���}p���������G@��������U�����z]��Ya��`F�%�R�����*1X���E_t[!�UKcR,y��(�����|+��������<�nu���wk�z�/|����0���F,X���E������@zD�V���
��
X��tu��bc��e��o��j>��$,j^����m@�J���-L,w	+�2�7���vgw���mhH���/5�k%&�X�Yk�|���;�����'unj�r�tu��*�~o<�����[f&�����,T*A^tM���'?���������_�o�4�>M�[�V+�8��0b���B�[Z�hK!���O�'\�XK�1�4�Y��d`��d ��a*�y�e���b�F�������.�,�t�~�������)�\
N���%��<�3C���gR�������ek�x������x2����Nv�!�����~E����y��Q�l�4�z�
����Q^�Mk��_2�����
�I�Jm�(b�����I����/������J���B����zR(����5��f����o��m�U����������E��'�� �K�<������8�+�������8�9?��?&s}�4{��(��Y��.k�2j����
&���U�KR)v�� HR��U�s0s��������V1��7�nz��f��U��)������h��5�X�&�O�r�F1'���EKd�1l�V���C�n5������Ti�'�Xp��S�s(Lt�
I�*���c�>�\���0�9	g�T0o�,
z������D���:J�AJK9iP*��
�c�����[k-0����)*Q���
��L�r#m����3�������8k�W��aYkhD$
�����TF�r���h��Pb�J��[|9`k�)���l�������*t��X�	���5s��,l�z�/^���_��Q��S6v�+����U(�(A�cL�o�V/3,O���D�����.�
�d��>r�7V�w��AuYl`5`.�����5���7%<T�E�����M`j1�yM7��7-H|[�B����A�&^� '0�9!���4���������ty(����f��f��Q�k���rN��j�����3�5V=�����M�z�Hj�xja^@�DW����a��I��8��G�A#�U�2E���T*t7���G�2#��=va���%���yt�AM�u����U,�4����.�:��J�|A���r��y�������y����M�z�7$}���������[�[������������y��������o
��?��?t�;&s���*����H0�����������-������q(�v�2�5�*�E�F����1�������/8���u�����.D>��;���e��5�>z��0J����x����W-�
K�*fkK�*Ay���z�)�t���������T����b5�[�����3���A��4Yt/j5F�V�V�Q&+��\.�9���7�8�CW�`b��enC}/������4�
�����.�x���=��AmB�f��Pm�
�](h��P���	�A��u���i��P�`�V�Ik��+t��>+����a�����1�iY4��_#������^zu�������������(���g3x�0�I���S��e?��3��������H��zA,.���^��Z0{���)�)�>�q/���8�����&�MK�b����{�*���������H�.�^
BV���a�]1`��T�����0��h���;�fQ.�s�(���/�wQZJRau�%x����os�`,���c!�c�����:�WK��s�^����bl�z!����`�>Z��Z��z�����9�V�����EE-]�/|R�e
����W��#+"W��0sJ�K���oM�{qd��Wg���gc�n�a�G@�V�����	�+�x��7�I+�D�v����Nf�zq~������k&�\)�9q��T��}���g�n^v�������
����������0nS]`��.��2t������`B)1�r���
����Z1\~����H)h'�����M��Y#���r���	6V`�[si33Y�_���r��+^o�"�������f�x������}?���6yy���r�<5Z�����b�����u�]9}[q�������������x�CN����M&������#�����Ug����`�{MMayGJ~����
(�B �2q��
�.S�E#��6�S�5�KYd���q���&#z�\�)���,�����Zs�b��1sOw�������$�����.g��g��#x{�~4��� �ov���T��U��<�B�&i�l?�!�5a�9sC�����3�.����[4J9����<b���^(Di�	s3��-=���c�V��Z�6������e�,�%g��G5���^�S6-��4���:[f��vupM%p�?%��c�����ce�����=q�2�e%���[��e.�g��)i�u��J��td)��������b����Z2N{�r��2C��K��4�jEh��|�;k������&�i��J�3�D1���3>o�;fN����]S����_Vt�t��b�F���'�9J�p�c�^���$-����<y�J����<�	YK�%�Zv�����Gc�u�xaK��<�V�������[�<���G��;��)Q�YD�Z��K�1%b�D!�3���T����bqa��~#Fi�1jYT��������-�}��������>�Y���������b���m�g,��_^�����P�����W�hV3l^�wh�Ai�?�>�|�xs�����ztp�~|����}����U��lr���SE��Y��{�ij&��=�wyvpry���?��A������#g���\�����x����l��q�G�Hd���6�+�%������ �5?wz���Wq�v����O�o|(S����O-����X��l����\�V�����������
R������z��G�^��5y�������T_m*d�s'?�>�w><=}��3,�W�
������O���������?����}�`��`���J�M"��f/A�2�KO���p6;���
���xn��LD^_����$\���c�a��d�L3��S����9�<8}s)�|9�v�����=�;�n4�>
S���Z)�������l�������}����6����y	����6%�����$RZ�`I��=�:�cN>m���y�����o�����~y���;�,���%i�*-��3�/�+@������]x�D�;?�h��������w����Y���O�������`�7j:n<������� WA{.D�s�N?���O<�������� �'�\�)d*��r�q��bu��5������mFZ �c��aV��_�����G/�=�����l������O��?�W�����x~���m{cW���g{w�e����j
�?�����r�����#1�����G�_�Lvf��
n���/��&�/��eX�g�l>��\�z)��DN�[������o1y���������������OO/~�������������d>���N�(PO�|��"�E���?���~���h{7�H*��x_<�����|�_���/�����>xrXz\��M�D��S�]������z��
}�����j]�m���"S8���[+��{O�����d�mm�V�)3l��+�Vsg�d,
D���]4����@�hg�Z��p����lG�������|zp_���pr���Gw�>�����T�|8�;GV�y��<R�q����L������"�#�K�����7b��6�#[�	ju&dpKi�-X�����'wv���wo?{}������m=}�����{��[�2VL-SS�
>F��dM������Mn��
@ �ai\����E�a\��`Nx:��Y ��?�����}Kyd�E����w���[;��>(?+}���VV$������wvH����9�%2���n,,����sR�t���\'7�=zy������y$_n�l��x�����X�����7{����t��HP7p��a��y�wLn���&2XG��H��H/��2Z;J��y��yu�u�#�~���_>��y�~���� �
%��9t�P����������T7w�=9��Z����
�_��U��<�,�����hl��j��G���WO��������:=}v������<�����M�}�Q-A��S�!�6����&���h�8�i(�n���E9HvX����wH9���nPnc�qg")ed�~'n�=������o���#����I�����rc�g�Mlov1�����,<s�2>.^3�p*�������0u����������������u��V�j:�Ok.���������~���x�����Ce:O��.�p��B����tZP����dUA{����2-�`�0����AkG�x������W7>�O=k���)�+�L�6��,��������4�X�C�@Q�A%�����]��Y�O�:;���������������9�G����O�7.��GA=��|
������3	Z��]������Rx�2K��`ar_�i�8z�w ���{��������u��������������Q`#�`|�r�$a[A�FjkA�4Dz�n�rS��i��eP�d�g��"�d���8y8y~K�P>=)=�f��]�40g�F�.Z��0\�O ��&(
r��D
����x����Q�������K�$:��U�8[���K���X����w�bK��2���h��0�K]���]'���c(��E�Q���>��{�����[�vw?|�}yq���3�P
�TO;�S����a�4�%��Qi����� $��q�wbmJ�i��)�ye[k�0���%�6&����3a6���}��������f�&���d��������D~D��+���|��t�"Y�;�����*�0)j�N�����W��O?i~����H$V3��W�9v#���p��sw_����������!k�,�"�@��l���,!v��tT����z�����g��=r���9�����;�~>�������_
{��y=
���
x�,/fwN0	E&l���E
e���YaK�^)�T�#f��w E;~������;��K���{�����>�:���H��B0�Jg�8��v�6=�i�)��2�b;���n�~i�����/������g�s������w��f)~�(�n��>���3�x�������v�s0Dw:�O��Y�S?;���c�<`���f%��a��U*�H���
rw���{{w�>~:}k�>��������;mc�����R;e7�S(<Ai�-���L���R������=��������H�������?�7���������/���f��/�|�v�V('@(�nix��
�B�$���
�q���bv��,A��$S[�����`���&?�h_�wy������{�r����;�'w~����w����i�LA|jF�`"�xcs�(����R2H�P������XS^��������A���q���������7�}:��������^\|�008s��������b��O&�u���Z35XR��IC��2��Yn�����F��rr\����|S�n,���^@>V�E��d��������r�����W��|�����������>�|/"�F�)E�r�4�?��ke��x`�#�����h�8ca.d�:��))��
T��\�OM���'�_�8:x��1�/����?�����_�o_���bv��+%x�H��t�r�����ip �U�C�0Zg�,��Lw�����������������n�a����X)�\)��������iG���t\�^@%�d�<%��8*&����S���������[w^��������������S!�T�#��3IQ�F��������;��`�m�F�����$�w?�bZ�vT����	�8�cP�������G���������B�#nQKan��"Zs&��:��@�*8���c��Yt����:������#������'��������H@g
f�Vl���_�_�ZE1���@����s��(��8�^����Y(����5�%^�����{Gw����/|��z�=�~cO?�3#w�)��w�G�8�S�P"y��\����OM����#�XU�N�z������w�sZd��w[�����O[G��������V1U��4��i�,'����''�����<�.���\�\�.� ����x���t����&���{?����n���W��|��j|��>X����-�\D��0?PoZ��Ny��3�����u�[1��<-�~}�������]v�������O�f;�{���G�/�}�<{����/�\S�X�d�4U;4"S���OF�x'�J�*5>�sA4�1��E������-��~�9���|�O����/9������=��?
��
�J�U�5��v�o� �^�%/K��7cz<�FW�2B�����"@'����������^9�3~IEk�IP��n=���5��!�2h8�v,�$;���n| L��tg"�6�<�&�sP��/�w��O����~=:�w����}�������l�d:��Ff
&��r7�U�-�6`U��./.�)(���Hin�_=x�
Fw���:|�S�Y���w�{p��y��GD<������{�N_���<��b�����d�=��<�u���T����`u�6�hY��������9�����wE���Y�0J:'i-]��h�p������������w_��{����{��9�s��qZ�k=� �-3>�-�E����5�a��F�aL~=���z��E{�����������o<9���?�=��*=�C�1_���gU�����#���r�s0�&�~�Q2L
3\����z�������/�������{��F�,���^i���5�7�'�J�&g���<0`����{���\e�f���J3]���u�=���Yy�?�^����[�[�K:X�����w+K����Q�������G��}|$}�t\$&r�������~��1���B&��}�xx:<��<�yq�v����~�}C�H�{$p�� �@�	�\�4F'S��1h�t��E��GrB��L�� �����6�(�������u�����,�Ux+�0�d_b�E[R�Tm$�V�`{w�1�s
�&�{���@�&��a��Wq��������l�;�o���k��8��3Y�Z�%�t���;��F�h��w��mr+�S~>��z<PT��G�o�Y�@�)�������6�?������sq�+[b�,I��������:��b�S�w��T�s�Im�K)�^uZ�����\�=�8['/�v�:��b?��s~�����E���U���6J!}����Q1[/�&�U���QpF@����	�����T������o{�����r;�(�1�����������h�������Z���R�K��H���m�j�^�w�JO@(�������c��^@� w6L��!B/=4�_Z?G�3���9u�3����0bx{UU�
���}]V����������-s��������ksr���$�8LK���'���D�&�A��%�]B��\]P�:���~TP�yUR��s�����:�/�[����E�b�����Qfx�l�z��o��{]`�t���h&�����_s4SxBU�f�H��"b��g�o?w�����X8���v	������83 ��zYY��1��O�:�@K
��DKFB�&���2p��J�*��&���a������xV���@mn������!g~|�6�\*',z���(+W���4���Ap^epF���������T�1�o�r��d�yy��~�x�o��
�7z�<3�q�Z >�Q�>�C"�}Yu��8�#g���������������=�+����E�kl1X��%�+�3��@����������8���}>�n`g�W��T�U��m���[9������|�8n����~h~k��?X;��v���n��������Q&L�_#��~�3r�`�`�2�Um7�D����X_�<jv��������n���u9�0_���M5���&��6m����e3JG����� ��Q���3$�-!������W�*#2���p��������������>����������^�
Dc0�iIH����������s��0�xb��)�G(�N�Bb7
r���c�L��CZs�\������Kkx����a��P��4Ar�@��p�YUB�7�Zy&gs=SC�x���p�yo�`�`u�X��z��x���;z����YN���tB���x���Kg��;��z;p���#HaM�
��k+��Y��?n?w�N���J��_���ONtwmU<o������V��%%,v�p��t$��� �������#�((2������<�?��MT��f��*�����v��O��������������=�<���; -Q��R�y���W�����>h���;'������2����*�|�������G�'b����~[~���`o�[���[���JVw=��&��xg�Wa��Taj�.(���������X�d�����vp���y�V���xO��H����m��H}���<�*�������s�n�������������8X��%O�.���e�3H���S�N]2L�NI���`�\���K0>��):�$z��*Q��v��:W�V��te���m�_�����X�?(�d]C��s�u��-Y�����p>t��k�����Kq9Cp�#YU�����pu@^ZW���_m����8���V�QK��(��8X�%S�5���;�&L�*}����`��f���}�������o��������e[�����{������%&�p��GN
����q6����^��X�3�e��n���K�i�(�����7����u���M�=�l��_W&�����[���iEpw`��~Nx���<��f����PZP$�z��O����T�����[�<���/���!=y8���$���;�16��K� :G9�b+�Q���K���!k1�������
��U:����?����4��R��[�C�
x�[���Cj��tb��7g&u��pO����l�x����
�dou����m^�<;<����t��GY��`<+�i�Kr-c��7H�N	���������H�,�]��\����{Pc��U�����sw���'��\]9��O�z�y��n^;O�[!:(6tZbL�5�R��=q�h��jj&��?�� ���5���	-��_�Bfs�(��"�t�7ln�6nJ[�E�b���k�s6,V=����I&�x���h���*(���
��*+�i����|�qs�?���������|q�)�&�X�c���G�z�$i'��|���(C�E�G�^����O��#�~���R���3�i��l{����K��{���N���c2�XU�'�������Fh�1?��*�C�"%��"�71�U+�&���_eU���.�s����������Z����q��0MD��KK�U��O��&)&sY�5��VV�&�BQ)�&��/2nX�iU"��������v
��i-d����9w��kV�Y�9��W7�O�i0%���O�n����������������ks����K���$E���!�D�?�L�?7�B��dMFc@f���dt���+�����U���\Gg�������s���O�����Sc��E�L�b�R4�=/����K$3Cr�<����W���m�z�<�t?�f,�cM�LO_�|������v��|q�P8T�I%���c�����3�{�C��q�;�P���P��V�3��-�'� ��3���SG6����G�����=#</�R�)W�����+;m�U��4��a��K�I���YL�GLR�S8J����n@/���$��X���8LcB��&��&��H�f,���p��tky�����y��6�*����hgx�nk���+�if���=�G+#�r$B�U�h���_e��TN�\ ��~�S�G�)������A�ld�V-qp���V+DS�6��KR��d��K����eM������4+p	��=� �q�T�H%���;J*������V�(�mQ�����;��f��V�r�*�l��6NAf��K^:��Q��@!�����zT���iQu��pZ<2j����xf&`N*�����	�&��:�'5`����`�^/������c�0���#�p�%��<�g	[]L�4���p+`{�Jz&�������B#/`/���*c27I�����qY���LV���+?���{����������������C�C|��
��\�t���G.j[�����q+�;�`	�R�,�a	-�1�M�������A�kgk����������L�����U�|�p���|��Yc���1����e�*�q�>4&�dMk�y�"t>L�����C����������/R����g��JQiS��Ky3���
��5���������`E����"�VJ�2�p�C�*R:
H�����~}X�_����l�f��6��OV3O�k����_��s�0��Xu2�]$������A.,a(�iF+���$���uH(THB��IX���Dy���>5~.{w����������*�p�R�����g�"�����0�W���9��8xG-��Q���������O>=���tO|%���W��KJioa��uF`�r��,�2����b��|���V��������L�P���:�`[�Kg|�}��cTct���Q���j,�z���9�
7	/_����Z(K#.�"f��#�����X��r�eg���e��m�������L~�A��?Ai�n�18��g�������1E��t��������+�te_X���f��s�-�|e�N�t�����U����g��VQX�8��i�fg����W�hla,v�H,.G���>���|�����YD��\�p3�=�cF�O@�S2C/<M�R�`DK�����J�J�l��K9zy�U����J/����b��e}��^�L�=�a�;�f����~y�W>�O^f`��GN��;NvE<��?/���>5�_�W�E���/����?[�T5blbUF�q�N���W���9N���� �K����bcz�6�~��"�$��RJ���G����wY�j�q��%���re2���������w%�.:��0�p����"7���B��a=vz�������������s�� �?�?��Uc�����3rUW_����y���R)�
A-����TkU4.DL`A2Y�p��������+7~���������~����c��P�_�L�� 2��
pLLREj�g�b�'����0r������8r��jEm�r.��hR5��hd���bJ�/@���U��}��l�{)�]��=�\�����J>����_B��ba�g�>1�v����� u�>QH7�	\"@���>����.�~�^��>������(��iKT�P�*�xD)F�6S���#U�1��@u�n�*2�����Y�Peu�P|�b1��m"����xo�yoev��ce�M�������4<{bB��k3?�k3A�����f9t[�I"�d��S�y�;^�2e��V�Q$����&~mM�`�g�m5�������\b�{s��H	����{s5=���J	j%Na�M����ik�c,E=�C�KH^O��D�	�f������&�-m_�rT��:�p!��U*�@
�V��l��HIgJ^���,G^�I�q�p�p������z/��m)0����;���O��~/��u)0��>3��3��b����)��UX ��R{T9)��3QT�1M
2�,�����#�y���>F�+YS��?��^���&��G��>�w�U������Z��J���p�6�e�Q�E�GI��9���,M�9�d9XNT����L�������G�p�2K���0a��X��L����`gWZd���=���� �e�)�x$���"�I�<B����$'N�B�����*�l�������3a}��Ol�o{Mr�i���a=K�]���B�,V�
����Q��%�e���k����p/�����eK
+�h�������� �sP�;cB}I}'�Q�CQ\[��(�6������_�(@^��V� c��>�4 ����5dJ2���1)��c���M����/��$�1|L���8_V$���fc�gW���	 G��E�g�e����l��f�i2�f��8�L:e�x2J~����2��R��G�8M�9����\��
�W���Lz�K������?i���4���Fc�����������s�j�{F�\ -?Hf��Dr�3C%����0'�R3=���W����������A��`��_�TK�������%����E��h��PN1.��_<�GiD ���3����y8h
Ib8��2E	>��8(��7�@��W��r�TF�(GZ�e�\�1������p�g�Q��(ne��f�A*_A���"zP���_��j!f>��1����Q�_^��>����{��>~��z��xvs;������o����c�6��qk<�_��3�n�������q��a����?�g�}�r������T>������V���=�xP�YWT�E���|n+�&H|s	�D4�� `�����9�����w��,,_v��`���h+Q��:X��)Xx)<�$��c�	in��wI�d{n�o2��"�����Ma�,w}E�
����_k��+j$.�-�$���Y�I������Em���[vy2.\B.��X2�q�_|4h|-Z4�1��k9�.�.`�Jc!����0��f���1G4��f�b�_��p�WQW�E�+�����*3,)�'Z�j�����-��u�P�1�K?����@9�V��;���'ofZ�aC���))��V��Q����sI�?����lm��])7�c���	 @4�����(�]��������Gd���Y�%CLJ��2��#<�=�?�����B:Rx�|mj��/�%l}�N3|�[T|R;���-
;��"�v���0�}���%����3�����\;���F%�m�S�j�@"�=qmQ�H�X\�^�������2����!�7k��#��C����Dh��Z�o����~c��~��T{�M=,L��0���N�]v?�[��2S9���,��m�4�E�wLSV�����������o�,2}��m�"0��uv^~8*����%���;�Z�0�m�����Vj"/��2�����/r�d<���!�k\n=}���-��A�m,8�e*�!�H��*�]�8����o�P��_�����!�y��� �y����/IN<b���x�L��������m�v Xm-#�a8������msb����[!���[�v��2Z�
I�/�5��":��h����Ok[B3�W��5�{1�T���n��H,_v�v����xe@@���{���d}�������m)��"(��5� ����|
�kX��]�J�-����"��k�����/��Kl�#���5`�'�[��j�*���_	7�\�0�����+�.�/2,�a�W�y���������)����M]�I5g(oT+#t��jW:$��9�z��fX�4�d�`R���i�,$�E&���!���F���F�%,c.J6\.�B���	d8�#�JC�Tf�I��K&�H'����|\[y� z0�9�_.{��d�c-�v$g���+�^I�Q:��^�J�����Td"��C���d��[������1Mm�?��Sk�*�����i������n�����|
���r9|x������������?�O�������s���$��l�|�H�!�E]���L��
G�j�T������j�=��NS����N;M�gb�L�L�1Y����4z"�d�`�&W���ZkmD���H�����p%D��$�%����X�J����Ys��aA���m�U�F���]'�{���V�;�|��RB(	5��Y�
���t`w��_�>��$�I�����VS��S�;��k���c�-�;\#�%i���Bon~u�"RwN�"jE�����H���_]+}A����cM���%[;p(\��V&3T�63h������H�JP����+Q��aOO�I@�7�$�Q1�O��pJY)��9�W��L��n�*��q��BE
M�n�����'@����h
�d~�4r8)*lh9=M4a	Z�s\����#$���`c��|���I7�����r�(�t1'(qL]�$�`�p���"}[�`�������B�0�,�P�~�����&������L���-s��FD]P�������������Q�O�9^1J���4��LFC(��M2
=����X����$��$�!���d��*�e��9�~��\������W���M6������k"�V&��W����[�����6���?���e�����'A��P����j�����6�\���r����5�+)��<�o~]D.��%-S����I��eW��[�nQ����?]�;k��'��o�_�2;���������r��r���;������'�������y��e�����_H��������U�.YQ3��h��B
T
SA_�\�4[e�b/��c���d��	�J��`�o�TZj�RF�Q���&9�4���s��e�-[������������Y%4+�Z��L;�<�WxU�A�G��p9��M�^jh;�a�1�����0�5O~���6x����U<f������|3� �[���Q������'F�"4X\&Kb�������+%E���	���H�)b�r��q�Fp�'�L��\R$3�`~1`&�V?�i��|f�����E�!n{W�Ne���Dc���)0������
<ai��X�59Y�d�9��`��D��It�2	b[�Y �lb�'��HD}������������o|���L��x�1��(��F9;?=>���0�[	�P�f�"�2Km������e��3��9���ZKA��O�O��:Z���.�
�#�����7�5�#BW��d��4�m����a:)(v6�p����k�Apz��i�rf���L;�Pq�0mj'�����"x���~�&��7�5~�q�R��������������/u[����7��;�������t/����RXV	�Z�����8�jO���k�/6�Oj3���Y�������|�p�����Xc��8���>e��&5��^���A��B5����@*��W,�<����I�����2`v�ZF�&D��4#����R��%�r����Y�I<uV�
���������GS�c��a�"���J�zR��M���z��*�\N�*C@B�����J(��50���bFe���E�������-���n��R�R&����
�
;{I`5�Ezi��-��ZIj^:�E��Ti:g���_&V��Y���e!TF�;��W�
��C5o������t/��o��^���nN/�����rz�����,w/��]������	
�R��T�������'�q�+BbIB����(Xm'�d�,b��[unIB����h����+�E�B~�+��?��K�Z�[�V��R�@�C.���-��$��
����%<�1tf�|z|X;�!�: ���,T��D��q�oQ
l"T%�����/����$�"U�0�A���z)5�x������t��VW�>���0�Tb��
�2Vy��n���Z���v�
���0Q�_��+�UWv�N��4��+,�T!��;:	�X��:���Y58�i�y����s��-B'zS���A�`%F/D_��\�D�p�gY|zyz�s��S�����J��#xu�w<���6h���HG���OG���2��R�C`�Z*zg�FE��.�4�(�31�E?�y�*h��0i�����y�e4�w�{�zlx@��L��Uz��Y����i���d�����`!�9�OF�A�\%H���J#�a��%M�ic�H���<rZ��������r�}u��j�� �:K"�B:,	��"��b�Q�X#�LJ�t��b���w|S���"�����S�fs�E���%N���G�U�P�A����I�;�!�1�1�Xs�m ��(�������="^�����dAa,�lF����!����y��v�GVij2"������=��nY��`I��[����;�B)I��1.�"�7�8���:���?�����W�t��������"t�/��=�{���v-���G}�����r�J������[8��n@���s��8�F�a�+z�/�`�?�?nN����+z.�|_��f����23���3����?T��v��Q�x�3��g������?�����Q�?�������_���7���������D���D�*@������g�f��Q��m�s�"�C��~����:V��������o��������Fa#I���J#6�t0�����Y�x�ut8=�.����v����)kV�v����!��2�8+#��7U�T�@�R��J�,�1g{��[���Z���yyz��5y�����o�X9����5����j���c8����3��8[��0�h&BUj8�ijf�\��s�����,X����V�fF��T��o�n�l�X"�bI�1u�i�W�PU���3"�sa}�3b���`�`6�>�G5��$�
�@`Q��oO{��t��Jv�-3�e�������H�8_����I3��#\�=��=|�}wm�_��dUgd���L	�������<,otx���9��0�U�C��^+�T���O����\�������Q��|����e�#�	6���O���
ZQa�g}����yfv����x5!QU(�z�Hhvv
��C)Y-B�������Ar�����Q�-��(�9�j��PZ�r[�s��o���+�>�t�����(���^�IM}��=t�������/��^���[H&�C�H�U�c�n�z,���a�5`F���	�T�TZZ8W���GcHf�!c��l8�1����-6��������n���R������l%Xy]<��2V��f�1������U%{F�>Jh��Mex��
�izL���'�C6��:
|�V�����}V%u�:���0IAyk�ld`	����p;�`���Gw���6;����;2��[<�Mn���\f�x�}�B3N
�I
L�6�(2�e��u!0�m8�E��[}3�g�FY=H�;�p��O���4}����xn���r�n6�������L>C��KK0$/������~'�<n���hW7�8�B����s3'S��7 �xF���tCn�|��RQ����a��M������gi��%S)��Y(�u��*%�D8#��tF[�;��n�Z`�ZT!��z���>0����)x�������:	�!��v
�����VV`n�a4X����0�^lj�T,}K3n0t�#�
A0
P�h'�������F�p���,8���C�4����=��j#O���*+�bz�<F%h�y�"���#?���������{��*����-��T�+������lfR�L9[����1�t,0L���~o��������3����F����������2+�Z�q����fq�3�$��)� =���92k�����	�j�����V�J0X�E�.!������*8���0)u�m�:U�6G6�,ZE~�������B��������6��
��`�7��lQ=D�0T����AE�MZq��}  ��6��
�j\t4��f�����-��PiZq�l�R�X�����P�*��<��,�\9� ������@�d���IVx
�<�m��k[�HE��X����@�3��������&��
�Q=��K����u�M��e��G=�����[�����A����K���.���7��������ZK�yBx(����S��
�Fu|�pq��`�[b���|��T/{��Q�B��NF
�IAF��#t�3|I�o��@%.�Gpk����0c���,\,DbR�+�L���&��7�j|+*:�L�*�eb����>�Z���t�C�'�-�&��k�y�[�����2���;Ta~���6L&�P"��DQ=y�=��.��DW��P�/i+����2]t�&��O����A�	eF�G��r�H+�G�~��y?��D��/�Ki�HR��h����H������&��������E�����mI�<=z3�r4�w�d��������_������[����e��?v��?3������J��C��\K[xi~�v�D�k?F*���H)RnX�*��k��F��j�,�`�����+'��pUg������t�<�Z�!��:C�^����[�cf����`�O������Y�E�l51���>L��$�=�;��V������B&��
��b)Fb��Z����V�Y9+Z"���rsM�����Y��YQ;RERg�05`Bb�,)
���cY���~��^mK+G���(���d	��	� �T�UK�yD�����t����8�*f'Z�D���$N�u�Ys�t���Ht�**����i�/�^i�J��s�%�S(�����3_�#E)3!��[KXP�����`Jwj8iH3�EJP����oz���oE��<�Uf�kX
Q�m2D$�z��d7X~AHciYx�����PoG(����RpG	jGTq��P���=Eb�p����u_�&�^eG�z�a�;��jg�5�r��"�JiY�]��B����	<���1�����_���H��`]�0��N���w��M��?�3����B+����Q����Ze���� 3i�D.P�j$�Z�|[�*�q��! a�����u�F����.B27�0:e���x��@��������/������c�s��d*�;6l����(�$���A������Y�i�G$3b9��i�[d*R�4��g��]���o%�}K`_.X� %&�2I1�@�2�,6g�b�+?5�%��`-E�Y��Z<�,,�����"%2��\�R[��^m"���E�
�*�"�J%`�/^������J*�����&�5	����[�����!�A���	�E����
!��]T�Z�E&%
T���� w�X0�����P�2��� ��U�(lHs�gK��+j�A&��jX�
VF��/���P{�h�mq$�h���NNJ^�$z������jq�)�!,�� �j��J�j��z2$����|w������4I\*NdPr�2��**E0Ew>Z��f������j,�8��� ����8:;x
NCT��D�0����jP�`y��eh90�h��Z\}���m����,��\d*=������4M������h���R��� @���
��d�D��2Y�	3�~p�la���T,,�1�[��������'��K[�f�O��H�%F}�*��"��Zt�F����iAG���������KA5s�KEk���k]���S�����W�]�1jw���z��x)E��c2�����]�k��P�WS:p[T6)�y1��A�����[����pL�.�HR������:Th���$�o)�����b�����LH���#�����4�����z�oLl����(�C���>����1p�,I������/)X���BR9M&�f��!�k$F$� IT�ggG��$Q$E�*�k$�������s\�B�I�4�X�Q.Cf�H�Hn�TK�)����ia������$���DV�XT�8k*�9,��R���-K���]�9��;@�c."���UX��R,��v#��4M��`�:A9vr �p	��/�h1M:RuE.���G@�s�3�#�:F�	��.���I}X�\:����|���4I�#���z��/��n��v�8�D=Dy�TG%�y�u	��������y������~�[� �������������i���)|p��{���@�j�����8>}?]������p�p6�2�
�-����`/���=K���R������3�c�('��#�������e�!E��2\����!aR��Z@��(joX�UQjo��s����63)�T���8�*��b*p�j�Z��#$"�g��74���
��nI��8V+���oI����C7f���ISE����I�@�i���z9�Utux�YX���u�4�K"�T���`ir�#��Nz/���������f���HKW����T������E�`�N�e��}+��RN1�����e@9�rl�$Km0/Y]U��m8���b�d���/b�v��8��M�4F7�[.`�	�/�c�Z�_�Y�������)k9|� ���_\eXL��
���I�H��Z�7,''3Z�9K��v}�����pi:$��5LD���r6�t����5GG���6�%Am}�J�f���IQ[M�h`oy�s�&���>9c>[,c�
+C�A����Vz��D�L�%����;��a������G�|j_�6�X�Y�������t�G%��
�p[�.��7��ma��M����*��K�PVs,�
����hR��NM'g"�;��������w>�� "�w���O+/Dts:�<i)(���,�N�XLR�h�>�D,�24�/���J��E���9b����x��ge
�0���\��|zy���vZ=�������w6��}�G������{�����������������O��_����6�S��YJ���
����i����=��
E�Nn
��q�a~�h�1���i��!vh�Lr[���p���cP�(��H���Z2J�}�������4J�s�������5������i�B�4F���F�	�VX�{<�i�
 ���e���P1QKI���xXRBfV����e�������og������>���B��9�
�(O��5�~ e�O*����J^'\b>�}������q#k�(K*h�Il���T�:��k+���r�PnB�^��\�Hi��K\�1%�\)��
��Q���`���1��.�&%�V r�"$lc/.������������e�oK�p������ �
�B��EgFG��g$U���i����}K����"$�l����7��HjU�RK2��4u*����O5��J3��z���Y��Y��.���59��}h������o;�>~(�o������7�"d�+��\0���$W�RV������r�S�����c�?ex������'�'��u���������g����g�P�\*��.��n�&��F��.�U`ddp��g������������MM�5�e���k��7`5�[v��@��u����06��cx�/�r����_'����}|Gq�Y��������9�@�1����vX�q0��%���-~�e�w��oK�����r�*���W�`�)�[W�)S8�X��<���.�]�"���d�I����[�D+&��S�f�J,�e����?\���p����}at�:�D(�Cyqo�Gv�������&���1�:��&�d�9Q�cB��5/|K���X4�fj���M$���,0	������Nf���Ce"�f��Sh���A}6<�1�����������P;)��0��8�2p�[*
]��s��P����f�h������!��N.7�fd���R?�1����R7���	����Z��
������Y���#�v�%+��!�u��G�F�f���K�j���fG��TK��|�
��t^E��y_i�a�� �A�����(�WX�m<���q��z��-u���;J�6��h?���t��"@�4ca�����Q���l�"�+�h���uB�if���Oc�x�bS��[!��{���p�����	��O�T��1�|���:*:#F��]'C����'�ma6��0��B���?�M�� l�5������'��/��6�?X�;����h��"�6I��P��2�k�xe��e�5e:�rXR����aYu�%s�}�Y�|CMN��h�q.���L�Z#6!ll��F\��=��jh�g��-y���������l)�C1	�������E�EHi�)���X����*������a�7#���Z�w2�{A�����k��RnY����|����E�?b�W�q[�����9	!;�$v�1I(4o�5�F�x��E���Ph�VS�	����D�o�fj�V�>���o��7�a5��2��4����2�fR�
��vn��5�v�vd7v��nX�:_�\c�����jD(�lD�A�B��6����8����@g�ZZ���1���:�/�����6�L]����PT�x
C�-���eE�#
t��6��BJ�E�L��3�Up�����%����k�x���E�W�����<���[HN�^{}K�l��I��D$z�4Cp�%����IM�kY;���f���`��S 0�HL�(��UB��3�t��hS���UJ&�HY�f�����)AsF��[�v��rJ
�jdRU�)@�+��,�],��s,����E%t2�������!�
�?��av����o��V��\�������~���5��Z+W	��~��?\��O����O�Z�l��/0*)/S��_��3�~��N����LW���<	�7�Y��8��,\����Pd�b�y*���D�9���(���2\������;ky<�7w5��I�����U.�-�f'N@�b��zP����(�D�!����"�4e!���z���t� ��P[9,�g�+"g�*���9�E���A������h��(��tW�B�T����2&�mM�C;'\]��8F@A���6>$���G�G�E�-C"�V�+O�E1�jz�[���;���\].,e�G��i��Bi$����(F��}���H�:�k������\b
H+<�H^C-��D�|X}u�N2,!TU �d'��F��x|%�w4Q�[Z �Ljb6*����O/.7���O����4���w��sT�������O�j�T�����*
U�!�|K������Zr�<�TK	/1���-}��+�3����
���(�|KC�)����&�O�.j0�&�����d!Q�K����rI�����%z��8����6/�-}p�u?�Z��w�E*�y�U��[���Dy�KE�?����H_"rB�#C|[�$��j����O
;�_g����R�D��[\D7����oK������������/���]}��u�{�����1{�x^���W2��O+�w�^*���ZK�
*�5i��\�Y�a���������F����0�����9�����_�9��b%���6]"A��VP�^�xi�!���mX���m���������[��
`�@�F��i�,��@kBi����1��g����g�]�T���S*^`���mo�Y+!����T0�d�o�$P}�UL�^������o�����tG�����'7��q����2��"V�oJ�x�3�P�x�*��,uJ���d�o�*i:�x$���l�;G�_��M���$V����H�%��P/��
|WV���s��������!r�U9��������`6
��P"����o����+�����_��^^]`����KXS�-�.�����9W�����a[���k��l�wt�#�w��-54����O��ki3%�
o?��O���8�d���gYqm�n��@[�e��r�/�x����r��i���FO�������4�3��bR+[K)��\�2�~�.�%���z�e��3k�y9�M4�}-L�e�+�������#N��X��u�13������m�+)2�����$3�(�M�o0
F��V�W^�����'0�0]`��=�x��������^h���&$��e&���Xb��s�
AmF=�"#r��� (e?��8WFd�(�������o�*�G��������7��.�wf��_'O�'�gE���ze*ox�w�'5�i�T�S�b*�S��ma@�	tw(A����t>�e������D��r�
�*$W��$�H���V�����0������{t�9�1������� ^pM(���d�5�gI�-��Pf�"[d3]������ml�4������\��*

������j������N2������$L�����%�|d��I�@`�K�J�aDW��!�Dh�R-~D��%\/A)W*��R"� ��X��ii�z��R��d�@r������19.Q'�w<�D�F�t.q
d:��hQ���#g%s)�8�RD	E
���e��W~k��R����%v��h,�p�%ed�S�B��{G���P���o��s�|��V�������Q \?����T�k"%�?+�O���~F
�*�O��y?��=z�~VP��
�Zo�@��}�@}�a\�Q� >�D-*A��d~Q�� �SOT#��d{���m�3\�����aX��C-��+�"Rv+���&Hm�	2f{����A�p4�Ek4����e"��/��!U�<����
�L9�q3�	�R��v������A�s��sd`, 
|U �V5S��AAO@��T \�/��"�&��!s���H
�+Gf3������F��	
��YhY�;=���	����1��1����;!3mc���������I�a�������rR�l�����J�:3��m��Q����V���x����2l��D��I5
"�� Na[����}[L��k�::7����<,�W	y��x���lA���]�`�9vQ��Q�������2��G27��m�������$�����w(�@��rR������Jw���y<t�y������}M�����Q����p���n�G�#Y�<�����Gt+�]��Q�[x�L
�3t�|�>��'�T2���e������&��!("x�C5r�II�UEe���6,���6���y��i�� c��E�U�[�m�h����>��d���p�yF>��p��y
�U�hA+�&"�|��fI"gE�7G^%U������&-c��a=G9�q�[�i6��5�'.G��o	�{*������X�/�XXe���1��)�AK�T�"���S$��k{�c%p�9CP�,L��A���
U
��V4�1M+��%�y�c
V�u�Jnb�]�A��9���{�`��M�QRdMo51)"t�9Fs2�8�� "��dNF�#�B������`Ew���j���f����H�	�[R��L�x����na=�Aq`�����%��ym��2�"	���_�a.+>w��l�V�/[j��$h����+��`l�`��9v�4u�p�A��\E��d����2(wa��L)5����C
�Qj,���-�;05_e��2��(�Q$A��8����T\�WJf�Odz�����Y�,��Q���`���	��.�*B��&����-B
���$�R����fU��QQ�w��Q�7��3ms�3�&���lf�Fs���Q-x)��R���;\�XN z�fyns�A�&WU�qm����p�tcW��JA,���9pN�K\Lap�w�j�����2�&#���z6�K���^��:��o�������I.��l���&�!�p��"7Au���@Ly��$�r�Qv=L�	���yb!����:������F��L�<��J��)G�����1}����qN$G��P��^���+]l��6����0+��t�����q�F�M��u%����������������2���]$`��(�"����=�b����y��|���S`[;��5�O?NO�<Tc�W�����M��K420M�E��[�)9(���I	��O������-���Q���T��������xm.�z��O���m�����ce��K�����i���f��U`����p���px�w���O^=X�5�~�vv~xz~x�����������^44��������h[/�t���G&�[w��]���D���$:���8N�lN��9g���8e�_��!h��;<��q����!�!av�e��(��o�,�����%���
�ed���F��UB67=�az^��
a���Y+9��e���Bi �n��tm�����o=M~��e�j��� n�IUN��;���0=����$sn]�i@)��a[���9#�M�P,3���He�tN�2�w�ZS��R��%��rPp--��.*�*�#�����%�
��I/�`�I�������#���*;
���l��@?yf�;���g������I�-�����hI[�H��N�]�'��i`�-2��p�BB�B�8�t��)����e �v;*���'0�O���OL9��kM�!8@����Q�(�
�'��N�&���{���GS`���3d�o�0.-�W�2���/� �/�0yDe&�<�,���R��I���n���0Y��!rK�1�~�$
�>s����)5Q=��K�9�~�$qC]��e`r�p~�h�`�%��Z�[��j�Q��9#5�������K�I��d��^\�sCg�#?���6��,D�v�&L��j4g2���K��,��#��%5)�7��}D&1/H��f�(��&
Z�;�:���w���<�}W'	7���{w�u��:�@s�f�"Ow�zH��K�w'[��O�����{2`Ewt����u�/�o�\)�U�����mj�k� �Z	�Fa���}�	�{��r��`��)��_�������'����N���_���%a�&(�B���Q�������~��9��������~X$�Rq��z�%v@�z���FA�a�X��j1\u��l��d�
����,�V%��s���%m{��N�V]
\�����9w��^
J��(����hi@y�d�-��������6@������x��W!���*��o�B�*��h�iN|��fJWS7
�A������Q���o�2?��V_t�����3v��-�s���Z)�����G���L��1o����Lz�v�l�����-��<�#��&/i����}-�pn���G���������Z���}��Z_#�3:�W�p����#y�J�2�%�
�V!���t��(�U����-����SA1^��S��Z�D;..:[���
}��3[��r��f!n!�K���������rx6���$y����u-��8X�]s��I��I���S������B4s�Ab��#Y���hI�����!;�c�|�d%�v��lZ�rq�c�`�uw�����;/��9������!<�������M�����([=?��������B�g
-�������
�o��Q�/N��?����3���S�����������?f��V��������o�2��F*3L����~<k�f
�W��o���>
23��/7{����JT�P6u��2��|9���Su�����?���GG�r`�<��W$���G�;��r�7��U/o��`"u�`���q%�|HZ��+K:�q����DJ��@�r��ZgF
���9���6�����p0���?���P��
��2����@
����.��*9���/(��k��8��-tF����J���1���~~k�
�fg��TF�G�[<����vn~����D\|wdd@�����a*�A*pS&���m�|�ml��7{�������&�u���@��w�P��Cw��I0RI*Sz��r^�����o�!W�m.��6^�e���[������O������/��w7^nN�>y������������U�\�1�B��0+��p�B���+��a�����`�N��_���"�@<�����&�o�
� 4A�p��%����������@�Y1A�����Qs@uvww��[P�G���B�	-Pt��U&
����R�i�v�1�Vv>�<�zrr��z"���T��(Q	.�����
�3Z�e��j���~�T.Y"�Ii�����P�����V=}+����[�v.`K�<����qB����x�������'��9(J�/��N�":N�9Y)�3��U�RV{lk�
h+��e:a&�����?��3�����������hU�sz�<������u��+H,zx�
�����b6� ��(wl#+�A}��xE����I��U������8�?4����k+�O~���������-��L�6�<Z�w���%����gF�?����E8
�?B��$�y�s��DY�}�D�� �h�,u�)�,�����1�{q�PSs����v�
������������	�E�9���"1��?���HZ�$�
9��{���[���?�3����
���F�����4u��*,2�{��������&f�4w�s���������8��r��T$38����KVx�;�Syf\�&g�<��<��$
��s��zX�#�>X�w�?�����V����@V�ILL�riZ���4e��dQ1Irn�n�����p����D�i���!J��r���{�_�ro�����x�o^��M
�������C,�����F��C):;3��{��9p-���s��2���>��]�s�4��\-�?�8��������n�a�����>���������A�y2�A��9��s�\���0��m�.�w_��3�9-��"tY�xQ�����?�F���k������Y������d�������O��Z�L���vp�~l�@wk�����g�a�5��x�6�PXF��x����sp�u���la�DN�q�~��d�Iy��yn�rX���~����+P%�&���V����/�r�����������������p�A`�U���,b]���"�"%�H����f�X���h�HSx����Mak&
��M�Kp��1����qp����7��G�Jo>�Z������!*��3s�p9��/��t6���jh;�Uw�
�232��V*�!2����~S��������;~��`��\�����j�vf�W��������M����y�����h���vL�}t���#�'��jL�����/4�D"#�a������T��a2��F��:�����E&Ss.iC2e@�f�o�\�����gF��`�*����[��w*�#��#�ds�S��I�������=�����\{���J�
�sjE��CYIH�ML�DDF;�q��b���K�<Sv��.8���'�:a�'BP�c�zNX����������q+�o�`���UD3"����,����N�����?���������u�����?�H���q��(����W�Gs����?�������W0vF�:b������K���9LJf������_*���������������������%��|���Z���7Z��],&'��� ��������Tgv��h�g�M�3����<@0mA����+��)��*7����7��*�f1�XO�����1�+�U%�+(�F[�����L7���d\�pC�j`o��x&\�	�m�]R��)�B�dB��M��T��'�R�$�8�?�@�K0�(�#��3=#)~�2���dQ�e <H�T�;'X�~<��eL��-I�RF�Hf���H�kf��Y�}I2�����Q�Z�7,5�N��XE9����$T�wK)E3�*�	,�4oe)[�q������:?�o���%��S������1v��t7�@�U����m8e����,�o������O���w'�����$C�M^�]#��P������U��c!��XQ������$�k!��z����_1rM%�n.9Tm?��������l���7��
�4��W��z���V�#�l��8H�%���snX]�e������8MA�3b�Z:��\� B6tM�^bl#UsnY]����)����fd��C��7��Q�-R������D�����
X�����V�pn�r-QW���E����g�1������U7��nQ�R%���8n��OX3�l������7���_u�>�;����,��0�[��/��]G�-����e}��R��"N��"}q��"���� $�dx��_�VmF|Q���)}f�W��R�<\�P�WRn�:q���t�++����4��
i�p�A�6s<y1�v,��&��oh�������2�F����g��#�4��\
����,x�gMA$�89@k-&,P�-Cby�i2{��$��"����x+��m��X\V���K*2ek�6�\��cB�YD(��$W{����5��FMe�������������aR�s�<G�|X����,�Pi0��L�d������������������q�����-�Y�Z)
E1�����f%��<V���n����Cp��3k��L`v&��Yz4
?�	����VOf�(C4�1<�nHF��]�:�&�GI�A�Bw��s�T��J�J���
��"�����KIk����3J���`�+��CV�"JU���))R�z�GC-4�1��sA3�BP�Pl�Qg0�iuEt��LR]�"��|PI�����S�8,"�XB�a�.
�������"�}���[4�jd�VySwm ���z�����N�����	�"NE���5���52���dT�B��ud�H�1�h���4��	���J1+�,J��2�FF\���1V�d*�xH�����(H���3�QW�mFLI�����sK���$�o7T��V�i�N��t\���JAE�h����h$r��Pc�C��M	��4�
�4YBm�b4���6f�!m�$1�@)��	H5��L��)�:-2��U�O����vD�#�p�0���)8�$a|�L�O����#�S��T��H%*c��)D�$1#�M�R�H������lT�J��Lw���L�n��&�hxp�Z����F%�N3Y�����"��x8C�m�9���O���^�D(���N��D�����E^��\���+f���&�
,���r��P�Y���WO��7jA'&�
~+JF�oe*9�J5��������G��0���
"F-X���ZH]�i�:�{h���R�2��T�`tI&CQf�`J�!���Q+D��Eji
�ki1),
7}�2k`fb"���<���N��=�BB��� :����,a�|��1-H��\�����Cs���5�L��FI���W[���e�oE�p�+D��U�CD�<oI���<#�l�v�W��x�>e��-��9�`��xN��K��&��(�j�o��!H�J�!E�a����5v�_�����f�j2�
�[����{�\�1��@Na3a�e�v!CK�!��oM�:f�U��,K:ZeR2c�Q8:|�����\�"'Rx[��|�'��G�i��m���{KT�Uy~#��7 �	�v�������"�2����pPeC'L?��
�J�5��--��Q��m�\2�,�c�����L_[�e�v�p
��� ��2�#����tQ
��,����e���eSX*�La���(L _6s'P�&���U��Z��G��|r���#�}+��v�����Z�R����H5��u����~����e��D��K)MVK�/LU[vx�`c���"MUC�/LS$Y�:K�"�RT��SM��0Um����q�m�\]UM�>�Y�����fY�
NNm���d��|������z�LV�\�E�B����B�F ���n��A�k��^�HZ���%����������@�MzT
8~9MU��s��_-�0�s!{��&L��	+���F��2T+���DNSM�X~��X��e�C�	�+�2!�'���K0�Z:@��I��c	������m,��[�IVw���bz���x�����[�K��1}�tU=$��l�d���RL�����R���4
�:���Q��ZK��.HGu����
�@K��.?Gu�:*���Tf�T4�f+3�g@h43XH���g�J)���6&��e��TMR��jw%	w����bJf����s���d`�H����l��z*147��"�����!�Q~f��DhK�voDD	F����%�(��F(�c�W)3�$)�HW��QZ�w_d�p�#��F��r����(B�V]�Q���0nF�E�_����V�Jo��,���H�b�fVT���"���P��a�������4�E���
_���������<���������?�������X�K����5WQ�3��:�d��:t|�9����Ba0wCe�����V�����L��8K�d!7��x�8����~���������J��������d���$��^�������y��;�������gg�NO��O*�=	o����$`�E�X����\�JxwTvx���a8�>��o7rP}+�f���S�ju8U��W:�&����c�o�n�'\����LUy.g��Y�3��	�"���;j��`����'�'��1/�����.��<��~���_p��Q��_��10��4���^{$������(���&J9]ZHlHP��^E��������\��>FXw�1�&E�(�:9��S)1-js�����������h������/{�'���M�o�O���V����Vh�J�������Ei`MY�L.�-f�����R4�"}4Z�c���C�#Q��A�+M�������� M"��e�X�J`
���L���"
Q�{/2�XX�-����jnH���nA��|���`�
\�%���x�7J����x�<�����~���9�[�D�?���Y�������yR���;s�v���:I����0D,�kT��*��~�O}��'L7�����_�e�;���2iQOc�2��3��b?*i�7d����`N���{ |��_�#��p��8������\{��6Ut����8�����H��y��	�M��p2y�.c�fM{�W����)~M�[���S�mms}{����Gh)�j��YT5_(o�����
���T��������F 
U���v�)2��k1��O������7^��E�g��t��=����9�&#$6IZ(P�
��8����o��R�r�L)����M��d�o`�����E3�a���(����e�~������������2��0Q�qh}���:�M,2?�M���g�����kg�os�r��t��������p,��M�RQd����tPbt.�1�,�b-L��{Y�w>�}��04��/��
l^m����3ih0i��-m�pk�_�3�]�4W���z��L�����R��q�9O�4������?h��/G�AJQ�[�,9���\�8�������s����Cp�������Qa�3b�|�_?�vo���Y��%�j4|XVLuXH�����'�3������\��WU��3�.|<������0�}��w���DiD�c
=g�66��C�[V����i}�k��-�L�r]�D����G�������[��y6dz��v�:����O	��X�~�R��ce�����=q�2�e.�&��`�����P����"���h��t(�d�8�Y[pi��7�ye���(Y
;a���d��0T��P1_H5�jN����w����6jB�@��)���7&����S]��4��w�{��)���m���W���N�������0x@�"GI��`�������
�����3��Rh2�������#F��Z������Z~�|IV=y���/��v�7�y�	M���t����5��Z���&����&N�VF�[i���,�<����k���2B�~�9Q�L�o'���+�r���?#�%gn�����;v����
��Z-��]
m���}��[|y����Z��������>�3 ��Y�Z��S�:F�Q����sr���9<���{�ux�~x�������>P��l�kB���tU���^�_��X����{P�/��:���:?<��8�2�4�Z�;Yy����d����#��lz��+izv:��[��wF��\�F�l-8#}��n��I6��w����q����Own��S�����Z�3�j^�M������H�l�zt����b�)��dn�������Z<���������/�^�?oSA|A4����N���������~�	&�+���}�����������|~�>�g�V.�qj��ri3�E��W�D�WO���h:=���m��������%������0���b?��G��HcI�!�f�5�����=�<>{s%>>�z�����;��[�k��o�!`D�Z)Wbh�K�Nw��M.N���J�d�`k�#=0.9%:c�I	#���S�
�R����W�����������;Z���7��6z�R�0�+�t�A62
��6�����h6�#�4�D�)^�.I5>��(����lGUt��`�X#�����1����i8nGE���bW
����"�Wn�<���J6�PZ��p������a��8HD����0�Z�����C��2�\���zn[����8�������u{
I���2�Bp�E��m�?���Z��a�l��qU�rdSrM�5J��E���l�Bf�&��*p�l���#MC9���
0�;�s�&Q^DG6MS�A(�n��'����{�t�&\�:]PC��*N��5��Hq�)
�
6I���+�pqd�l0N�
6;�i>��B+#��L'1����e����?������-�j�nR��$"�t�F��\k>H�����/E6�\Zi	
�PhD�$Go�F�F8�Gw�33F+A�?HR��]����K��4i	���#,Q�T5��B���T��P�S(t�_%vP	}sA��"�G�K+�2�����)k�G�m���������?
&D��w�c?�
QKR=5
`2z�\�q� ����vkk�9��J��O�K���&\d9����q�	�����n�X���Q��7t�L�
�J������u����
D��9��*�RrN����������!,�����#WH����(����8�t��w�)J��a��������.G���Y���eSb�!���4O���pqG,�x��2��s����&�����}�.����P�.�2���X����u�R�&���K���rj��F�������xD�������5��S����#1��/%���1f�C+�<���"��d��2�R)������k7D�D��Y6� ��lI#�=��b����E)�r�����G�d��BH�*��F��v� ?G���P	j�d���c��\�=� >��;G�~�z��Rp���v��������p �tK�}EBE O������ti��P;���f1r����O���"�G�],9��]a��h����������Gr���������D��^\���c+�+���wJv;6#���o
������P��������L���XR�O�/����e�����=�sI�Z$��(-�A���=�����S6���K�L�'9
��:���b�� ��&rp6�[��-�xbg�i��j�m(�A7�mw<���������y����~5����wz�������a�(w�q(��oRP�c<�uFs(�����E�R\������U�o\������^�8~��������Y�VN��H�K��1��
�p�+�������B�X}�4�����Nxy���,��O�i*��Z�N��W�rK%;PO�P�U��9����M)T:�m�y�����7l5X�0F�<,C�2d�2\�wOzs�M��������F�	�����h<�I:��u(e����n���Z�e_
��#��k����M41>p�
�l&�M���rJ�L�3��]��9pJ��\;�|VuQ?,Ya���)�(�4�U�I�4��lq��)�����?c3q��56����[�A;�@_�6�C1���b���5u1���ko���nA��SJ�9�C)�s�`�|�l�Y�%�����{p4b���`k�508u�W`7w��pv�rM��[�I�7�,��g���������w������u��r�$�Y���t�;����P�s
�:E���y�����3sM���"���I�����FRg�lW7B3��.\P�>�����l��;a�+����*|�"�N$I@������
!��o@�DZA
�UE:��@�tt��/�e�����9>������5�����_�$���@l/���A��g�rS���&���R9"3��d)!Y����A]�Ld+x�=�B�<�@����5i0LW2����w/��p��
�gV�O1fuC{��p`
}�u�oRyC9C��1��w)mO/i��?�b{���(��{��|&�}a�c�?�����3��w��~{�[����_��g��X0���x����E�hIY��@��D�~���w8~o�����g�������>��P��P��M,k�W	� a��Y��tb��B"#��������M'��)�yc<B�TQJ�d����zr���(P���X�5������t�v�	�'��[���D�E`Q3u��i'Q�
��^3\��tB	6<�9!�l�Kc��U�p(��RW�Gk���U��\�PI��i��G�h_�D��}�C���}�����9h����������;'e�KA����W��6��4�Y=,]V`8Uov���.�*&��F�M���HWx$=��O�Q��r����G$���e
�V"�rQ?�K3]�	�����<@��g|"�n�rRG�T�q�06�H���L�Z�4)���:� \uF��KTN�4�B�;����i�H�GL��]���L�0���!A���4{�_Ky�����v����n������hT:�������z�����u�uFO��	j���m#���
���m����������{- ;rk������k�Q���Y���\JdQI����,*	��m�����,jWBp��
^���I������~��'�'o�����3e|��<�����|,Ii�?�H���}���'^*�?���Q���e������~>�}��2����zYOe���q]Lp��9�+��@c9.q��i`�H�s�K��?���}u���e�N�,�B�����'oQJ�^uBX���z�N��]�����3<+h������i����[+�V(%�����P�$qH���n�DZ�(�Y-Q�o4j�eY^q���)
�7���"����-/�m�^��	�5�
�:����d&��{���_�0b����Zh����o��W�4�p��_[�2�5�Yj.Fr2k3a�����s�s\���d�*����hK)W;�#���p���e�Y�ia�Hxl_}(Qc����"�5m�Y�������Za�
|��ix��}-M�&��WE5N���YN�Uf������:3h=��=��*J�Sm�	h���$�(��c�	0hH��1���8!�nZ�C8
s��E
w~7�����//������>�&�2�)�����hCh�\d� �^��r:������8|s�D�s��v@���j
�ZgQ�w:��"����x'a���	1���[��H]��jed���
A�W7�w��_c�.��y�b����=��
D���6S�������F����m8
��U��#����<���,�h4�y{���o��r�=:�<:���4��K�������a�\]%�`���IE�I������6��3��w_�������(�$����a����\Kz��������$�1�\n�9H
��/�p\EqQ�$y�n���M<o�����w�7}�\X�"��w���}��u��c����P��������lF��m�]
�L�|�`}�����+e9�6#�-k&��M�|�ZC����DZD#�mo^���1�F�ou�Q�����D���xT�>{��J+�wO�`��@q�YgN����5������_*w
���9�Ew�SI��mTR'\����`���tJ��Qj���_"G���m�n�-�f��D�N��I�$�t��T������(L���j7�d��	%o��9c�;X������b��W����&L���r:�gqm)�|��6�w:����,���~:9Vm��e�Z�)�LD����j?������(,���uP��rp@��bb�WN�$th����!����L�3�tq3_��c@�?z��������?��������E~p�����1m�W�Q���@�!�C3����J=�\�����?���������~<� _N�x���8�������
D���Y������9���|�����T�h59���}��I�%�����Z��C{�\��|�?,G7����(\�
5�bfl �
��P�/^���	�����<����b�#F���H�+�%���[�v�VI����?:pl��r�C��/��������7���y�r���7!(��Hg�;6�
u|���"<�+f�u��d�X��sVwZ`I�$7th�Wk����XJ"�?O\�E��V��O����~�jw����?���P�6��V8tKQ(mO0
C��+#]��r�'��G�Rq��j��2�s&���>�{r>��TS����(�!F�/�]h`)��eLc#}��t��nwy�c���|m6�|K����x8������-��w�'�Z��c-�d������|���z�-���������xR�C�,������b	���
T7���g_&�i����q����1K����/�Tc�O��mZ���p8Vw����e��p�	+����@L���q�������a�fr���������]�ax�Z8u�F���
(�M�`Q(�K�`-V	�����-��m9
���2J}��5���z�����c{�����u��^�������3�M����O���!y��\~8*�f]k�fz=��
��#C�P��1�GH�(F����|�'#���<.�`�KGU-��xfu2<��A��^��1a/����on
�����vu��;�;P4�w������G��O��_=e"9���g������
W�(�!a8��\�!zo��)�x�����D�������7(R���r���q�k���9`��~+��(_g�m�~�-�Yh�������9gal��X��;Sga�iELF
;#�9�]-e����+�D�d�.n��dw���tO�FC	��p�5��<S���+��������m�iZ9b�+��(�5�k�����=�^����-TQ���O��C���n*�S��n'�/����b7yU���~� ���h-8�GY+�d8pJ�bH$��7�H�����X��>�j!ck!�f9n&���Zoi��bm��+����he�~6��[P���6�i<��NPA	�'��g)��H&�q�Cb�������Y`rqW�
���r7����<��\��D�!�lDH�8�B��8WkPv����K�E2��_��n��d�yg�#rDNlt�j��e��y?�L�X�&uW��?�0������@>�tL50>�}`x���R(���9���vM�k�1��c��>~��g��>+�f��.	:�D�\���&
��G�)��'�}��a���b��,�}�6�����������3�Hx~��-�x�Ih���?^~�6�?��	vU����������w&����|���/��y*9{*���$�(�PI��K���|�14���_z�u�1�����a����R'���}b�y��d��I|G�Z7�GJ����R`�j�6�s.����
���d��,��|��q��`��R����8`��mRkv�
����Y� .hb�R�����]`���Qmj�����,����FLSn���bNnl�A�k�bl�FS�Zw�NY2����\�m���!
���s] N�C�%_���?i�'�!�o�e���W���"{����
�\I5�$������E�9�"�"
������hxzm��H��Ci�-&�>�f*
b�M��N��=��I���8`�U�3���dh��Pz�Y�g7��������z���hi����;TzG�#�*�����m?��FY��C��XJ�3D�����C��� k�iT���2DZHng��kM���k��F�J�!�_��J4����F�#�5��5�U�0&�����w���@���5eUn��;�KC���<�����]x�r�$�����u��y�WR���}w�%�61�����������5��+�`�}�C�Q�!�e���0i�^;9^f�3(�TrZS��S���~���?���y���F�i?�1�|�p��_��+�����_��W�uy�
�V-��/��|���6�S���s�X?����w4z���������������])P��'���&��	��(�A�xp� @��gT�o����������n��-������}�p����m7�/�g����LE���[�r;�R�@\_6���qt1��F�AY>�>g�u��`�FpsN_��t������n}�Y��/��It���~4�$c��$�E��8&�����sc�j0
C.�������^d�v�7,B�q���4Q�}d�A��������u��\��Jc@�H�Z��h��8����IV�q�2��8��x�&\l��q0
��������[�V�������>��9�$��� ���'�����/���A�F����=w��r�w�zo�2sz�E�o��T�[��Ipf�R�d�1���$�1�g+�Ey��u!�)��S��n�
�r�n9�~Z*�f�7��b}�}��+���%K%}n��ET�r�&�*L�-]��F#nv	���r��69��_-cf������&�]��U�}})�����mD�8u>�*����x��h 6�R���
�U��3q8��`�,�}	���F�F����@8H�W������[u��.�)q.`���i��
G�Q1A�rcw�gg�m��z��P��J����J��Tg/H�����A��i������zA|Z"�P��z�����yo���?�y���Oo���s�������w�����<���BR��77�A$����D�9,����.�ems������7�O��W_��W�v���_����n�5���t:�����_��A-�2��&�)%�C$��t�Z��|������3���P�r��/�r�N	�w��s�|#��*�&�Rf���1�WgT�&�#5����&W�F���$jnAot)���	�[�L �]Gc&�X���JK�MoP$��\y!�8���t���NzXJ��m/��2�/wu}��ny�%�r7��#h��u��K@��y�p;!�:~#R.��A
�L-�[�c������]�+S0!��x:��L�[[M��ovAwbC��Y�
���
4;TH�#n�*��n+.����4���]�k5�������J)��H���j����n����������o�ws����
?xv����@�����o�}C���L]G&~9K�B��!��|L�c����q�W����6�5�(m�1D:@����s�z,�S�-��[i���j�����S���&�*�����y+i���Ro#����K]q"���a0��������<����T�	�
�z���M�<~X���"�	�����	���h�J���)���j�,5�y��5bZ��{^/���iq���d��Q6�T��1�]�w�h����I���b|���Y<�/����T%������M���{�e;�1Q��J��������������iA��H��/Y@g���p�;/��V�l��Tn���(��\�DN�-fXpx*���JQ�Ms�a��T(y��x��hy�f���n�pI���m���&��%��M��.�kt]���@������%�
Ny�7\��b�9��Ko�%�������H3�8�MD��br�e��2D��4����t�"��������@����������r����p���[�2iAQ{��,�h���U���N�m��c�:����:u�������\�xu
3��1���I�	v��4hug����R;������% 	�lB#���={G'A:��G�(�3)���o���F�"�q}Y�NGR�_8�����H,Po����`R%���O��D
���_��uS(�d^(:OB����y:�q�.���7!�:��Tr\:RJ��Y�g��}Z5�3:�D������E(Zc�A�������-k����5m�����no��S���N��R�+�j�U(�U�IW+��%r�Z��E��+�����%�?{}K	�TE��l����a���"����6\�A#�������;1�p���:	��NL-d�K:���xMtN����v��e����U]3�a:sx��2�����)�^$��M���ZQ0���!���$��b�)on��Va��y�X���%�2�N���r�>nC9q��R�gn���R�����!���+,����/��L�������c>��q��{2�������i%6���UK���;�?��h*4~��Jv=q���uS���{�����Zc�{a�t����E�&D��G��SL�<����99���?���H�� ���0w(����<�������	�~�l-8M��
<�r4���g�\�B����GF��O���/~�I�}tV
J�H'`\(W�P��`�+���*r��0��g(Fn�{jw�I�{o����l����d��C'�(�st����eQi��W9�	��'a�O.�	�q�Il��?�,���-�*��F����}J��Y�����*��p�x�,&S���M��fa]��d��9�V�@
x���Zs_�g(/��3[B��%��^G���u�K����V��T�{#OTM$^���_�D*!b�~�Gbh�d�^~w�G�����S���E�� m�
�.-�����BKxh���e����8�����3���A(%#��&.p��2����e��gy�������L8��U��A�����E�'������bw=�j���KlD�����5s1�M0vDP���}��Cg��U�>%�����k���9��5&�_�dw(n?\
(��j��Bv�^�2����m,o@R�R����08.��M�s��IE.?��$O���}S~�%���!�%9q%Z������Z2n2NwCK*GL�\i-U���b�/��u$?|����	k������~�dtG������mcA��0|�%6)c��!n�������7\:�!�Q�����e1�K���"�5il���0�s����7~g���&�;� �3���W�����u������,��ll�P��u({x%v�������VeU�E���@�r����z@!�G���(qo7��z�D:y��.��@w��EO)�J���]H���wu��<�N�g>���Z���9`�]�����;���r����b���EUB�w@�gVs��Iv.�d��2����R�(��/��7�v�Pdb-���S���f��3Fa�d��hZv��tGQH�ZFG�%0��fR��Z��<�k�1��j���P[]�icv��y��q��o"s������,���64���]l��'dgr��1|����j������G+F���q�p�p��vO�w���+dz�7������	e���$��N
�^�����P�B���QRw�t�0�Q�b���o#C�Uk���S�6`:O���Cn��B����N����/��d' ���cS�1���N=	5��G�]&!������%��(G�4r7?o��w��*3lhg�8�;�0&�����w��T���`h�qa���}R�f��Os!��	\�
��v��w�K����L*|'����8�3�E�^�P3�?%,����>�&.�@�
z���N�]�b��'���o�X=��q����M�\�p�p���F�$��t`"����*A�s�#a'�+��
��2�Y(���~C�8�~1n��N��///�������>� 0k���e�-�/I!�����Pc�;��|u��q�h�k���5�u��'����,�3�{���<�f0����A���C���<���m�mh�9 ��6���V��P�v#;���&�H����x��)�B�i9������g���� �+�	��l�0���g7G4fj�h�$>���e�3�������`�(��A|����jb���%�z�;�N lM�����.w@[K��"W�&����\P^��a]����������6Y/�p��beZ���U;�+Ex;�����
tPQ�C�B:
����:+UK{'�"�>��=���Z_�\������[?�����'��r��f���3�?�?�a���O	��?������(���������{S&1� HIO5����u�$�$7��$-�h�]�E��B��'��I��4�B����[��d.��h�����[�r_���~u������bL&��O> �}��n���������:�d���-����8���~#�*\�U�ZM�^����s�]h~R�.�CV��	u�����\�
����9B�n�'����%�v�2i����Q� #�l��7J
������������U�|���mz�����GL?n2bRB:�I�cG�B�sc�h@M��w�����h�^������rp��2�����qm�(��G�?y��OA�����/?������a|`�o���/?���Ah��7__&���/�s�#��;G�(�e!��4�an(��������q��*�(�4�Pe�NEwH,)����YM#7�m�T�N��������_37�-���,��O��s����1T'/������
��ff!Ht�&��61�X�$$��g�T���1���de����G��R��vg_�4�~��Y�P����Vh������\Hk������_�^�M6u�z�f{���sh$��r�W�m'�a4��*�=�]-%��������L�DB�b-S���j�t�?(�r��O�:t�����V��$P�JjnES�>�<J�����"���uR���d��$�P��\l����q�������X���r|�����6�-.��9�{������O<K�7R�G|���
vP���>c����|�������B�	�+5���Q=�_aB=u�]Ti����z���G�Ls%�z�FE�����07_�
�?"�2��^�����(a�!�QP�w{�������y��h����e������|TJ�c�����U�RJK�	D_���q�my4eI��j�:F�����������M��5K�@���/�A
��i4�9��wY��&�& ��h����(JU,e�&�q�U���
��Z4�Q���}c�\�]01�s�AX� H���s*�e����zV��B\7���f�[	?��!|�Y�3I��qcl�d�X��"�"m�>���rH2���~7��
>; �P��wB)����8a2�P1��S^�>Y�!�E�[���m6����Y)�V�{]��|����oO����.�{�1T����!�����a�U�F� z��m\g�bK+���������i}3;'A`+�����n�Z/<?�k��T�P����x:d_<c?`c1��@�I5��,[`C��y�u?����C~����=�(��)"�v�wS���AI!C"��2�^��g���1�8����}^�������$�h5��zLZ)|
{�I$B`��N(��Qa��F���3H'��-�"�w���1������:���%iUk�����@(u�W@��>�N���������."tQ����J��FY��m�.c�s:���e��tX(�=��oo�^��y�<����=�l?i'�j?����A0��0�)�v�W��G+�00G,C�8�MD�`Gv��}�������vq2v0�du�u<3gF56�!��Y��k:���iCo�'�x��:Q��O
���9.�[$�Xn��XD�XK���I8'�Sm�!�j���N��h�f��u;/[��f�"���?��@?`c����
�3B���Q@B)��n��8^�Y��2J���bY4�T��h2�.�ES������y7�VFT�3��-#�R�1��mX��<�J���3�*9�����L(K�~���� �B)��/@P� �W�?����6jli���8����dB��#�Q:��|�g���^���}f�o(�T#]��{�����]
���>e��v������^[��:���ZQ�*|�UO��T������S���G����r
W ���`��DsO�����FC�S�A%�SgK�#P-�w{����[�����$I��:���z`��m��Q�^�n\��
���-u4.�	����m���8'�J�N/��<v6a�i�<D/J�@�\�BYg��q!J�]}�w�
'x��d�Ht%���p���Wj":~�kBw��)l�d���?)���p@�t�:gE8T2/���N�+�1�#*����c����.'�wY����k^�x,	�^�P*�x�N�������|���*�=�����R^�W=k>?�	�(7u/��Y��+D4M���?j�Q�4R� ��R�wT��������]����o�������As{�����=�����\�74�.?^�w<��N�[	�@iI'��on���X�cy,���<���X�cy,���($5��
#37Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Rushabh Lathia (#36)
2 attachment(s)
Re: Gather Merge

Here are the latest TPC-H runs on top of commit
93e6e40574bccf9c6f33c520a4189d3e98e2fd1f
(which includes the parallel index scan commit).

Settings:

work_mem = 64MB
max_parallel_workers_per_gather = 4
tpch sf = 20

Queries picking the gather merge path:

Query 2: 17678.570 - 16766.051
Query 3: 44357.977 - 44001.607
Query 4: 7763.992 - 7100.267
Query 5: 21828.874 - 21437.217
Query 12: 19067.318 - 20218.332
Query 17: 113895.084 - 104935.094
Query 18: 230650.193 - 191607.031

(attaching queries output file).

When work_mem is higher, the TPC-H queries choose a hash aggregate plan. For
some of the queries, if I force gather merge with the higher work_mem setting,
the results with GM are much better (example: query 9). It seems something is
wrong with the sort or hash aggregate costing, due to which the planner is
unable to pick GM in some cases (that needs more investigation, separately
from this thread).
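
For comparison, forcing the planner toward GM under a larger work_mem looks
roughly like this (a sketch only; enable_hashagg is the existing planner GUC,
enable_gathermerge is the one added by this patch, and the aggregate is just a
stand-in for the TPC-H query):

postgres=# set work_mem = '256MB';
postgres=# set max_parallel_workers_per_gather = 4;
postgres=# set enable_hashagg = off;  -- steer the planner away from HashAggregate
postgres=# explain (costs off)
           select l_orderkey, sum(l_quantity) from lineitem group by l_orderkey;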

Here are some of the other queries which perform about 2x faster with
gather merge, even with the higher work_mem settings.

Example:

postgres=# show work_mem ;
work_mem
----------
128MB
(1 row)

postgres=# show max_parallel_workers_per_gather ;
max_parallel_workers_per_gather
---------------------------------
4
(1 row)

postgres=# explain analyze select * from customer, orders where o_custkey =
c_custkey order by c_name;
QUERY
PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------
Gather Merge (cost=391019.23..929812.42 rows=4499894 width=274) (actual
time=21958.057..33453.440 rows=4500000 loops=1)
Workers Planned: 4
Workers Launched: 4
-> Sort (cost=390019.17..392831.61 rows=1124974 width=274) (actual
time=21023.906..22476.398 rows=900000 loops=5)
Sort Key: customer.c_name
Sort Method: external merge Disk: 270000kB
-> Hash Join (cost=21245.00..130833.13 rows=1124974 width=274)
(actual time=442.298..3300.924 rows=900000 loops=5)
Hash Cond: (orders.o_custkey = customer.c_custkey)
-> Parallel Seq Scan on orders (cost=0.00..94119.74
rows=1124974 width=111) (actual time=0.066..1026.268 rows=900000 loops=5)
-> Hash (cost=15620.00..15620.00 rows=450000 width=163)
(actual time=436.946..436.946 rows=450000 loops=5)
Buckets: 524288 Batches: 1 Memory Usage: 91930kB
-> Seq Scan on customer (cost=0.00..15620.00
rows=450000 width=163) (actual time=0.041..95.679 rows=450000 loops=5)
Planning time: 1.698 ms
Execution time: 33866.866 ms

postgres=# set enable_gathermerge = off;
SET
postgres=# explain analyze select * from customer, orders where o_custkey =
c_custkey order by c_name;
QUERY
PLAN
-------------------------------------------------------------------------------------------------------------------------------------
Sort (cost=1292720.11..1303969.84 rows=4499894 width=274) (actual
time=62937.054..70417.760 rows=4500000 loops=1)
Sort Key: customer.c_name
Sort Method: external merge Disk: 1298616kB
-> Hash Join (cost=21245.00..210987.48 rows=4499894 width=274) (actual
time=390.660..7373.668 rows=4500000 loops=1)
Hash Cond: (orders.o_custkey = customer.c_custkey)
-> Seq Scan on orders (cost=0.00..127868.94 rows=4499894
width=111) (actual time=0.120..1386.200 rows=4500000 loops=1)
-> Hash (cost=15620.00..15620.00 rows=450000 width=163) (actual
time=389.610..389.610 rows=450000 loops=1)
Buckets: 524288 Batches: 1 Memory Usage: 91930kB
-> Seq Scan on customer (cost=0.00..15620.00 rows=450000
width=163) (actual time=0.016..85.376 rows=450000 loops=1)
Planning time: 1.155 ms
Execution time: 70869.090 ms
(11 rows)

-- Force parallel sequential scan.
postgres=# set parallel_tuple_cost = 0.01;
SET
postgres=# explain analyze select * from customer, orders where o_custkey =
c_custkey order by c_name;
QUERY
PLAN
----------------------------------------------------------------------------------------------------------------------------------------------
Sort (cost=1258564.69..1269814.43 rows=4499894 width=274) (actual
time=59070.986..66452.565 rows=4500000 loops=1)
Sort Key: customer.c_name
Sort Method: external merge Disk: 1298600kB
-> Gather (cost=22245.00..176832.07 rows=4499894 width=274) (actual
time=353.397..3914.851 rows=4500000 loops=1)
Workers Planned: 4
Workers Launched: 4
-> Hash Join (cost=21245.00..130833.13 rows=1124974 width=274)
(actual time=358.574..2004.654 rows=900000 loops=5)
Hash Cond: (orders.o_custkey = customer.c_custkey)
-> Parallel Seq Scan on orders (cost=0.00..94119.74
rows=1124974 width=111) (actual time=0.096..293.176 rows=900000 loops=5)
-> Hash (cost=15620.00..15620.00 rows=450000 width=163)
(actual time=356.567..356.567 rows=450000 loops=5)
Buckets: 524288 Batches: 1 Memory Usage: 91930kB
-> Seq Scan on customer (cost=0.00..15620.00
rows=450000 width=163) (actual time=0.038..88.918 rows=450000 loops=5)
Planning time: 0.768 ms
Execution time: 66871.398 ms
(14 rows)

Another query:

postgres=# explain analyze select * from pgbench_accounts where filler like
'%foo%' order by aid;
QUERY
PLAN
------------------------------------------------------------------------------------------------------------------------------------------------
Gather Merge (cost=47108.00..70432.79 rows=194804 width=97) (actual
time=267.708..397.309 rows=200000 loops=1)
Workers Planned: 4
Workers Launched: 4
-> Sort (cost=46107.94..46229.69 rows=48701 width=97) (actual
time=260.969..268.848 rows=40000 loops=5)
Sort Key: aid
Sort Method: quicksort Memory: 6861kB
-> Parallel Seq Scan on pgbench_accounts (cost=0.00..42316.16
rows=48701 width=97) (actual time=210.499..225.161 rows=40000 loops=5)
Filter: (filler ~~ '%foo%'::text)
Rows Removed by Filter: 360000
Planning time: 0.120 ms
Execution time: 412.632 ms
(11 rows)

postgres=# set enable_gathermerge = off;
SET
postgres=# explain analyze select * from pgbench_accounts where filler like
'%foo%' order by aid;
QUERY
PLAN
-----------------------------------------------------------------------------------------------------------------------------------
Sort (cost=78181.90..78668.91 rows=194805 width=97) (actual
time=905.688..929.926 rows=200000 loops=1)
Sort Key: aid
Sort Method: quicksort Memory: 35832kB
-> Seq Scan on pgbench_accounts (cost=0.00..61066.65 rows=194805
width=97) (actual time=772.789..835.104 rows=200000 loops=1)
Filter: (filler ~~ '%foo%'::text)
Rows Removed by Filter: 1800000
Planning time: 0.151 ms
Execution time: 943.824 ms
(8 rows)

I think that with some of the other parallel operator patches, like parallel
bitmap scan, parallel hash join, etc., GM will get picked more often in the
TPC-H queries.

Regards,

On Mon, Feb 6, 2017 at 2:41 PM, Rushabh Lathia <rushabh.lathia@gmail.com>
wrote:

Thanks Neha for the test LCOV report.

I ran TPC-H at scale factor 10 with the latest patch, against the latest code
as of 1st Feb (f1169ab501ce90e035a7c6489013a1d4c250ac92).

- max_worker_processes = DEFAULT (8)
- max_parallel_workers_per_gather = 4
- A cold cache environment was ensured: before every query execution the
server was stopped and the OS caches were dropped.
- power2 machine with 512GB of RAM

Here are the results. I did three runs and took the median; the first
timing is without the patch and the second is with GM.

Query 3: 45035.425 - 43935.497
Query 4: 7098.259 - 6651.498
Query 5: 37114.338 - 37605.579
Query 9: 87544.144 - 44617.138
Query 10: 43810.497 - 37133.404
Query 12: 20309.993 - 19639.213
Query 15: 61837.415 - 60240.762
Query 17: 134121.961 - 116943.542
Query 18: 248157.735 - 193463.311
Query 20: 203448.405 - 166733.112

Also attaching the output of TPCH runs.

On Fri, Feb 3, 2017 at 5:56 PM, Neha Sharma <neha.sharma@enterprisedb.com>
wrote:

Hi,

I have done some testing with the latest patch

1) ./pgbench postgres -i -F 100 -s 20
2) update pgbench_accounts set filler = 'foo' where aid%10 = 0;
3) vacuum analyze pgbench_accounts;
4) set max_parallel_workers_per_gather = 4;
5) set max_parallel_workers = 4;

*Machine Configuration :-*
RAM :- 16GB
VCPU :- 8
Disk :- 640 GB

Test case script with out-file attached.

*LCOV Report :- *

File Name                                   Line Coverage          Function Coverage
                                            w/o tests  w/ tests    w/o tests  w/ tests
src/backend/executor/nodeGatherMerge.c      0.0 %      92.3 %      0.0 %      92.3 %
src/backend/commands/explain.c              65.5 %     68.4 %      81.7 %     85.0 %
src/backend/executor/execProcnode.c         92.5 %     95.1 %      100.0 %    100.0 %
src/backend/nodes/copyfuncs.c               77.2 %     77.6 %      73.0 %     73.4 %
src/backend/nodes/outfuncs.c                32.5 %     35.9 %      31.9 %     36.2 %
src/backend/nodes/readfuncs.c               62.7 %     68.2 %      53.3 %     61.7 %
src/backend/optimizer/path/allpaths.c       93.0 %     93.4 %      100.0 %    100.0 %
src/backend/optimizer/path/costsize.c       96.7 %     96.8 %      100.0 %    100.0 %
src/backend/optimizer/plan/createplan.c     89.9 %     91.2 %      95.0 %     96.0 %
src/backend/optimizer/plan/planner.c        95.1 %     95.2 %      97.3 %     97.3 %
src/backend/optimizer/plan/setrefs.c        94.7 %     94.7 %      97.1 %     97.1 %
src/backend/optimizer/plan/subselect.c      94.1 %     94.1 %      100.0 %    100.0 %
src/backend/optimizer/util/pathnode.c       95.6 %     96.1 %      100.0 %    100.0 %
src/backend/utils/misc/guc.c                67.4 %     67.4 %      91.9 %     91.9 %

On Wed, Feb 1, 2017 at 7:02 PM, Rushabh Lathia <rushabh.lathia@gmail.com>
wrote:

Due to the recent commit below, the patch was not applying cleanly on the
master branch.

commit d002f16c6ec38f76d1ee97367ba6af3000d441d0
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Mon Jan 30 17:15:42 2017 -0500

Add a regression test script dedicated to exercising system views.

Please find attached latest patch.

On Wed, Feb 1, 2017 at 5:55 PM, Rushabh Lathia <rushabh.lathia@gmail.com>
wrote:

I am sorry for the delay; here is the latest rebased patch.

My colleague Neha Sharma reported one regression with the patch, where the
explain output for the Sort node under GatherMerge was always showing the
cost as zero:

explain analyze select '' AS "xxx" from pgbench_accounts where filler
like '%foo%' order by aid;

QUERY PLAN

------------------------------------------------------------
------------------------------------------------------------
------------------------
Gather Merge (cost=47169.81..70839.91 rows=197688 width=36) (actual
time=406.297..653.572 rows=200000 loops=1)
Workers Planned: 4
Workers Launched: 4
-> Sort (*cost=0.00..0.00 rows=0 width=0*) (actual
time=368.945..391.124 rows=40000 loops=5)
Sort Key: aid
Sort Method: quicksort Memory: 3423kB
-> Parallel Seq Scan on pgbench_accounts
(cost=0.00..42316.60 rows=49422 width=36) (actual time=296.612..338.873
rows=40000 loops=5)
Filter: (filler ~~ '%foo%'::text)
Rows Removed by Filter: 360000
Planning time: 0.184 ms
Execution time: 734.963 ms

This patch also fixes that issue.

On Wed, Feb 1, 2017 at 11:27 AM, Michael Paquier <
michael.paquier@gmail.com> wrote:

On Mon, Jan 23, 2017 at 6:51 PM, Kuntal Ghosh
<kuntalghosh.2007@gmail.com> wrote:

On Wed, Jan 18, 2017 at 11:31 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

The patch needs a rebase after the commit 69f4b9c85f168ae006929eec4.

Is an update going to be provided? I have moved this patch to next CF
with "waiting on author" as status.
--
Michael

--
Rushabh Lathia

--
Rushabh Lathia


--

Regards,

Neha Sharma

--
Rushabh Lathia

--
Rushabh Lathia

Attachments:

without_gm.tar.gz
with_gm.tar.gz
#38Thomas Munro
thomas.munro@enterprisedb.com
In reply to: Rushabh Lathia (#33)
Re: Gather Merge

On Thu, Feb 2, 2017 at 2:32 AM, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:

Please find attached latest patch.

The latest patch still applies (with some fuzz), builds and the
regression tests pass.

I see that Robert made a number of changes and posted a v6 along with
some numbers which he described as lacklustre, but then fixed a row
estimate problem which was discouraging parallel joins (commit
0c2070ce). Rushabh posted a v7 and test results which look good. As
far as I can see there are no outstanding issues or unhandled review
feedback. I've had a fresh read through of the latest version and
have no further comments myself.

I've set this to ready-for-committer now. If I've misunderstood and
there are still unresolved issues from that earlier email exchange or
someone else wants to post a review or objection, then of course
please feel free to set it back.

BTW There is no regression test supplied. I see that commit 5262f7a4
adding parallel index scans put simple explain output in
"select_parallel" to demonstrate the new kind of plan being created;
perhaps this patch should do the same? I know it wouldn't really test
much of the code but it's at least something. Perhaps you could post
a new version with that?
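
Something along these lines would at least pin down the plan shape (a sketch
only, assuming the tenk1 table from the standard regression database; the real
test and its expected output would live alongside the existing parallel tests):

set parallel_setup_cost = 0;
set parallel_tuple_cost = 0;
set max_parallel_workers_per_gather = 4;
explain (costs off)
  select string4, count(*) from tenk1 group by string4 order by string4;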

--
Thomas Munro
http://www.enterprisedb.com


#39Amit Kapila
amit.kapila16@gmail.com
In reply to: Thomas Munro (#38)
Re: Gather Merge

On Fri, Feb 17, 2017 at 3:59 PM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:

On Thu, Feb 2, 2017 at 2:32 AM, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:

Please find attached latest patch.

The latest patch still applies (with some fuzz), builds and the
regression tests pass.

I see that Robert made a number of changes and posted a v6 along with
some numbers which he described as lacklustre, but then fixed a row
estimate problem which was discouraging parallel joins (commit
0c2070ce). Rushabh posted a v7 and test results which look good.

Are you suggesting that commit 0c2070ce has helped to improve
performance? If so, I don't think that has been proved. I guess the
numbers are different either due to a different machine or some other
settings like scale factor or work_mem.

As
far as I can see there are no outstanding issues or unhandled review
feedback. I've had a fresh read through of the latest version and
have no further comments myself.

I've set this to ready-for-committer now. If I've misunderstood and
there are still unresolved issues from that earlier email exchange or
someone else wants to post a review or objection, then of course
please feel free to set it back.

BTW There is no regression test supplied. I see that commit 5262f7a4
adding parallel index scans put simple explain output in
"select_parallel" to demonstrate the new kind of plan being created;

It has added both an explain statement test and a test to exercise
the parallel index scan code.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#40Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Thomas Munro (#38)
1 attachment(s)
Re: Gather Merge

On Fri, Feb 17, 2017 at 3:59 PM, Thomas Munro <thomas.munro@enterprisedb.com>
wrote:

On Thu, Feb 2, 2017 at 2:32 AM, Rushabh Lathia <rushabh.lathia@gmail.com>
wrote:

Please find attached latest patch.

The latest patch still applies (with some fuzz), builds and the
regression tests pass.

Attached latest patch, which applies cleanly on latest source.

I see that Robert made a number of changes and posted a v6 along with
some numbers which he described as lacklustre, but then fixed a row
estimate problem which was discouraging parallel joins (commit
0c2070ce). Rushabh posted a v7 and test results which look good. As
far as I can see there are no outstanding issues or unhandled review
feedback. I've had a fresh read through of the latest version and
have no further comments myself.

I've set this to ready-for-committer now. If I've misunderstood and
there are still unresolved issues from that earlier email exchange or
someone else wants to post a review or objection, then of course
please feel free to set it back.

Thanks Thomas.

BTW There is no regression test supplied. I see that commit 5262f7a4
adding parallel index scans put simple explain output in
"select_parallel" to demonstrate the new kind of plan being created;
perhaps this patch should do the same? I know it wouldn't really test
much of the code but it's at least something. Perhaps you could post
a new version with that?

Added the regression test to the new version of the patch.

PFA latest patch.

--
Rushabh Lathia
www.EnterpriseDB.com

Attachments:

gather-merge-v8.patch
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 95afc2c..e7dbbff 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -3496,6 +3496,20 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-enable-gathermerge" xreflabel="enable_gathermerge">
+      <term><varname>enable_gathermerge</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>enable_gathermerge</> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        Enables or disables the query planner's use of gather
+        merge plan types. The default is <literal>on</>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-enable-hashagg" xreflabel="enable_hashagg">
       <term><varname>enable_hashagg</varname> (<type>boolean</type>)
       <indexterm>
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index c9e0a3e..0bcee3f 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -905,6 +905,9 @@ ExplainNode(PlanState *planstate, List *ancestors,
 		case T_Gather:
 			pname = sname = "Gather";
 			break;
+		case T_GatherMerge:
+			pname = sname = "Gather Merge";
+			break;
 		case T_IndexScan:
 			pname = sname = "Index Scan";
 			break;
@@ -1394,6 +1397,26 @@ ExplainNode(PlanState *planstate, List *ancestors,
 					ExplainPropertyBool("Single Copy", gather->single_copy, es);
 			}
 			break;
+		case T_GatherMerge:
+			{
+				GatherMerge *gm = (GatherMerge *) plan;
+
+				show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+				if (plan->qual)
+					show_instrumentation_count("Rows Removed by Filter", 1,
+											   planstate, es);
+				ExplainPropertyInteger("Workers Planned",
+									   gm->num_workers, es);
+				if (es->analyze)
+				{
+					int			nworkers;
+
+					nworkers = ((GatherMergeState *) planstate)->nworkers_launched;
+					ExplainPropertyInteger("Workers Launched",
+										   nworkers, es);
+				}
+			}
+			break;
 		case T_FunctionScan:
 			if (es->verbose)
 			{
diff --git a/src/backend/executor/Makefile b/src/backend/executor/Makefile
index 2a2b7eb..c95747e 100644
--- a/src/backend/executor/Makefile
+++ b/src/backend/executor/Makefile
@@ -20,7 +20,7 @@ OBJS = execAmi.o execCurrent.o execGrouping.o execIndexing.o execJunk.o \
        nodeBitmapHeapscan.o nodeBitmapIndexscan.o \
        nodeCustom.o nodeFunctionscan.o nodeGather.o \
        nodeHash.o nodeHashjoin.o nodeIndexscan.o nodeIndexonlyscan.o \
-       nodeLimit.o nodeLockRows.o \
+       nodeLimit.o nodeLockRows.o nodeGatherMerge.o \
        nodeMaterial.o nodeMergeAppend.o nodeMergejoin.o nodeModifyTable.o \
        nodeNestloop.o nodeProjectSet.o nodeRecursiveunion.o nodeResult.o \
        nodeSamplescan.o nodeSeqscan.o nodeSetOp.o nodeSort.o nodeUnique.o \
diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index 0dd95c6..f00496b 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -89,6 +89,7 @@
 #include "executor/nodeForeignscan.h"
 #include "executor/nodeFunctionscan.h"
 #include "executor/nodeGather.h"
+#include "executor/nodeGatherMerge.h"
 #include "executor/nodeGroup.h"
 #include "executor/nodeHash.h"
 #include "executor/nodeHashjoin.h"
@@ -320,6 +321,11 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
 												  estate, eflags);
 			break;
 
+		case T_GatherMerge:
+			result = (PlanState *) ExecInitGatherMerge((GatherMerge *) node,
+													   estate, eflags);
+			break;
+
 		case T_Hash:
 			result = (PlanState *) ExecInitHash((Hash *) node,
 												estate, eflags);
@@ -525,6 +531,10 @@ ExecProcNode(PlanState *node)
 			result = ExecGather((GatherState *) node);
 			break;
 
+		case T_GatherMergeState:
+			result = ExecGatherMerge((GatherMergeState *) node);
+			break;
+
 		case T_HashState:
 			result = ExecHash((HashState *) node);
 			break;
@@ -687,6 +697,10 @@ ExecEndNode(PlanState *node)
 			ExecEndGather((GatherState *) node);
 			break;
 
+		case T_GatherMergeState:
+			ExecEndGatherMerge((GatherMergeState *) node);
+			break;
+
 		case T_IndexScanState:
 			ExecEndIndexScan((IndexScanState *) node);
 			break;
@@ -820,6 +834,9 @@ ExecShutdownNode(PlanState *node)
 		case T_GatherState:
 			ExecShutdownGather((GatherState *) node);
 			break;
+		case T_GatherMergeState:
+			ExecShutdownGatherMerge((GatherMergeState *) node);
+			break;
 		default:
 			break;
 	}
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
new file mode 100644
index 0000000..84c1677
--- /dev/null
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -0,0 +1,687 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.c
+ *		Scan a plan in multiple workers, and do order-preserving merge.
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/backend/executor/nodeGatherMerge.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/relscan.h"
+#include "access/xact.h"
+#include "executor/execdebug.h"
+#include "executor/execParallel.h"
+#include "executor/nodeGatherMerge.h"
+#include "executor/nodeSubplan.h"
+#include "executor/tqueue.h"
+#include "lib/binaryheap.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+#include "utils/rel.h"
+
+/*
+ * Tuple array for each worker
+ */
+typedef struct GMReaderTupleBuffer
+{
+	HeapTuple  *tuple;
+	int			readCounter;
+	int			nTuples;
+	bool		done;
+}	GMReaderTupleBuffer;
+
+/*
+ * When we read tuples from workers, it's a good idea to read several at once
+ * for efficiency when possible: this minimizes context-switching overhead.
+ * But reading too many at a time wastes memory without improving performance.
+ */
+#define MAX_TUPLE_STORE 10
+
+static int32 heap_compare_slots(Datum a, Datum b, void *arg);
+static TupleTableSlot *gather_merge_getnext(GatherMergeState *gm_state);
+static HeapTuple gm_readnext_tuple(GatherMergeState *gm_state, int nreader,
+				  bool nowait, bool *done);
+static void gather_merge_init(GatherMergeState *gm_state);
+static void ExecShutdownGatherMergeWorkers(GatherMergeState *node);
+static bool gather_merge_readnext(GatherMergeState *gm_state, int reader,
+					  bool nowait);
+static void form_tuple_array(GatherMergeState *gm_state, int reader);
+
+/* ----------------------------------------------------------------
+ *		ExecInitGatherMerge
+ * ----------------------------------------------------------------
+ */
+GatherMergeState *
+ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags)
+{
+	GatherMergeState *gm_state;
+	Plan	   *outerNode;
+	bool		hasoid;
+	TupleDesc	tupDesc;
+
+	/* Gather merge node doesn't have innerPlan node. */
+	Assert(innerPlan(node) == NULL);
+
+	/*
+	 * create state structure
+	 */
+	gm_state = makeNode(GatherMergeState);
+	gm_state->ps.plan = (Plan *) node;
+	gm_state->ps.state = estate;
+
+	/*
+	 * Miscellaneous initialization
+	 *
+	 * create expression context for node
+	 */
+	ExecAssignExprContext(estate, &gm_state->ps);
+
+	/*
+	 * initialize child expressions
+	 */
+	gm_state->ps.targetlist = (List *)
+		ExecInitExpr((Expr *) node->plan.targetlist,
+					 (PlanState *) gm_state);
+	gm_state->ps.qual = (List *)
+		ExecInitExpr((Expr *) node->plan.qual,
+					 (PlanState *) gm_state);
+
+	/*
+	 * tuple table initialization
+	 */
+	ExecInitResultTupleSlot(estate, &gm_state->ps);
+
+	/*
+	 * now initialize outer plan
+	 */
+	outerNode = outerPlan(node);
+	outerPlanState(gm_state) = ExecInitNode(outerNode, estate, eflags);
+
+	/*
+	 * Initialize result tuple type and projection info.
+	 */
+	ExecAssignResultTypeFromTL(&gm_state->ps);
+	ExecAssignProjectionInfo(&gm_state->ps, NULL);
+
+	gm_state->gm_initialized = false;
+
+	/*
+	 * initialize sort-key information
+	 */
+	if (node->numCols)
+	{
+		int			i;
+
+		gm_state->gm_nkeys = node->numCols;
+		gm_state->gm_sortkeys =
+			palloc0(sizeof(SortSupportData) * node->numCols);
+
+		for (i = 0; i < node->numCols; i++)
+		{
+			SortSupport sortKey = gm_state->gm_sortkeys + i;
+
+			sortKey->ssup_cxt = CurrentMemoryContext;
+			sortKey->ssup_collation = node->collations[i];
+			sortKey->ssup_nulls_first = node->nullsFirst[i];
+			sortKey->ssup_attno = node->sortColIdx[i];
+
+			/*
+			 * We don't perform abbreviated key conversion here, for the same
+			 * reasons that it isn't used in MergeAppend
+			 */
+			sortKey->abbreviate = false;
+
+			PrepareSortSupportFromOrderingOp(node->sortOperators[i], sortKey);
+		}
+	}
+
+	/*
+	 * store the tuple descriptor into gather merge state, so we can use it
+	 * later while initializing the gather merge slots.
+	 */
+	if (!ExecContextForcesOids(&gm_state->ps, &hasoid))
+		hasoid = false;
+	tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid);
+	gm_state->tupDesc = tupDesc;
+
+	return gm_state;
+}
+
+/* ----------------------------------------------------------------
+ *		ExecGatherMerge(node)
+ *
+ *		Scans the relation via multiple workers and returns
+ *		the next qualifying tuple.
+ * ----------------------------------------------------------------
+ */
+TupleTableSlot *
+ExecGatherMerge(GatherMergeState *node)
+{
+	TupleTableSlot *slot;
+	ExprContext *econtext;
+	int			i;
+
+	/*
+	 * As with Gather, we don't launch workers until this node is actually
+	 * executed.
+	 */
+	if (!node->initialized)
+	{
+		EState	   *estate = node->ps.state;
+		GatherMerge *gm = (GatherMerge *) node->ps.plan;
+
+		/*
+		 * Sometimes we might have to run without parallelism; but if parallel
+		 * mode is active then we can try to fire up some workers.
+		 */
+		if (gm->num_workers > 0 && IsInParallelMode())
+		{
+			ParallelContext *pcxt;
+
+			/* Initialize data structures for workers. */
+			if (!node->pei)
+				node->pei = ExecInitParallelPlan(node->ps.lefttree,
+												 estate,
+												 gm->num_workers);
+
+			/* Try to launch workers. */
+			pcxt = node->pei->pcxt;
+			LaunchParallelWorkers(pcxt);
+			node->nworkers_launched = pcxt->nworkers_launched;
+
+			/* Set up tuple queue readers to read the results. */
+			if (pcxt->nworkers_launched > 0)
+			{
+				node->nreaders = 0;
+				node->reader = palloc(pcxt->nworkers_launched *
+									  sizeof(TupleQueueReader *));
+
+				Assert(gm->numCols);
+
+				for (i = 0; i < pcxt->nworkers_launched; ++i)
+				{
+					shm_mq_set_handle(node->pei->tqueue[i],
+									  pcxt->worker[i].bgwhandle);
+					node->reader[node->nreaders++] =
+						CreateTupleQueueReader(node->pei->tqueue[i],
+											   node->tupDesc);
+				}
+			}
+			else
+			{
+				/* No workers?	Then never mind. */
+				ExecShutdownGatherMergeWorkers(node);
+			}
+		}
+
+		/* always allow leader to participate */
+		node->need_to_scan_locally = true;
+		node->initialized = true;
+	}
+
+	/*
+	 * Reset per-tuple memory context to free any expression evaluation
+	 * storage allocated in the previous tuple cycle.
+	 */
+	econtext = node->ps.ps_ExprContext;
+	ResetExprContext(econtext);
+
+	/*
+	 * Get next tuple, either from one of our workers, or by running the
+	 * plan ourselves.
+	 */
+	slot = gather_merge_getnext(node);
+	if (TupIsNull(slot))
+		return NULL;
+
+	/*
+	 * form the result tuple using ExecProject(), and return it --- unless
+	 * the projection produces an empty set, in which case we must loop
+	 * back around for another tuple
+	 */
+	econtext->ecxt_outertuple = slot;
+	return ExecProject(node->ps.ps_ProjInfo);
+}
+
+/* ----------------------------------------------------------------
+ *		ExecEndGatherMerge
+ *
+ *		frees any storage allocated through C routines.
+ * ----------------------------------------------------------------
+ */
+void
+ExecEndGatherMerge(GatherMergeState *node)
+{
+	ExecShutdownGatherMerge(node);
+	ExecFreeExprContext(&node->ps);
+	ExecClearTuple(node->ps.ps_ResultTupleSlot);
+	ExecEndNode(outerPlanState(node));
+}
+
+/* ----------------------------------------------------------------
+ *		ExecShutdownGatherMerge
+ *
+ *		Destroy the setup for parallel workers including parallel context.
+ *		Collect all the stats after workers are stopped, else some work
+ *		done by workers won't be accounted.
+ * ----------------------------------------------------------------
+ */
+void
+ExecShutdownGatherMerge(GatherMergeState *node)
+{
+	ExecShutdownGatherMergeWorkers(node);
+
+	/* Now destroy the parallel context. */
+	if (node->pei != NULL)
+	{
+		ExecParallelCleanup(node->pei);
+		node->pei = NULL;
+	}
+}
+
+/* ----------------------------------------------------------------
+ *		ExecShutdownGatherMergeWorkers
+ *
+ *		Destroy the parallel workers.  Collect all the stats after
+ *		workers are stopped, else some work done by workers won't be
+ *		accounted.
+ * ----------------------------------------------------------------
+ */
+static void
+ExecShutdownGatherMergeWorkers(GatherMergeState *node)
+{
+	/* Shut down tuple queue readers before shutting down workers. */
+	if (node->reader != NULL)
+	{
+		int			i;
+
+		for (i = 0; i < node->nreaders; ++i)
+			if (node->reader[i])
+				DestroyTupleQueueReader(node->reader[i]);
+
+		pfree(node->reader);
+		node->reader = NULL;
+	}
+
+	/* Now shut down the workers. */
+	if (node->pei != NULL)
+		ExecParallelFinish(node->pei);
+}
+
+/* ----------------------------------------------------------------
+ *		ExecReScanGatherMerge
+ *
+ *		Re-initialize the workers and rescans a relation via them.
+ * ----------------------------------------------------------------
+ */
+void
+ExecReScanGatherMerge(GatherMergeState *node)
+{
+	/*
+	 * Re-initialize the parallel workers to perform rescan of relation. We
+	 * want to gracefully shutdown all the workers so that they should be able
+	 * to propagate any error or other information to master backend before
+	 * dying.  Parallel context will be reused for rescan.
+	 */
+	ExecShutdownGatherMergeWorkers(node);
+
+	node->initialized = false;
+
+	if (node->pei)
+		ExecParallelReinitialize(node->pei);
+
+	ExecReScan(node->ps.lefttree);
+}
+
+/*
+ * Initialize the Gather merge tuple read.
+ *
+ * Pull at least a single tuple from each worker + leader and set up the heap.
+ */
+static void
+gather_merge_init(GatherMergeState *gm_state)
+{
+	int			nreaders = gm_state->nreaders;
+	bool		initialize = true;
+	int			i;
+
+	/*
+	 * Allocate gm_slots for the number of worker + one more slot for leader.
+	 * Last slot is always for leader. Leader always calls ExecProcNode() to
+	 * read the tuple which will return the TupleTableSlot. Later it will
+	 * directly get assigned to gm_slot. So just initialize leader gm_slot
+	 * with NULL. For other slots below code will call
+	 * ExecInitExtraTupleSlot() which will do the initialization of worker
+	 * slots.
+	 */
+	gm_state->gm_slots =
+		palloc((gm_state->nreaders + 1) * sizeof(TupleTableSlot *));
+	gm_state->gm_slots[gm_state->nreaders] = NULL;
+
+	/* Initialize the tuple slot and tuple array for each worker */
+	gm_state->gm_tuple_buffers =
+		(GMReaderTupleBuffer *) palloc0(sizeof(GMReaderTupleBuffer) *
+										(gm_state->nreaders + 1));
+	for (i = 0; i < gm_state->nreaders; i++)
+	{
+		/* Allocate the tuple array with MAX_TUPLE_STORE size */
+		gm_state->gm_tuple_buffers[i].tuple =
+			(HeapTuple *) palloc0(sizeof(HeapTuple) * MAX_TUPLE_STORE);
+
+		/* Initialize slot for worker */
+		gm_state->gm_slots[i] = ExecInitExtraTupleSlot(gm_state->ps.state);
+		ExecSetSlotDescriptor(gm_state->gm_slots[i],
+							  gm_state->tupDesc);
+	}
+
+	/* Allocate the resources for the merge */
+	gm_state->gm_heap = binaryheap_allocate(gm_state->nreaders + 1,
+											heap_compare_slots,
+											gm_state);
+
+	/*
+	 * First, try to read a tuple from each worker (including leader) in
+	 * nowait mode, so that we initialize read from each worker as well as
+	 * leader. After this, if all active workers are unable to produce a
+	 * tuple, then re-read and this time use wait mode. For workers that were
+	 * able to produce a tuple in the earlier loop and are still active, just
+	 * try to fill the tuple array if more tuples are available.
+	 */
+reread:
+	for (i = 0; i < nreaders + 1; i++)
+	{
+		if (!gm_state->gm_tuple_buffers[i].done &&
+			(TupIsNull(gm_state->gm_slots[i]) ||
+			 gm_state->gm_slots[i]->tts_isempty))
+		{
+			if (gather_merge_readnext(gm_state, i, initialize))
+			{
+				binaryheap_add_unordered(gm_state->gm_heap,
+										 Int32GetDatum(i));
+			}
+		}
+		else
+			form_tuple_array(gm_state, i);
+	}
+	initialize = false;
+
+	for (i = 0; i < nreaders; i++)
+		if (!gm_state->gm_tuple_buffers[i].done &&
+			(TupIsNull(gm_state->gm_slots[i]) ||
+			 gm_state->gm_slots[i]->tts_isempty))
+			goto reread;
+
+	binaryheap_build(gm_state->gm_heap);
+	gm_state->gm_initialized = true;
+}
+
+/*
+ * Clear out a slot in the tuple table for each gather merge
+ * slot and return the cleared slot.
+ */
+static TupleTableSlot *
+gather_merge_clear_slots(GatherMergeState *gm_state)
+{
+	int			i;
+
+	for (i = 0; i < gm_state->nreaders; i++)
+	{
+		pfree(gm_state->gm_tuple_buffers[i].tuple);
+		gm_state->gm_slots[i] = ExecClearTuple(gm_state->gm_slots[i]);
+	}
+
+	/* Free tuple array as we don't need it any more */
+	pfree(gm_state->gm_tuple_buffers);
+	/* Free the binaryheap, which was created for sort */
+	binaryheap_free(gm_state->gm_heap);
+
+	/* return any clear slot */
+	return gm_state->gm_slots[0];
+}
+
+/*
+ * Read the next tuple for gather merge.
+ *
+ * Fetch the sorted tuple out of the heap.
+ */
+static TupleTableSlot *
+gather_merge_getnext(GatherMergeState *gm_state)
+{
+	int			i;
+
+	/*
+	 * First time through: pull the first tuple from each participant, and set
+	 * up the heap.
+	 */
+	if (gm_state->gm_initialized == false)
+		gather_merge_init(gm_state);
+	else
+	{
+		/*
+		 * Otherwise, pull the next tuple from whichever participant we
+		 * returned from last time, and reinsert the index into the heap,
+		 * because it might now compare differently against the existing
+		 * elements of the heap.
+		 */
+		i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+
+		if (gather_merge_readnext(gm_state, i, false))
+			binaryheap_replace_first(gm_state->gm_heap, Int32GetDatum(i));
+		else
+			(void) binaryheap_remove_first(gm_state->gm_heap);
+	}
+
+	if (binaryheap_empty(gm_state->gm_heap))
+	{
+		/* All the queues are exhausted, and so is the heap */
+		return gather_merge_clear_slots(gm_state);
+	}
+	else
+	{
+		i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+		return gm_state->gm_slots[i];
+	}
+
+	return gather_merge_clear_slots(gm_state);
+}
+
+/*
+ * Read the tuple for given reader in nowait mode, and form the tuple array.
+ */
+static void
+form_tuple_array(GatherMergeState *gm_state, int reader)
+{
+	GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+	int			i;
+
+	/* Last slot is for leader and we don't build tuple array for leader */
+	if (reader == gm_state->nreaders)
+		return;
+
+	/*
+	 * We're here because we already read all the tuples from the tuple array, so
+	 * initialize the counter to zero.
+	 */
+	if (tuple_buffer->nTuples == tuple_buffer->readCounter)
+		tuple_buffer->nTuples = tuple_buffer->readCounter = 0;
+
+	/* Tuple array is already full? */
+	if (tuple_buffer->nTuples == MAX_TUPLE_STORE)
+		return;
+
+	for (i = tuple_buffer->nTuples; i < MAX_TUPLE_STORE; i++)
+	{
+		tuple_buffer->tuple[i] = heap_copytuple(gm_readnext_tuple(gm_state,
+																  reader,
+																  false,
+													   &tuple_buffer->done));
+		if (!HeapTupleIsValid(tuple_buffer->tuple[i]))
+			break;
+		tuple_buffer->nTuples++;
+	}
+}
+
+/*
+ * Store the next tuple for a given reader into the appropriate slot.
+ *
+ * Returns false if the reader is exhausted, and true otherwise.
+ */
+static bool
+gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait)
+{
+	GMReaderTupleBuffer *tuple_buffer;
+	HeapTuple	tup = NULL;
+
+	/*
+	 * If we're being asked to generate a tuple from the leader, then we
+	 * just call ExecProcNode as normal to produce one.
+	 */
+	if (gm_state->nreaders == reader)
+	{
+		if (gm_state->need_to_scan_locally)
+		{
+			PlanState  *outerPlan = outerPlanState(gm_state);
+			TupleTableSlot *outerTupleSlot;
+
+			outerTupleSlot = ExecProcNode(outerPlan);
+
+			if (!TupIsNull(outerTupleSlot))
+			{
+				gm_state->gm_slots[reader] = outerTupleSlot;
+				return true;
+			}
+			gm_state->gm_tuple_buffers[reader].done = true;
+			gm_state->need_to_scan_locally = false;
+		}
+		return false;
+	}
+
+	/* Otherwise, check the state of the relevant tuple buffer. */
+	tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+
+	if (tuple_buffer->nTuples > tuple_buffer->readCounter)
+	{
+		/* Return any tuple previously read that is still buffered. */
+		tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+		tup = tuple_buffer->tuple[tuple_buffer->readCounter++];
+	}
+	else if (tuple_buffer->done)
+	{
+		/* Reader is known to be exhausted. */
+		DestroyTupleQueueReader(gm_state->reader[reader]);
+		gm_state->reader[reader] = NULL;
+		return false;
+	}
+	else
+	{
+		/* Read and buffer next tuple. */
+		tup = heap_copytuple(gm_readnext_tuple(gm_state,
+											   reader,
+											   nowait,
+											   &tuple_buffer->done));
+
+		/*
+		 * Attempt to read more tuples in nowait mode and store them in
+		 * the tuple array.
+		 */
+		if (HeapTupleIsValid(tup))
+			form_tuple_array(gm_state, reader);
+		else
+			return false;
+	}
+
+	Assert(HeapTupleIsValid(tup));
+
+	/* Build the TupleTableSlot for the given tuple */
+	ExecStoreTuple(tup,			/* tuple to store */
+				   gm_state->gm_slots[reader],	/* slot in which to store the
+												 * tuple */
+				   InvalidBuffer,		/* buffer associated with this tuple */
+				   true);		/* pfree this pointer if not from heap */
+
+	return true;
+}
+
+/*
+ * Attempt to read a tuple from given reader.
+ */
+static HeapTuple
+gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait,
+				  bool *done)
+{
+	TupleQueueReader *reader;
+	HeapTuple	tup = NULL;
+	MemoryContext oldContext;
+	MemoryContext tupleContext;
+
+	tupleContext = gm_state->ps.ps_ExprContext->ecxt_per_tuple_memory;
+
+	if (done != NULL)
+		*done = false;
+
+	/* Check for async events, particularly messages from workers. */
+	CHECK_FOR_INTERRUPTS();
+
+	/* Attempt to read a tuple. */
+	reader = gm_state->reader[nreader];
+
+	/* Run TupleQueueReaders in per-tuple context */
+	oldContext = MemoryContextSwitchTo(tupleContext);
+	tup = TupleQueueReaderNext(reader, nowait, done);
+	MemoryContextSwitchTo(oldContext);
+
+	return tup;
+}
+
+/*
+ * We have one slot for each item in the heap array.  We use SlotNumber
+ * to store slot indexes.  This doesn't actually provide any formal
+ * type-safety, but it makes the code more self-documenting.
+ */
+typedef int32 SlotNumber;
+
+/*
+ * Compare the tuples in the two given slots.
+ */
+static int32
+heap_compare_slots(Datum a, Datum b, void *arg)
+{
+	GatherMergeState *node = (GatherMergeState *) arg;
+	SlotNumber	slot1 = DatumGetInt32(a);
+	SlotNumber	slot2 = DatumGetInt32(b);
+
+	TupleTableSlot *s1 = node->gm_slots[slot1];
+	TupleTableSlot *s2 = node->gm_slots[slot2];
+	int			nkey;
+
+	Assert(!TupIsNull(s1));
+	Assert(!TupIsNull(s2));
+
+	for (nkey = 0; nkey < node->gm_nkeys; nkey++)
+	{
+		SortSupport sortKey = node->gm_sortkeys + nkey;
+		AttrNumber	attno = sortKey->ssup_attno;
+		Datum		datum1,
+					datum2;
+		bool		isNull1,
+					isNull2;
+		int			compare;
+
+		datum1 = slot_getattr(s1, attno, &isNull1);
+		datum2 = slot_getattr(s2, attno, &isNull2);
+
+		compare = ApplySortComparator(datum1, isNull1,
+									  datum2, isNull2,
+									  sortKey);
+		if (compare != 0)
+			return -compare;
+	}
+	return 0;
+}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 05d8538..763a27f 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -359,6 +359,31 @@ _copyGather(const Gather *from)
 	return newnode;
 }
 
+/*
+ * _copyGatherMerge
+ */
+static GatherMerge *
+_copyGatherMerge(const GatherMerge *from)
+{
+	GatherMerge	   *newnode = makeNode(GatherMerge);
+
+	/*
+	 * copy node superclass fields
+	 */
+	CopyPlanFields((const Plan *) from, (Plan *) newnode);
+
+	/*
+	 * copy remainder of node
+	 */
+	COPY_SCALAR_FIELD(num_workers);
+	COPY_SCALAR_FIELD(numCols);
+	COPY_POINTER_FIELD(sortColIdx, from->numCols * sizeof(AttrNumber));
+	COPY_POINTER_FIELD(sortOperators, from->numCols * sizeof(Oid));
+	COPY_POINTER_FIELD(collations, from->numCols * sizeof(Oid));
+	COPY_POINTER_FIELD(nullsFirst, from->numCols * sizeof(bool));
+
+	return newnode;
+}
 
 /*
  * CopyScanFields
@@ -4523,6 +4548,9 @@ copyObject(const void *from)
 		case T_Gather:
 			retval = _copyGather(from);
 			break;
+		case T_GatherMerge:
+			retval = _copyGatherMerge(from);
+			break;
 		case T_SeqScan:
 			retval = _copySeqScan(from);
 			break;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index b3802b4..afb0fc6 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -457,6 +457,35 @@ _outGather(StringInfo str, const Gather *node)
 }
 
 static void
+_outGatherMerge(StringInfo str, const GatherMerge *node)
+{
+	int		i;
+
+	WRITE_NODE_TYPE("GATHERMERGE");
+
+	_outPlanInfo(str, (const Plan *) node);
+
+	WRITE_INT_FIELD(num_workers);
+	WRITE_INT_FIELD(numCols);
+
+	appendStringInfoString(str, " :sortColIdx");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %d", node->sortColIdx[i]);
+
+	appendStringInfoString(str, " :sortOperators");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %u", node->sortOperators[i]);
+
+	appendStringInfoString(str, " :collations");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %u", node->collations[i]);
+
+	appendStringInfoString(str, " :nullsFirst");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %s", booltostr(node->nullsFirst[i]));
+}
+
+static void
 _outScan(StringInfo str, const Scan *node)
 {
 	WRITE_NODE_TYPE("SCAN");
@@ -1985,6 +2014,17 @@ _outLimitPath(StringInfo str, const LimitPath *node)
 }
 
 static void
+_outGatherMergePath(StringInfo str, const GatherMergePath *node)
+{
+	WRITE_NODE_TYPE("GATHERMERGEPATH");
+
+	_outPathInfo(str, (const Path *) node);
+
+	WRITE_NODE_FIELD(subpath);
+	WRITE_INT_FIELD(num_workers);
+}
+
+static void
 _outNestPath(StringInfo str, const NestPath *node)
 {
 	WRITE_NODE_TYPE("NESTPATH");
@@ -3410,6 +3450,9 @@ outNode(StringInfo str, const void *obj)
 			case T_Gather:
 				_outGather(str, obj);
 				break;
+			case T_GatherMerge:
+				_outGatherMerge(str, obj);
+				break;
 			case T_Scan:
 				_outScan(str, obj);
 				break;
@@ -3740,6 +3783,9 @@ outNode(StringInfo str, const void *obj)
 			case T_LimitPath:
 				_outLimitPath(str, obj);
 				break;
+			case T_GatherMergePath:
+				_outGatherMergePath(str, obj);
+				break;
 			case T_NestPath:
 				_outNestPath(str, obj);
 				break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index d2f69fe..c01e741 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -2095,6 +2095,26 @@ _readGather(void)
 }
 
 /*
+ * _readGatherMerge
+ */
+static GatherMerge *
+_readGatherMerge(void)
+{
+	READ_LOCALS(GatherMerge);
+
+	ReadCommonPlan(&local_node->plan);
+
+	READ_INT_FIELD(num_workers);
+	READ_INT_FIELD(numCols);
+	READ_ATTRNUMBER_ARRAY(sortColIdx, local_node->numCols);
+	READ_OID_ARRAY(sortOperators, local_node->numCols);
+	READ_OID_ARRAY(collations, local_node->numCols);
+	READ_BOOL_ARRAY(nullsFirst, local_node->numCols);
+
+	READ_DONE();
+}
+
+/*
  * _readHash
  */
 static Hash *
@@ -2530,6 +2550,8 @@ parseNodeString(void)
 		return_value = _readUnique();
 	else if (MATCH("GATHER", 6))
 		return_value = _readGather();
+	else if (MATCH("GATHERMERGE", 11))
+		return_value = _readGatherMerge();
 	else if (MATCH("HASH", 4))
 		return_value = _readHash();
 	else if (MATCH("SETOP", 5))
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index eeacf81..38da080 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -2047,39 +2047,51 @@ set_worktable_pathlist(PlannerInfo *root, RelOptInfo *rel, RangeTblEntry *rte)
 
 /*
  * generate_gather_paths
- *		Generate parallel access paths for a relation by pushing a Gather on
- *		top of a partial path.
+ *		Generate parallel access paths for a relation by pushing a Gather or
+ *		Gather Merge on top of a partial path.
  *
  * This must not be called until after we're done creating all partial paths
  * for the specified relation.  (Otherwise, add_partial_path might delete a
- * path that some GatherPath has a reference to.)
+ * path that some GatherPath or GatherMergePath has a reference to.)
  */
 void
 generate_gather_paths(PlannerInfo *root, RelOptInfo *rel)
 {
 	Path	   *cheapest_partial_path;
 	Path	   *simple_gather_path;
+	ListCell   *lc;
 
 	/* If there are no partial paths, there's nothing to do here. */
 	if (rel->partial_pathlist == NIL)
 		return;
 
 	/*
-	 * The output of Gather is currently always unsorted, so there's only one
-	 * partial path of interest: the cheapest one.  That will be the one at
-	 * the front of partial_pathlist because of the way add_partial_path
-	 * works.
-	 *
-	 * Eventually, we should have a Gather Merge operation that can merge
-	 * multiple tuple streams together while preserving their ordering.  We
-	 * could usefully generate such a path from each partial path that has
-	 * non-NIL pathkeys.
+	 * The output of Gather is always unsorted, so there's only one partial
+	 * path of interest: the cheapest one.  That will be the one at the front
+	 * of partial_pathlist because of the way add_partial_path works.
 	 */
 	cheapest_partial_path = linitial(rel->partial_pathlist);
 	simple_gather_path = (Path *)
 		create_gather_path(root, rel, cheapest_partial_path, rel->reltarget,
 						   NULL, NULL);
 	add_path(rel, simple_gather_path);
+
+	/*
+	 * For each useful ordering, we can consider an order-preserving Gather
+	 * Merge.
+	 */
+	foreach (lc, rel->partial_pathlist)
+	{
+		Path   *subpath = (Path *) lfirst(lc);
+		GatherMergePath   *path;
+
+		if (subpath->pathkeys == NIL)
+			continue;
+
+		path = create_gather_merge_path(root, rel, subpath, rel->reltarget,
+										subpath->pathkeys, NULL, NULL);
+		add_path(rel, &path->path);
+	}
 }
 
 /*
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index d01630f..aa81ab4 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -126,6 +126,7 @@ bool		enable_nestloop = true;
 bool		enable_material = true;
 bool		enable_mergejoin = true;
 bool		enable_hashjoin = true;
+bool		enable_gathermerge = true;
 
 typedef struct
 {
@@ -373,6 +374,73 @@ cost_gather(GatherPath *path, PlannerInfo *root,
 }
 
 /*
+ * cost_gather_merge
+ *	  Determines and returns the cost of gather merge path.
+ *
+ * GatherMerge merges several pre-sorted input streams, using a heap that at
+ * any given instant holds the next tuple from each stream. If there are N
+ * streams, we need about N*log2(N) tuple comparisons to construct the heap at
+ * startup, and then for each output tuple, about log2(N) comparisons to
+ * replace the top heap entry with the next tuple from the same stream.
+ */
+void
+cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+				  RelOptInfo *rel, ParamPathInfo *param_info,
+				  Cost input_startup_cost, Cost input_total_cost,
+				  double *rows)
+{
+	Cost		startup_cost = 0;
+	Cost		run_cost = 0;
+	Cost		comparison_cost;
+	double		N;
+	double		logN;
+
+	/* Mark the path with the correct row estimate */
+	if (rows)
+		path->path.rows = *rows;
+	else if (param_info)
+		path->path.rows = param_info->ppi_rows;
+	else
+		path->path.rows = rel->rows;
+
+	if (!enable_gathermerge)
+		startup_cost += disable_cost;
+
+	/*
+	 * Add one to the number of workers to account for the leader.  This might
+	 * be overgenerous since the leader will do less work than other workers
+	 * in typical cases, but we'll go with it for now.
+	 */
+	Assert(path->num_workers > 0);
+	N = (double) path->num_workers + 1;
+	logN = LOG2(N);
+
+	/* Assumed cost per tuple comparison */
+	comparison_cost = 2.0 * cpu_operator_cost;
+
+	/* Heap creation cost */
+	startup_cost += comparison_cost * N * logN;
+
+	/* Per-tuple heap maintenance cost */
+	run_cost += path->path.rows * comparison_cost * logN;
+
+	/* small cost for heap management, like cost_merge_append */
+	run_cost += cpu_operator_cost * path->path.rows;
+
+	/*
+	 * Parallel setup and communication cost.  Since Gather Merge, unlike
+	 * Gather, requires us to block until a tuple is available from every
+	 * worker, we bump the IPC cost up a little bit as compared with Gather.
+	 * For lack of a better idea, charge an extra 5%.
+	 */
+	startup_cost += parallel_setup_cost;
+	run_cost += parallel_tuple_cost * path->path.rows * 1.05;
+
+	path->path.startup_cost = startup_cost + input_startup_cost;
+	path->path.total_cost = (startup_cost + run_cost + input_total_cost);
+}
+
+/*
  * cost_index
  *	  Determines and returns the cost of scanning a relation using an index.
  *
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 997bdcf..e08a6c3 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -272,6 +272,8 @@ static ModifyTable *make_modifytable(PlannerInfo *root,
 				 List *resultRelations, List *subplans,
 				 List *withCheckOptionLists, List *returningLists,
 				 List *rowMarks, OnConflictExpr *onconflict, int epqParam);
+static GatherMerge *create_gather_merge_plan(PlannerInfo *root,
+						 GatherMergePath *best_path);
 
 
 /*
@@ -469,6 +471,10 @@ create_plan_recurse(PlannerInfo *root, Path *best_path, int flags)
 											  (LimitPath *) best_path,
 											  flags);
 			break;
+		case T_GatherMerge:
+			plan = (Plan *) create_gather_merge_plan(root,
+											  (GatherMergePath *) best_path);
+			break;
 		default:
 			elog(ERROR, "unrecognized node type: %d",
 				 (int) best_path->pathtype);
@@ -1439,6 +1445,86 @@ create_gather_plan(PlannerInfo *root, GatherPath *best_path)
 }
 
 /*
+ * create_gather_merge_plan
+ *
+ *	  Create a Gather Merge plan for 'best_path' and (recursively)
+ *	  plans for its subpaths.
+ */
+static GatherMerge *
+create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
+{
+	GatherMerge *gm_plan;
+	Plan	   *subplan;
+	List	   *pathkeys = best_path->path.pathkeys;
+	int			numsortkeys;
+	AttrNumber *sortColIdx;
+	Oid		   *sortOperators;
+	Oid		   *collations;
+	bool	   *nullsFirst;
+
+	/* As with Gather, it's best to project away columns in the workers. */
+	subplan = create_plan_recurse(root, best_path->subpath, CP_EXACT_TLIST);
+
+	/* See create_merge_append_plan for why there's no make_xxx function */
+	gm_plan = makeNode(GatherMerge);
+	gm_plan->plan.targetlist = subplan->targetlist;
+	gm_plan->num_workers = best_path->num_workers;
+	copy_generic_path_info(&gm_plan->plan, &best_path->path);
+
+	/* Gather Merge is pointless with no pathkeys; use Gather instead. */
+	Assert(pathkeys != NIL);
+
+	/* Compute sort column info, and adjust GatherMerge tlist as needed */
+	(void) prepare_sort_from_pathkeys(&gm_plan->plan, pathkeys,
+									  best_path->path.parent->relids,
+									  NULL,
+									  true,
+									  &gm_plan->numCols,
+									  &gm_plan->sortColIdx,
+									  &gm_plan->sortOperators,
+									  &gm_plan->collations,
+									  &gm_plan->nullsFirst);
+
+
+	/* Compute sort column info, and adjust subplan's tlist as needed */
+	subplan = prepare_sort_from_pathkeys(subplan, pathkeys,
+										 best_path->subpath->parent->relids,
+										 gm_plan->sortColIdx,
+										 false,
+										 &numsortkeys,
+										 &sortColIdx,
+										 &sortOperators,
+										 &collations,
+										 &nullsFirst);
+
+	/* As for MergeAppend, check that we got the same sort key information. */
+	Assert(numsortkeys == gm_plan->numCols);
+	if (memcmp(sortColIdx, gm_plan->sortColIdx,
+			   numsortkeys * sizeof(AttrNumber)) != 0)
+		elog(ERROR, "GatherMerge child's targetlist doesn't match GatherMerge");
+	Assert(memcmp(sortOperators, gm_plan->sortOperators,
+				  numsortkeys * sizeof(Oid)) == 0);
+	Assert(memcmp(collations, gm_plan->collations,
+				  numsortkeys * sizeof(Oid)) == 0);
+	Assert(memcmp(nullsFirst, gm_plan->nullsFirst,
+				  numsortkeys * sizeof(bool)) == 0);
+
+	/* Now, insert a Sort node if subplan isn't sufficiently ordered */
+	if (!pathkeys_contained_in(pathkeys, best_path->subpath->pathkeys))
+		subplan = (Plan *) make_sort(subplan, numsortkeys,
+									 sortColIdx, sortOperators,
+									 collations, nullsFirst);
+
+	/* Now insert the subplan under GatherMerge. */
+	gm_plan->plan.lefttree = subplan;
+
+	/* use parallel mode for parallel plans. */
+	root->glob->parallelModeNeeded = true;
+
+	return gm_plan;
+}
+
+/*
  * create_projection_plan
  *
  *	  Create a plan tree to do a projection step and (recursively) plans
@@ -2277,7 +2363,6 @@ create_limit_plan(PlannerInfo *root, LimitPath *best_path, int flags)
 	return plan;
 }
 
-
 /*****************************************************************************
  *
  *	BASE-RELATION SCAN METHODS
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 3d33d46..c1c9046 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -3652,8 +3652,7 @@ create_grouping_paths(PlannerInfo *root,
 
 		/*
 		 * Now generate a complete GroupAgg Path atop of the cheapest partial
-		 * path. We need only bother with the cheapest path here, as the
-		 * output of Gather is never sorted.
+		 * path.  We can do this using either Gather or Gather Merge.
 		 */
 		if (grouped_rel->partial_pathlist)
 		{
@@ -3700,6 +3699,70 @@ create_grouping_paths(PlannerInfo *root,
 										   parse->groupClause,
 										   (List *) parse->havingQual,
 										   dNumGroups));
+
+			/*
+			 * The point of using Gather Merge rather than Gather is that it
+			 * can preserve the ordering of the input path, so there's no
+			 * reason to try it unless (1) it's possible to produce more than
+			 * one output row and (2) we want the output path to be ordered.
+			 */
+			if (parse->groupClause != NIL && root->group_pathkeys != NIL)
+			{
+				foreach(lc, grouped_rel->partial_pathlist)
+				{
+					Path	   *subpath = (Path *) lfirst(lc);
+					Path	   *gmpath;
+					double		total_groups;
+
+					/*
+					 * It's useful to consider paths that are already properly
+					 * ordered for Gather Merge, because those don't need a
+					 * sort.  It's also useful to consider the cheapest path,
+					 * because sorting it in parallel and then doing Gather
+					 * Merge may be better than doing an unordered Gather
+					 * followed by a sort.  But there's no point in
+					 * considering non-cheapest paths that aren't already
+					 * sorted correctly.
+					 */
+					if (path != subpath &&
+						!pathkeys_contained_in(root->group_pathkeys,
+											   subpath->pathkeys))
+						continue;
+
+					total_groups = subpath->rows * subpath->parallel_workers;
+
+					gmpath = (Path *)
+						create_gather_merge_path(root,
+												 grouped_rel,
+												 subpath,
+												 NULL,
+												 root->group_pathkeys,
+												 NULL,
+												 &total_groups);
+
+					if (parse->hasAggs)
+						add_path(grouped_rel, (Path *)
+								 create_agg_path(root,
+												 grouped_rel,
+												 gmpath,
+												 target,
+								 parse->groupClause ? AGG_SORTED : AGG_PLAIN,
+												 AGGSPLIT_FINAL_DESERIAL,
+												 parse->groupClause,
+												 (List *) parse->havingQual,
+												 &agg_final_costs,
+												 dNumGroups));
+					else
+						add_path(grouped_rel, (Path *)
+								 create_group_path(root,
+												   grouped_rel,
+												   gmpath,
+												   target,
+												   parse->groupClause,
+												   (List *) parse->havingQual,
+												   dNumGroups));
+				}
+			}
 		}
 	}
 
@@ -3797,6 +3860,16 @@ create_grouping_paths(PlannerInfo *root,
 	/* Now choose the best path(s) */
 	set_cheapest(grouped_rel);
 
+	/*
+	 * We've been using the partial pathlist for the grouped relation to hold
+	 * partially aggregated paths, but that's actually a little bit bogus
+	 * because it's unsafe for later planning stages -- like ordered_rel ---
+	 * to get the idea that they can use these partial paths as if they didn't
+	 * need a FinalizeAggregate step.  Zap the partial pathlist at this stage
+	 * so we don't get confused.
+	 */
+	grouped_rel->partial_pathlist = NIL;
+
 	return grouped_rel;
 }
 
@@ -4266,6 +4339,56 @@ create_ordered_paths(PlannerInfo *root,
 	}
 
 	/*
+	 * generate_gather_paths() will have already generated a simple Gather
+	 * path for the best parallel path, if any, and the loop above will have
+	 * considered sorting it.  Similarly, generate_gather_paths() will also
+	 * have generated order-preserving Gather Merge plans which can be used
+	 * without sorting if they happen to match the sort_pathkeys, and the loop
+	 * above will have handled those as well.  However, there's one more
+	 * possibility: it may make sense to sort the cheapest partial path
+	 * according to the required output order and then use Gather Merge.
+	 */
+	if (ordered_rel->consider_parallel && root->sort_pathkeys != NIL &&
+		input_rel->partial_pathlist != NIL)
+	{
+		Path	   *cheapest_partial_path;
+
+		cheapest_partial_path = linitial(input_rel->partial_pathlist);
+
+		/*
+		 * If cheapest partial path doesn't need a sort, this is redundant
+		 * with what's already been tried.
+		 */
+		if (!pathkeys_contained_in(root->sort_pathkeys,
+								   cheapest_partial_path->pathkeys))
+		{
+			Path	   *path;
+			double		total_groups;
+
+			path = (Path *) create_sort_path(root,
+											 ordered_rel,
+											 cheapest_partial_path,
+											 root->sort_pathkeys,
+											 limit_tuples);
+
+			total_groups = cheapest_partial_path->rows *
+				cheapest_partial_path->parallel_workers;
+			path = (Path *)
+				create_gather_merge_path(root, ordered_rel,
+										 path,
+										 target, root->sort_pathkeys, NULL,
+										 &total_groups);
+
+			/* Add projection step if needed */
+			if (path->pathtarget != target)
+				path = apply_projection_to_path(root, ordered_rel,
+												path, target);
+
+			add_path(ordered_rel, path);
+		}
+	}
+
+	/*
 	 * If there is an FDW that's responsible for all baserels of the query,
 	 * let it consider adding ForeignPaths.
 	 */
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index be267b9..cc1c66e 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -604,6 +604,7 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 			break;
 
 		case T_Gather:
+		case T_GatherMerge:
 			set_upper_references(root, plan, rtoffset);
 			break;
 
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 7954c44..c82d654 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2695,6 +2695,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		case T_Sort:
 		case T_Unique:
 		case T_Gather:
+		case T_GatherMerge:
 		case T_SetOp:
 		case T_Group:
 			/* no node-type-specific fields need fixing */
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index 3248296..398e5dd 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1627,6 +1627,66 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
 }
 
 /*
+ * create_gather_merge_path
+ *
+ *	  Creates a path corresponding to a gather merge scan, returning
+ *	  the pathnode.
+ */
+GatherMergePath *
+create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
+						 PathTarget *target, List *pathkeys,
+						 Relids required_outer, double *rows)
+{
+	GatherMergePath *pathnode = makeNode(GatherMergePath);
+	Cost			 input_startup_cost = 0;
+	Cost			 input_total_cost = 0;
+
+	Assert(subpath->parallel_safe);
+	Assert(pathkeys);
+
+	pathnode->path.pathtype = T_GatherMerge;
+	pathnode->path.parent = rel;
+	pathnode->path.param_info = get_baserel_parampathinfo(root, rel,
+														  required_outer);
+	pathnode->path.parallel_aware = false;
+
+	pathnode->subpath = subpath;
+	pathnode->num_workers = subpath->parallel_workers;
+	pathnode->path.pathkeys = pathkeys;
+	pathnode->path.pathtarget = target ? target : rel->reltarget;
+	pathnode->path.rows += subpath->rows;
+
+	if (pathkeys_contained_in(pathkeys, subpath->pathkeys))
+	{
+		/* Subpath is adequately ordered, we won't need to sort it */
+		input_startup_cost += subpath->startup_cost;
+		input_total_cost += subpath->total_cost;
+	}
+	else
+	{
+		/* We'll need to insert a Sort node, so include cost for that */
+		Path		sort_path;		/* dummy for result of cost_sort */
+
+		cost_sort(&sort_path,
+				  root,
+				  pathkeys,
+				  subpath->total_cost,
+				  subpath->rows,
+				  subpath->pathtarget->width,
+				  0.0,
+				  work_mem,
+				  -1);
+		input_startup_cost += sort_path.startup_cost;
+		input_total_cost += sort_path.total_cost;
+	}
+
+	cost_gather_merge(pathnode, root, rel, pathnode->path.param_info,
+					  input_startup_cost, input_total_cost, rows);
+
+	return pathnode;
+}
+
+/*
  * translate_sub_tlist - get subquery column numbers represented by tlist
  *
  * The given targetlist usually contains only Vars referencing the given relid.
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 5d8fb2e..92f4463 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -901,6 +901,15 @@ static struct config_bool ConfigureNamesBool[] =
 		true,
 		NULL, NULL, NULL
 	},
+	{
+		{"enable_gathermerge", PGC_USERSET, QUERY_TUNING_METHOD,
+			gettext_noop("Enables the planner's use of gather merge plans."),
+			NULL
+		},
+		&enable_gathermerge,
+		true,
+		NULL, NULL, NULL
+	},
 
 	{
 		{"geqo", PGC_USERSET, QUERY_TUNING_GEQO,
diff --git a/src/include/executor/nodeGatherMerge.h b/src/include/executor/nodeGatherMerge.h
new file mode 100644
index 0000000..3c8b42b
--- /dev/null
+++ b/src/include/executor/nodeGatherMerge.h
@@ -0,0 +1,27 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.h
+ *		prototypes for nodeGatherMerge.c
+ *
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/executor/nodeGatherMerge.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef NODEGATHERMERGE_H
+#define NODEGATHERMERGE_H
+
+#include "nodes/execnodes.h"
+
+extern GatherMergeState *ExecInitGatherMerge(GatherMerge * node,
+					EState *estate,
+					int eflags);
+extern TupleTableSlot *ExecGatherMerge(GatherMergeState * node);
+extern void ExecEndGatherMerge(GatherMergeState * node);
+extern void ExecReScanGatherMerge(GatherMergeState * node);
+extern void ExecShutdownGatherMerge(GatherMergeState * node);
+
+#endif   /* NODEGATHERMERGE_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 9f41bab..e744d3d 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -2010,6 +2010,35 @@ typedef struct GatherState
 } GatherState;
 
 /* ----------------
+ * GatherMergeState information
+ *
+ *		Gather merge nodes launch 1 or more parallel workers, run a
+ *		subplan which produces sorted output in each worker, and then
+ *		merge the results into a single sorted stream.
+ * ----------------
+ */
+struct GMReaderTuple;
+
+typedef struct GatherMergeState
+{
+	PlanState	ps;				/* its first field is NodeTag */
+	bool		initialized;
+	struct ParallelExecutorInfo *pei;
+	int			nreaders;
+	int			nworkers_launched;
+	struct TupleQueueReader **reader;
+	TupleDesc	tupDesc;
+	TupleTableSlot **gm_slots;
+	struct binaryheap *gm_heap; /* binary heap of slot indices */
+	bool		gm_initialized; /* gather merge initialized? */
+	bool		need_to_scan_locally;
+	int			gm_nkeys;
+	SortSupport gm_sortkeys;	/* array of length gm_nkeys */
+	struct GMReaderTupleBuffer *gm_tuple_buffers;		/* tuple buffer per
+														 * reader */
+} GatherMergeState;
+
+/* ----------------
  *	 HashState information
  * ----------------
  */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 95dd8ba..3530e41 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -76,6 +76,7 @@ typedef enum NodeTag
 	T_WindowAgg,
 	T_Unique,
 	T_Gather,
+	T_GatherMerge,
 	T_Hash,
 	T_SetOp,
 	T_LockRows,
@@ -125,6 +126,7 @@ typedef enum NodeTag
 	T_WindowAggState,
 	T_UniqueState,
 	T_GatherState,
+	T_GatherMergeState,
 	T_HashState,
 	T_SetOpState,
 	T_LockRowsState,
@@ -246,6 +248,7 @@ typedef enum NodeTag
 	T_MaterialPath,
 	T_UniquePath,
 	T_GatherPath,
+	T_GatherMergePath,
 	T_ProjectionPath,
 	T_ProjectSetPath,
 	T_SortPath,
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index f72f7a8..8dbce7a 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -785,6 +785,22 @@ typedef struct Gather
 	bool		invisible;		/* suppress EXPLAIN display (for testing)? */
 } Gather;
 
+/* ------------
+ *		gather merge node
+ * ------------
+ */
+typedef struct GatherMerge
+{
+	Plan		plan;
+	int			num_workers;
+	/* remaining fields are just like the sort-key info in struct Sort */
+	int			numCols;		/* number of sort-key columns */
+	AttrNumber *sortColIdx;		/* their indexes in the target list */
+	Oid		   *sortOperators;	/* OIDs of operators to sort them by */
+	Oid		   *collations;		/* OIDs of collations */
+	bool	   *nullsFirst;		/* NULLS FIRST/LAST directions */
+} GatherMerge;
+
 /* ----------------
  *		hash build node
  *
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index f7ac6f6..05d6f07 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -1204,6 +1204,19 @@ typedef struct GatherPath
 } GatherPath;
 
 /*
+ * GatherMergePath runs several copies of a plan in parallel and
+ * collects the results.  For Gather Merge, the parallel leader also always executes the
+ * plan.
+ */
+typedef struct GatherMergePath
+{
+	Path		path;
+	Path	   *subpath;		/* path for each worker */
+	int			num_workers;	/* number of workers sought to help */
+} GatherMergePath;
+
+
+/*
  * All join-type paths share these fields.
  */
 
diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h
index 72200fa..fdb677f 100644
--- a/src/include/optimizer/cost.h
+++ b/src/include/optimizer/cost.h
@@ -66,6 +66,7 @@ extern bool enable_nestloop;
 extern bool enable_material;
 extern bool enable_mergejoin;
 extern bool enable_hashjoin;
+extern bool enable_gathermerge;
 extern int	constraint_exclusion;
 
 extern double clamp_row_est(double nrows);
@@ -200,5 +201,9 @@ extern Selectivity clause_selectivity(PlannerInfo *root,
 				   int varRelid,
 				   JoinType jointype,
 				   SpecialJoinInfo *sjinfo);
+extern void cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+							  RelOptInfo *rel, ParamPathInfo *param_info,
+							  Cost input_startup_cost, Cost input_total_cost,
+							  double *rows);
 
 #endif   /* COST_H */
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index 53cad24..1f68334 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -77,6 +77,13 @@ extern UniquePath *create_unique_path(PlannerInfo *root, RelOptInfo *rel,
 extern GatherPath *create_gather_path(PlannerInfo *root,
 				   RelOptInfo *rel, Path *subpath, PathTarget *target,
 				   Relids required_outer, double *rows);
+extern GatherMergePath *create_gather_merge_path(PlannerInfo *root,
+												 RelOptInfo *rel,
+												 Path *subpath,
+												 PathTarget *target,
+												 List *pathkeys,
+												 Relids required_outer,
+												 double *rows);
 extern SubqueryScanPath *create_subqueryscan_path(PlannerInfo *root,
 						 RelOptInfo *rel, Path *subpath,
 						 List *pathkeys, Relids required_outer);
diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out
index 48fb80e..a0b7caa 100644
--- a/src/test/regress/expected/select_parallel.out
+++ b/src/test/regress/expected/select_parallel.out
@@ -148,6 +148,33 @@ select  count((unique1)) from tenk1 where hundred > 1;
 
 reset enable_seqscan;
 reset enable_bitmapscan;
+--test gather merge
+set enable_hashagg to off;
+explain (costs off)
+	select  string4, count((unique2)) from tenk1 group by string4 order by string4;
+                     QUERY PLAN                     
+----------------------------------------------------
+ Finalize GroupAggregate
+   Group Key: string4
+   ->  Gather Merge
+         Workers Planned: 4
+         ->  Partial GroupAggregate
+               Group Key: string4
+               ->  Sort
+                     Sort Key: string4
+                     ->  Parallel Seq Scan on tenk1
+(9 rows)
+
+select  string4, count((unique2)) from tenk1 group by string4 order by string4;
+ string4 | count 
+---------+-------
+ AAAAxx  |  2500
+ HHHHxx  |  2500
+ OOOOxx  |  2500
+ VVVVxx  |  2500
+(4 rows)
+
+reset enable_hashagg;
 set force_parallel_mode=1;
 explain (costs off)
   select stringu1::int2 from tenk1 where unique1 = 1;
diff --git a/src/test/regress/expected/sysviews.out b/src/test/regress/expected/sysviews.out
index d48abd7..568b783 100644
--- a/src/test/regress/expected/sysviews.out
+++ b/src/test/regress/expected/sysviews.out
@@ -73,6 +73,7 @@ select name, setting from pg_settings where name like 'enable%';
          name         | setting 
 ----------------------+---------
  enable_bitmapscan    | on
+ enable_gathermerge   | on
  enable_hashagg       | on
  enable_hashjoin      | on
  enable_indexonlyscan | on
@@ -83,7 +84,7 @@ select name, setting from pg_settings where name like 'enable%';
  enable_seqscan       | on
  enable_sort          | on
  enable_tidscan       | on
-(11 rows)
+(12 rows)
 
 -- Test that the pg_timezone_names and pg_timezone_abbrevs views are
 -- more-or-less working.  We can't test their contents in any great detail
diff --git a/src/test/regress/sql/select_parallel.sql b/src/test/regress/sql/select_parallel.sql
index f5bc4d1..4657134 100644
--- a/src/test/regress/sql/select_parallel.sql
+++ b/src/test/regress/sql/select_parallel.sql
@@ -59,6 +59,16 @@ select  count((unique1)) from tenk1 where hundred > 1;
 reset enable_seqscan;
 reset enable_bitmapscan;
 
+--test gather merge
+set enable_hashagg to off;
+
+explain (costs off)
+	select  string4, count((unique2)) from tenk1 group by string4 order by string4;
+
+select  string4, count((unique2)) from tenk1 group by string4 order by string4;
+
+reset enable_hashagg;
+
 set force_parallel_mode=1;
 
 explain (costs off)
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 9f876ae..ac2302c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -780,6 +780,9 @@ GV
 Gather
 GatherPath
 GatherState
+GatherMerge
+GatherMergePath
+GatherMergeState
 Gene
 GenericCosts
 GenericExprState
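
To give a rough sense of what the new cost_gather_merge() model above charges
with default settings (cpu_operator_cost = 0.0025, parallel_tuple_cost = 0.1,
parallel_setup_cost = 1000): with 4 workers we have N = 5 and log2(N) of about
2.32, so building the heap adds roughly 2 * 0.0025 * 5 * 2.32 = 0.06 to the
startup cost on top of parallel_setup_cost, and each output row is charged
about 0.012 for heap maintenance, 0.0025 for heap management, and 0.105 for
the 5%-bumped parallel tuple (IPC) cost.
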
#41Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Amit Kapila (#39)
Re: Gather Merge

On Fri, Feb 17, 2017 at 4:47 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Fri, Feb 17, 2017 at 3:59 PM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:

On Thu, Feb 2, 2017 at 2:32 AM, Rushabh Lathia <rushabh.lathia@gmail.com>

wrote:

Please find attached latest patch.

The latest patch still applies (with some fuzz), builds and the
regression tests pass.

I see that Robert made a number of changes and posted a v6 along with
some numbers which he described as lacklustre, but then fixed a row
estimate problem which was discouraging parallel joins (commit
0c2070ce). Rushabh posted a v7 and test results which look good.

Are you suggesting that commit 0c2070ce has helped to improve
performance? If so, I don't think that has been proved. I guess the
numbers are different either due to a different machine or some other
settings like scale factor or work_mem.

I don't really think 0c2070ce is the exact reason. I ran the TPCH runs
with the same settings as what Robert was running. I haven't noticed
any regression with the runs. For the last runs I also uploaded the
TPCH run outputs for the individual queries for reference.

As
far as I can see there are no outstanding issues or unhandled review
feedback. I've had a fresh read through of the latest version and
have no further comments myself.

I've set this to ready-for-committer now. If I've misunderstood and
there are still unresolved issues from that earlier email exchange or
someone else wants to post a review or objection, then of course
please feel free to set it back.

BTW There is no regression test supplied. I see that commit 5262f7a4
adding parallel index scans put simple explain output in
"select_parallel" to demonstrate the new kind of plan being created;

It has added both an explain statement test and a test to exercise the
parallel index scan code.

Thanks for the reference. I added similar tests for GM in the latest
uploaded patch.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

--
Rushabh Lathia

#42Amit Kapila
amit.kapila16@gmail.com
In reply to: Rushabh Lathia (#41)
Re: Gather Merge

On Fri, Feb 17, 2017 at 6:27 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

On Fri, Feb 17, 2017 at 4:47 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

Are you suggesting that commit 0c2070ce has helped to improve
performance? If so, I don't think that has been proved. I guess the
numbers are different either due to a different machine or some other
settings like scale factor or work_mem.

I don't really think 0c2070ce is the exact reason. I ran the TPCH runs
with the same settings as what Robert was running. I haven't noticed
any regression with the runs. For the last runs I also uploaded the
TPCH run outputs for the individual queries for reference.

Okay, then I am not sure why you and Robert are seeing different
results; probably it is because you are using a different machine.
Another thing we might need to think about for this patch is support
for mark/restore position. As of now, the path types which create data
in sorted order, like Sort and IndexScan, have support for mark/restore
position. This is required for correctness when such a node appears on
the inner side of a Merge Join. Even though this patch doesn't support
mark/restore, it will not produce wrong results, because the planner
inserts a Materialize node to compensate for it; refer to the code
below.

final_cost_mergejoin()
{
..
/*
* Even if materializing doesn't look cheaper, we *must* do it if the
* inner path is to be used directly (without sorting) and it doesn't
* support mark/restore.
*
* Since the inner side must be ordered, and only Sorts and IndexScans can
* create order to begin with, and they both support mark/restore, you
* might think there's no problem --- but you'd be wrong. Nestloop and
* merge joins can *preserve* the order of their inputs, so they can be
* selected as the input of a mergejoin, and they don't support
* mark/restore at present.
*
* We don't test the value of enable_material here, because
* materialization is required for correctness in this case, and turning
* it off does not entitle us to deliver an invalid plan.
*/
else if (innersortkeys == NIL &&
!ExecSupportsMarkRestore(inner_path))
path->materialize_inner = true;
..
}

I think there is value in supporting mark/restore position for any
node which produces sorted results; however, if you don't want to
support it, then I think we should update the above comment in the code
to note this fact. Also, you might want to check other places to see if
any similar comment updates are required in case you don't want to
support mark/restore position for GatherMerge.
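
For reference, the executor-side capability check that drives that
materialize_inner decision is ExecSupportsMarkRestore() in execAmi.c.
A simplified sketch (omitting a couple of special cases) looks roughly
like this; GatherMerge would fall through to the default unless someone
adds support:

bool
ExecSupportsMarkRestore(Path *pathnode)
{
	switch (pathnode->pathtype)
	{
			/* these executor node types can mark/restore their output */
		case T_IndexScan:
		case T_IndexOnlyScan:
		case T_Material:
		case T_Sort:
			return true;

		default:
			/* e.g. Gather or GatherMerge: planner must add a Materialize */
			return false;
	}
}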

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#43Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#42)
Re: Gather Merge

On Sat, Feb 18, 2017 at 6:43 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:

I think there is value in supporting mark/restore position for any
node which produces sorted results; however, if you don't want to
support it, then I think we should update the above comment in the code
to note this fact. Also, you might want to check other places to see if
any similar comment updates are required in case you don't want to
support mark/restore position for GatherMerge.

I don't think it makes sense to put mark/restore support into Gather
Merge. Maybe somebody else will come up with a test that shows
differently, but ISTM that with something like Sort it makes a ton of
sense to support mark/restore because the Sort node itself can do it
much more cheaply than would be possible with a separate Materialize
node. If you added a separate Materialize node, the Sort node would
be trying to throw away the exact same data that the Materialize node
was trying to accumulate, which is silly. Here with Gather Merge
there is no such thing happening; mark/restore support would require
totally new code - probably we would end up shoving the same code that
already exists in Materialize into Gather Merge as well. That doesn't
seem like a good idea off-hand.

A comment update is probably a good idea, though.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#44Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#43)
Re: Gather Merge

On Sun, Feb 19, 2017 at 2:22 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Sat, Feb 18, 2017 at 6:43 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:

I think there is value in supporting mark/restore position for any
node which produces sorted results; however, if you don't want to
support it, then I think we should update the above comment in the code
to note this fact. Also, you might want to check other places to see if
any similar comment updates are required in case you don't want to
support mark/restore position for GatherMerge.

I don't think it makes sense to put mark/restore support into Gather
Merge. Maybe somebody else will come up with a test that shows
differently, but ISTM that with something like Sort it makes a ton of
sense to support mark/restore because the Sort node itself can do it
much more cheaply than would be possible with a separate Materialize
node. If you added a separate Materialize node, the Sort node would
be trying to throw away the exact same data that the Materialize node
was trying to accumulate, which is silly.

I am not sure, but there might be some cases where adding a Materialize
node on top of a Sort node could make sense, since we cost that option
as well and add it if it turns out to be cheap.

Here with Gather Merge
there is no such thing happening; mark/restore support would require
totally new code - probably we would end up shoving the same code that
already exists in Materialize into Gather Merge as well.

I have tried to evaluate this based on plans reported by Rushabh and
didn't find any case where GatherMerge can be beneficial by supporting
mark/restore in the plans where it is being used (it is not being used
on the right side of merge join). Now, it is quite possible that it
might be beneficial at higher scale factors, or maybe the planner has
ignored such a plan. However, as we are not clear what kind of
benefits we can get via mark/restore support for GatherMerge, it
doesn't make much sense to take the trouble of implementing it.

A comment update is probably a good idea, though.

Agreed.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#45Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Amit Kapila (#44)
1 attachment(s)
Re: Gather Merge

Thanks Amit for raising this point. I was not at all aware of mark/restore.
I tried to come up with such a case, but haven't found one.

For now, here is the patch with the comment update.

Thanks,

On Sun, Feb 19, 2017 at 7:30 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Sun, Feb 19, 2017 at 2:22 PM, Robert Haas <robertmhaas@gmail.com>
wrote:

On Sat, Feb 18, 2017 at 6:43 PM, Amit Kapila <amit.kapila16@gmail.com>

wrote:

I think there is value in supporting mark/restore position for any
node which produces sorted results; however, if you don't want to
support it, then I think we should update the above comment in the code
to note this fact. Also, you might want to check other places to see if
any similar comment updates are required in case you don't want to
support mark/restore position for GatherMerge.

I don't think it makes sense to put mark/restore support into Gather
Merge. Maybe somebody else will come up with a test that shows
differently, but ISTM that with something like Sort it makes a ton of
sense to support mark/restore because the Sort node itself can do it
much more cheaply than would be possible with a separate Materialize
node. If you added a separate Materialize node, the Sort node would
be trying to throw away the exact same data that the Materialize node
was trying to accumulate, which is silly.

I am not sure, but there might be some cases where adding a Materialize
node on top of a Sort node could make sense, since we cost that option
as well and add it if it turns out to be cheap.

Here with Gather Merge
there is no such thing happening; mark/restore support would require
totally new code - probably we would end up shoving the same code that
already exists in Materialize into Gather Merge as well.

I have tried to evaluate this based on plans reported by Rushabh and
didn't find any case where GatherMerge can be beneficial by supporting
mark/restore in the plans where it is being used (it is not being used
on the right side of merge join). Now, it is quite possible that it
might be beneficial at higher scale factors, or maybe the planner has
ignored such a plan. However, as we are not clear what kind of
benefits we can get via mark/restore support for GatherMerge, it
doesn't make much sense to take the trouble of implementing it.

A comment update is probably a good idea, though.

Agreed.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

--
Rushabh Lathia

Attachments:

gather-merge-v8-update-comment.patch
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 95afc2c..e7dbbff 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -3496,6 +3496,20 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-enable-gathermerge" xreflabel="enable_gathermerge">
+      <term><varname>enable_gathermerge</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>enable_gathermerge</> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        Enables or disables the query planner's use of gather
+        merge plan types. The default is <literal>on</>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-enable-hashagg" xreflabel="enable_hashagg">
       <term><varname>enable_hashagg</varname> (<type>boolean</type>)
       <indexterm>
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index c9e0a3e..0bcee3f 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -905,6 +905,9 @@ ExplainNode(PlanState *planstate, List *ancestors,
 		case T_Gather:
 			pname = sname = "Gather";
 			break;
+		case T_GatherMerge:
+			pname = sname = "Gather Merge";
+			break;
 		case T_IndexScan:
 			pname = sname = "Index Scan";
 			break;
@@ -1394,6 +1397,26 @@ ExplainNode(PlanState *planstate, List *ancestors,
 					ExplainPropertyBool("Single Copy", gather->single_copy, es);
 			}
 			break;
+		case T_GatherMerge:
+			{
+				GatherMerge *gm = (GatherMerge *) plan;
+
+				show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+				if (plan->qual)
+					show_instrumentation_count("Rows Removed by Filter", 1,
+											   planstate, es);
+				ExplainPropertyInteger("Workers Planned",
+									   gm->num_workers, es);
+				if (es->analyze)
+				{
+					int			nworkers;
+
+					nworkers = ((GatherMergeState *) planstate)->nworkers_launched;
+					ExplainPropertyInteger("Workers Launched",
+										   nworkers, es);
+				}
+			}
+			break;
 		case T_FunctionScan:
 			if (es->verbose)
 			{
diff --git a/src/backend/executor/Makefile b/src/backend/executor/Makefile
index 2a2b7eb..c95747e 100644
--- a/src/backend/executor/Makefile
+++ b/src/backend/executor/Makefile
@@ -20,7 +20,7 @@ OBJS = execAmi.o execCurrent.o execGrouping.o execIndexing.o execJunk.o \
        nodeBitmapHeapscan.o nodeBitmapIndexscan.o \
        nodeCustom.o nodeFunctionscan.o nodeGather.o \
        nodeHash.o nodeHashjoin.o nodeIndexscan.o nodeIndexonlyscan.o \
-       nodeLimit.o nodeLockRows.o \
+       nodeLimit.o nodeLockRows.o nodeGatherMerge.o \
        nodeMaterial.o nodeMergeAppend.o nodeMergejoin.o nodeModifyTable.o \
        nodeNestloop.o nodeProjectSet.o nodeRecursiveunion.o nodeResult.o \
        nodeSamplescan.o nodeSeqscan.o nodeSetOp.o nodeSort.o nodeUnique.o \
diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index 0dd95c6..f00496b 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -89,6 +89,7 @@
 #include "executor/nodeForeignscan.h"
 #include "executor/nodeFunctionscan.h"
 #include "executor/nodeGather.h"
+#include "executor/nodeGatherMerge.h"
 #include "executor/nodeGroup.h"
 #include "executor/nodeHash.h"
 #include "executor/nodeHashjoin.h"
@@ -320,6 +321,11 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
 												  estate, eflags);
 			break;
 
+		case T_GatherMerge:
+			result = (PlanState *) ExecInitGatherMerge((GatherMerge *) node,
+													   estate, eflags);
+			break;
+
 		case T_Hash:
 			result = (PlanState *) ExecInitHash((Hash *) node,
 												estate, eflags);
@@ -525,6 +531,10 @@ ExecProcNode(PlanState *node)
 			result = ExecGather((GatherState *) node);
 			break;
 
+		case T_GatherMergeState:
+			result = ExecGatherMerge((GatherMergeState *) node);
+			break;
+
 		case T_HashState:
 			result = ExecHash((HashState *) node);
 			break;
@@ -687,6 +697,10 @@ ExecEndNode(PlanState *node)
 			ExecEndGather((GatherState *) node);
 			break;
 
+		case T_GatherMergeState:
+			ExecEndGatherMerge((GatherMergeState *) node);
+			break;
+
 		case T_IndexScanState:
 			ExecEndIndexScan((IndexScanState *) node);
 			break;
@@ -820,6 +834,9 @@ ExecShutdownNode(PlanState *node)
 		case T_GatherState:
 			ExecShutdownGather((GatherState *) node);
 			break;
+		case T_GatherMergeState:
+			ExecShutdownGatherMerge((GatherMergeState *) node);
+			break;
 		default:
 			break;
 	}
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
new file mode 100644
index 0000000..84c1677
--- /dev/null
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -0,0 +1,687 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.c
+ *		Scan a plan in multiple workers, and do order-preserving merge.
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/backend/executor/nodeGatherMerge.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/relscan.h"
+#include "access/xact.h"
+#include "executor/execdebug.h"
+#include "executor/execParallel.h"
+#include "executor/nodeGatherMerge.h"
+#include "executor/nodeSubplan.h"
+#include "executor/tqueue.h"
+#include "lib/binaryheap.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+#include "utils/rel.h"
+
+/*
+ * Tuple array for each worker
+ */
+typedef struct GMReaderTupleBuffer
+{
+	HeapTuple  *tuple;
+	int			readCounter;
+	int			nTuples;
+	bool		done;
+}	GMReaderTupleBuffer;
+
+/*
+ * When we read tuples from workers, it's a good idea to read several at once
+ * for efficiency when possible: this minimizes context-switching overhead.
+ * But reading too many at a time wastes memory without improving performance.
+ */
+#define MAX_TUPLE_STORE 10
+
+static int32 heap_compare_slots(Datum a, Datum b, void *arg);
+static TupleTableSlot *gather_merge_getnext(GatherMergeState *gm_state);
+static HeapTuple gm_readnext_tuple(GatherMergeState *gm_state, int nreader,
+				  bool nowait, bool *done);
+static void gather_merge_init(GatherMergeState *gm_state);
+static void ExecShutdownGatherMergeWorkers(GatherMergeState *node);
+static bool gather_merge_readnext(GatherMergeState *gm_state, int reader,
+					  bool nowait);
+static void form_tuple_array(GatherMergeState *gm_state, int reader);
+
+/* ----------------------------------------------------------------
+ *		ExecInitGatherMerge
+ * ----------------------------------------------------------------
+ */
+GatherMergeState *
+ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags)
+{
+	GatherMergeState *gm_state;
+	Plan	   *outerNode;
+	bool		hasoid;
+	TupleDesc	tupDesc;
+
+	/* Gather merge node doesn't have innerPlan node. */
+	Assert(innerPlan(node) == NULL);
+
+	/*
+	 * create state structure
+	 */
+	gm_state = makeNode(GatherMergeState);
+	gm_state->ps.plan = (Plan *) node;
+	gm_state->ps.state = estate;
+
+	/*
+	 * Miscellaneous initialization
+	 *
+	 * create expression context for node
+	 */
+	ExecAssignExprContext(estate, &gm_state->ps);
+
+	/*
+	 * initialize child expressions
+	 */
+	gm_state->ps.targetlist = (List *)
+		ExecInitExpr((Expr *) node->plan.targetlist,
+					 (PlanState *) gm_state);
+	gm_state->ps.qual = (List *)
+		ExecInitExpr((Expr *) node->plan.qual,
+					 (PlanState *) gm_state);
+
+	/*
+	 * tuple table initialization
+	 */
+	ExecInitResultTupleSlot(estate, &gm_state->ps);
+
+	/*
+	 * now initialize outer plan
+	 */
+	outerNode = outerPlan(node);
+	outerPlanState(gm_state) = ExecInitNode(outerNode, estate, eflags);
+
+	/*
+	 * Initialize result tuple type and projection info.
+	 */
+	ExecAssignResultTypeFromTL(&gm_state->ps);
+	ExecAssignProjectionInfo(&gm_state->ps, NULL);
+
+	gm_state->gm_initialized = false;
+
+	/*
+	 * initialize sort-key information
+	 */
+	if (node->numCols)
+	{
+		int			i;
+
+		gm_state->gm_nkeys = node->numCols;
+		gm_state->gm_sortkeys =
+			palloc0(sizeof(SortSupportData) * node->numCols);
+
+		for (i = 0; i < node->numCols; i++)
+		{
+			SortSupport sortKey = gm_state->gm_sortkeys + i;
+
+			sortKey->ssup_cxt = CurrentMemoryContext;
+			sortKey->ssup_collation = node->collations[i];
+			sortKey->ssup_nulls_first = node->nullsFirst[i];
+			sortKey->ssup_attno = node->sortColIdx[i];
+
+			/*
+			 * We don't perform abbreviated key conversion here, for the same
+			 * reasons that it isn't used in MergeAppend
+			 */
+			sortKey->abbreviate = false;
+
+			PrepareSortSupportFromOrderingOp(node->sortOperators[i], sortKey);
+		}
+	}
+
+	/*
+	 * store the tuple descriptor into gather merge state, so we can use it
+	 * later while initializing the gather merge slots.
+	 */
+	if (!ExecContextForcesOids(&gm_state->ps, &hasoid))
+		hasoid = false;
+	tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid);
+	gm_state->tupDesc = tupDesc;
+
+	return gm_state;
+}
+
+/* ----------------------------------------------------------------
+ *		ExecGatherMerge(node)
+ *
+ *		Scans the relation via multiple workers and returns
+ *		the next qualifying tuple.
+ * ----------------------------------------------------------------
+ */
+TupleTableSlot *
+ExecGatherMerge(GatherMergeState *node)
+{
+	TupleTableSlot *slot;
+	ExprContext *econtext;
+	int			i;
+
+	/*
+	 * As with Gather, we don't launch workers until this node is actually
+	 * executed.
+	 */
+	if (!node->initialized)
+	{
+		EState	   *estate = node->ps.state;
+		GatherMerge *gm = (GatherMerge *) node->ps.plan;
+
+		/*
+		 * Sometimes we might have to run without parallelism; but if parallel
+		 * mode is active then we can try to fire up some workers.
+		 */
+		if (gm->num_workers > 0 && IsInParallelMode())
+		{
+			ParallelContext *pcxt;
+
+			/* Initialize data structures for workers. */
+			if (!node->pei)
+				node->pei = ExecInitParallelPlan(node->ps.lefttree,
+												 estate,
+												 gm->num_workers);
+
+			/* Try to launch workers. */
+			pcxt = node->pei->pcxt;
+			LaunchParallelWorkers(pcxt);
+			node->nworkers_launched = pcxt->nworkers_launched;
+
+			/* Set up tuple queue readers to read the results. */
+			if (pcxt->nworkers_launched > 0)
+			{
+				node->nreaders = 0;
+				node->reader = palloc(pcxt->nworkers_launched *
+									  sizeof(TupleQueueReader *));
+
+				Assert(gm->numCols);
+
+				for (i = 0; i < pcxt->nworkers_launched; ++i)
+				{
+					shm_mq_set_handle(node->pei->tqueue[i],
+									  pcxt->worker[i].bgwhandle);
+					node->reader[node->nreaders++] =
+						CreateTupleQueueReader(node->pei->tqueue[i],
+											   node->tupDesc);
+				}
+			}
+			else
+			{
+				/* No workers?	Then never mind. */
+				ExecShutdownGatherMergeWorkers(node);
+			}
+		}
+
+		/* always allow leader to participate */
+		node->need_to_scan_locally = true;
+		node->initialized = true;
+	}
+
+	/*
+	 * Reset per-tuple memory context to free any expression evaluation
+	 * storage allocated in the previous tuple cycle.
+	 */
+	econtext = node->ps.ps_ExprContext;
+	ResetExprContext(econtext);
+
+	/*
+	 * Get next tuple, either from one of our workers, or by running the
+	 * plan ourselves.
+	 */
+	slot = gather_merge_getnext(node);
+	if (TupIsNull(slot))
+		return NULL;
+
+	/*
+	 * form the result tuple using ExecProject(), and return it --- unless
+	 * the projection produces an empty set, in which case we must loop
+	 * back around for another tuple
+	 */
+	econtext->ecxt_outertuple = slot;
+	return ExecProject(node->ps.ps_ProjInfo);
+}
+
+/* ----------------------------------------------------------------
+ *		ExecEndGatherMerge
+ *
+ *		frees any storage allocated through C routines.
+ * ----------------------------------------------------------------
+ */
+void
+ExecEndGatherMerge(GatherMergeState *node)
+{
+	ExecShutdownGatherMerge(node);
+	ExecFreeExprContext(&node->ps);
+	ExecClearTuple(node->ps.ps_ResultTupleSlot);
+	ExecEndNode(outerPlanState(node));
+}
+
+/* ----------------------------------------------------------------
+ *		ExecShutdownGatherMerge
+ *
+ *		Destroy the setup for parallel workers including parallel context.
+ *		Collect all the stats after workers are stopped, else some work
+ *		done by workers won't be accounted.
+ * ----------------------------------------------------------------
+ */
+void
+ExecShutdownGatherMerge(GatherMergeState *node)
+{
+	ExecShutdownGatherMergeWorkers(node);
+
+	/* Now destroy the parallel context. */
+	if (node->pei != NULL)
+	{
+		ExecParallelCleanup(node->pei);
+		node->pei = NULL;
+	}
+}
+
+/* ----------------------------------------------------------------
+ *		ExecShutdownGatherMergeWorkers
+ *
+ *		Destroy the parallel workers.  Collect all the stats after
+ *		workers are stopped, else some work done by workers won't be
+ *		accounted.
+ * ----------------------------------------------------------------
+ */
+static void
+ExecShutdownGatherMergeWorkers(GatherMergeState *node)
+{
+	/* Shut down tuple queue readers before shutting down workers. */
+	if (node->reader != NULL)
+	{
+		int			i;
+
+		for (i = 0; i < node->nreaders; ++i)
+			if (node->reader[i])
+				DestroyTupleQueueReader(node->reader[i]);
+
+		pfree(node->reader);
+		node->reader = NULL;
+	}
+
+	/* Now shut down the workers. */
+	if (node->pei != NULL)
+		ExecParallelFinish(node->pei);
+}
+
+/* ----------------------------------------------------------------
+ *		ExecReScanGatherMerge
+ *
+ *		Re-initialize the workers and rescans a relation via them.
+ * ----------------------------------------------------------------
+ */
+void
+ExecReScanGatherMerge(GatherMergeState *node)
+{
+	/*
+	 * Re-initialize the parallel workers to perform rescan of relation. We
+	 * want to gracefully shutdown all the workers so that they should be able
+	 * to propagate any error or other information to master backend before
+	 * dying.  Parallel context will be reused for rescan.
+	 */
+	ExecShutdownGatherMergeWorkers(node);
+
+	node->initialized = false;
+
+	if (node->pei)
+		ExecParallelReinitialize(node->pei);
+
+	ExecReScan(node->ps.lefttree);
+}
+
+/*
+ * Initialize the Gather merge tuple read.
+ *
+ * Pull at least a single tuple from each worker + leader and set up the heap.
+ */
+static void
+gather_merge_init(GatherMergeState *gm_state)
+{
+	int			nreaders = gm_state->nreaders;
+	bool		initialize = true;
+	int			i;
+
+	/*
+	 * Allocate gm_slots for the number of workers + one more slot for the
+	 * leader.  The last slot is always for the leader.  The leader always
+	 * calls ExecProcNode() to read a tuple, which returns a TupleTableSlot
+	 * that is later assigned directly to that gm_slot, so just initialize
+	 * the leader's gm_slot with NULL.  For the other slots, the code below
+	 * calls ExecInitExtraTupleSlot() to do the initialization of the worker
+	 * slots.
+	 */
+	gm_state->gm_slots =
+		palloc((gm_state->nreaders + 1) * sizeof(TupleTableSlot *));
+	gm_state->gm_slots[gm_state->nreaders] = NULL;
+
+	/* Initialize the tuple slot and tuple array for each worker */
+	gm_state->gm_tuple_buffers =
+		(GMReaderTupleBuffer *) palloc0(sizeof(GMReaderTupleBuffer) *
+										(gm_state->nreaders + 1));
+	for (i = 0; i < gm_state->nreaders; i++)
+	{
+		/* Allocate the tuple array with MAX_TUPLE_STORE size */
+		gm_state->gm_tuple_buffers[i].tuple =
+			(HeapTuple *) palloc0(sizeof(HeapTuple) * MAX_TUPLE_STORE);
+
+		/* Initialize slot for worker */
+		gm_state->gm_slots[i] = ExecInitExtraTupleSlot(gm_state->ps.state);
+		ExecSetSlotDescriptor(gm_state->gm_slots[i],
+							  gm_state->tupDesc);
+	}
+
+	/* Allocate the resources for the merge */
+	gm_state->gm_heap = binaryheap_allocate(gm_state->nreaders + 1,
+											heap_compare_slots,
+											gm_state);
+
+	/*
+	 * First, try to read a tuple from each worker (including the leader) in
+	 * nowait mode, so that reading is initialized for every participant.
+	 * After this, if any worker was unable to produce a tuple, re-read from
+	 * it, this time in wait mode.  For workers that produced a tuple in the
+	 * earlier loop and are still active, just try to fill the tuple array if
+	 * more tuples are available.
+	 */
+reread:
+	for (i = 0; i < nreaders + 1; i++)
+	{
+		if (!gm_state->gm_tuple_buffers[i].done &&
+			(TupIsNull(gm_state->gm_slots[i]) ||
+			 gm_state->gm_slots[i]->tts_isempty))
+		{
+			if (gather_merge_readnext(gm_state, i, initialize))
+			{
+				binaryheap_add_unordered(gm_state->gm_heap,
+										 Int32GetDatum(i));
+			}
+		}
+		else
+			form_tuple_array(gm_state, i);
+	}
+	initialize = false;
+
+	for (i = 0; i < nreaders; i++)
+		if (!gm_state->gm_tuple_buffers[i].done &&
+			(TupIsNull(gm_state->gm_slots[i]) ||
+			 gm_state->gm_slots[i]->tts_isempty))
+			goto reread;
+
+	binaryheap_build(gm_state->gm_heap);
+	gm_state->gm_initialized = true;
+}
+
+/*
+ * Clear out the tuple table slot for each gather merge input,
+ * and return one of the cleared slots.
+ */
+static TupleTableSlot *
+gather_merge_clear_slots(GatherMergeState *gm_state)
+{
+	int			i;
+
+	for (i = 0; i < gm_state->nreaders; i++)
+	{
+		pfree(gm_state->gm_tuple_buffers[i].tuple);
+		gm_state->gm_slots[i] = ExecClearTuple(gm_state->gm_slots[i]);
+	}
+
+	/* Free tuple array as we don't need it any more */
+	pfree(gm_state->gm_tuple_buffers);
+	/* Free the binaryheap, which was created for sort */
+	/* Free the binaryheap that was used for merging */
+
+	/* return any cleared slot */
+	return gm_state->gm_slots[0];
+}
+
+/*
+ * Read the next tuple for gather merge.
+ *
+ * Fetch the next tuple in sort order from the heap.
+ */
+static TupleTableSlot *
+gather_merge_getnext(GatherMergeState *gm_state)
+{
+	int			i;
+
+	/*
+	 * First time through: pull the first tuple from each participant, and set
+	 * up the heap.
+	 */
+	if (gm_state->gm_initialized == false)
+		gather_merge_init(gm_state);
+	else
+	{
+		/*
+		 * Otherwise, pull the next tuple from whichever participant we
+		 * returned from last time, and reinsert the index into the heap,
+		 * because it might now compare differently against the existing
+		 * elements of the heap.
+		 */
+		i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+
+		if (gather_merge_readnext(gm_state, i, false))
+			binaryheap_replace_first(gm_state->gm_heap, Int32GetDatum(i));
+		else
+			(void) binaryheap_remove_first(gm_state->gm_heap);
+	}
+
+	if (binaryheap_empty(gm_state->gm_heap))
+	{
+		/* All the queues are exhausted, and so is the heap */
+		return gather_merge_clear_slots(gm_state);
+	}
+	else
+	{
+		i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+		return gm_state->gm_slots[i];
+	}
+
+	return gather_merge_clear_slots(gm_state);
+}
+
+/*
+ * Read tuples for the given reader in nowait mode, and store them in its tuple array.
+ */
+static void
+form_tuple_array(GatherMergeState *gm_state, int reader)
+{
+	GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+	int			i;
+
+	/* The last slot is the leader's; we don't build a tuple array for the leader */
+	if (reader == gm_state->nreaders)
+		return;
+
+	/*
+	 * We're here because we've already read all the tuples from the tuple
+	 * array, so reset the counters to zero.
+	 */
+	if (tuple_buffer->nTuples == tuple_buffer->readCounter)
+		tuple_buffer->nTuples = tuple_buffer->readCounter = 0;
+
+	/* Tuple array is already full? */
+	if (tuple_buffer->nTuples == MAX_TUPLE_STORE)
+		return;
+
+	for (i = tuple_buffer->nTuples; i < MAX_TUPLE_STORE; i++)
+	{
+		tuple_buffer->tuple[i] = heap_copytuple(gm_readnext_tuple(gm_state,
+																  reader,
+																  false,
+													   &tuple_buffer->done));
+		if (!HeapTupleIsValid(tuple_buffer->tuple[i]))
+			break;
+		tuple_buffer->nTuples++;
+	}
+}
+
+/*
+ * Store the next tuple for a given reader into the appropriate slot.
+ *
+ * Returns false if the reader is exhausted, and true otherwise.
+ */
+static bool
+gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait)
+{
+	GMReaderTupleBuffer *tuple_buffer;
+	HeapTuple	tup = NULL;
+
+	/*
+	 * If we're being asked to generate a tuple from the leader, then we
+	 * just call ExecProcNode as normal to produce one.
+	 */
+	if (gm_state->nreaders == reader)
+	{
+		if (gm_state->need_to_scan_locally)
+		{
+			PlanState  *outerPlan = outerPlanState(gm_state);
+			TupleTableSlot *outerTupleSlot;
+
+			outerTupleSlot = ExecProcNode(outerPlan);
+
+			if (!TupIsNull(outerTupleSlot))
+			{
+				gm_state->gm_slots[reader] = outerTupleSlot;
+				return true;
+			}
+			gm_state->gm_tuple_buffers[reader].done = true;
+			gm_state->need_to_scan_locally = false;
+		}
+		return false;
+	}
+
+	/* Otherwise, check the state of the relevant tuple buffer. */
+	tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+
+	if (tuple_buffer->nTuples > tuple_buffer->readCounter)
+	{
+		/* Return any tuple previously read that is still buffered. */
+		tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+		tup = tuple_buffer->tuple[tuple_buffer->readCounter++];
+	}
+	else if (tuple_buffer->done)
+	{
+		/* Reader is known to be exhausted. */
+		DestroyTupleQueueReader(gm_state->reader[reader]);
+		gm_state->reader[reader] = NULL;
+		return false;
+	}
+	else
+	{
+		/* Read and buffer next tuple. */
+		tup = heap_copytuple(gm_readnext_tuple(gm_state,
+											   reader,
+											   nowait,
+											   &tuple_buffer->done));
+
+		/*
+		 * Attempt to read more tuples in nowait mode and store them in
+		 * the tuple array.
+		 */
+		if (HeapTupleIsValid(tup))
+			form_tuple_array(gm_state, reader);
+		else
+			return false;
+	}
+
+	Assert(HeapTupleIsValid(tup));
+
+	/* Build the TupleTableSlot for the given tuple */
+	ExecStoreTuple(tup,			/* tuple to store */
+				   gm_state->gm_slots[reader],	/* slot in which to store the
+												 * tuple */
+				   InvalidBuffer,		/* buffer associated with this tuple */
+				   true);		/* pfree this pointer if not from heap */
+
+	return true;
+}
+
+/*
+ * Attempt to read a tuple from the given reader.
+ */
+static HeapTuple
+gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait,
+				  bool *done)
+{
+	TupleQueueReader *reader;
+	HeapTuple	tup = NULL;
+	MemoryContext oldContext;
+	MemoryContext tupleContext;
+
+	tupleContext = gm_state->ps.ps_ExprContext->ecxt_per_tuple_memory;
+
+	if (done != NULL)
+		*done = false;
+
+	/* Check for async events, particularly messages from workers. */
+	CHECK_FOR_INTERRUPTS();
+
+	/* Attempt to read a tuple. */
+	reader = gm_state->reader[nreader];
+
+	/* Run TupleQueueReaders in per-tuple context */
+	oldContext = MemoryContextSwitchTo(tupleContext);
+	tup = TupleQueueReaderNext(reader, nowait, done);
+	MemoryContextSwitchTo(oldContext);
+
+	return tup;
+}
+
+/*
+ * We have one slot for each item in the heap array.  We use SlotNumber
+ * to store slot indexes.  This doesn't actually provide any formal
+ * type-safety, but it makes the code more self-documenting.
+ */
+typedef int32 SlotNumber;
+
+/*
+ * Compare the tuples in the two given slots.
+ */
+static int32
+heap_compare_slots(Datum a, Datum b, void *arg)
+{
+	GatherMergeState *node = (GatherMergeState *) arg;
+	SlotNumber	slot1 = DatumGetInt32(a);
+	SlotNumber	slot2 = DatumGetInt32(b);
+
+	TupleTableSlot *s1 = node->gm_slots[slot1];
+	TupleTableSlot *s2 = node->gm_slots[slot2];
+	int			nkey;
+
+	Assert(!TupIsNull(s1));
+	Assert(!TupIsNull(s2));
+
+	for (nkey = 0; nkey < node->gm_nkeys; nkey++)
+	{
+		SortSupport sortKey = node->gm_sortkeys + nkey;
+		AttrNumber	attno = sortKey->ssup_attno;
+		Datum		datum1,
+					datum2;
+		bool		isNull1,
+					isNull2;
+		int			compare;
+
+		datum1 = slot_getattr(s1, attno, &isNull1);
+		datum2 = slot_getattr(s2, attno, &isNull2);
+
+		compare = ApplySortComparator(datum1, isNull1,
+									  datum2, isNull2,
+									  sortKey);
+		if (compare != 0)
+			return -compare;
+	}
+	return 0;
+}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 05d8538..763a27f 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -359,6 +359,31 @@ _copyGather(const Gather *from)
 	return newnode;
 }
 
+/*
+ * _copyGatherMerge
+ */
+static GatherMerge *
+_copyGatherMerge(const GatherMerge *from)
+{
+	GatherMerge	   *newnode = makeNode(GatherMerge);
+
+	/*
+	 * copy node superclass fields
+	 */
+	CopyPlanFields((const Plan *) from, (Plan *) newnode);
+
+	/*
+	 * copy remainder of node
+	 */
+	COPY_SCALAR_FIELD(num_workers);
+	COPY_SCALAR_FIELD(numCols);
+	COPY_POINTER_FIELD(sortColIdx, from->numCols * sizeof(AttrNumber));
+	COPY_POINTER_FIELD(sortOperators, from->numCols * sizeof(Oid));
+	COPY_POINTER_FIELD(collations, from->numCols * sizeof(Oid));
+	COPY_POINTER_FIELD(nullsFirst, from->numCols * sizeof(bool));
+
+	return newnode;
+}
 
 /*
  * CopyScanFields
@@ -4523,6 +4548,9 @@ copyObject(const void *from)
 		case T_Gather:
 			retval = _copyGather(from);
 			break;
+		case T_GatherMerge:
+			retval = _copyGatherMerge(from);
+			break;
 		case T_SeqScan:
 			retval = _copySeqScan(from);
 			break;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index b3802b4..afb0fc6 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -457,6 +457,35 @@ _outGather(StringInfo str, const Gather *node)
 }
 
 static void
+_outGatherMerge(StringInfo str, const GatherMerge *node)
+{
+	int		i;
+
+	WRITE_NODE_TYPE("GATHERMERGE");
+
+	_outPlanInfo(str, (const Plan *) node);
+
+	WRITE_INT_FIELD(num_workers);
+	WRITE_INT_FIELD(numCols);
+
+	appendStringInfoString(str, " :sortColIdx");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %d", node->sortColIdx[i]);
+
+	appendStringInfoString(str, " :sortOperators");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %u", node->sortOperators[i]);
+
+	appendStringInfoString(str, " :collations");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %u", node->collations[i]);
+
+	appendStringInfoString(str, " :nullsFirst");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %s", booltostr(node->nullsFirst[i]));
+}
+
+static void
 _outScan(StringInfo str, const Scan *node)
 {
 	WRITE_NODE_TYPE("SCAN");
@@ -1985,6 +2014,17 @@ _outLimitPath(StringInfo str, const LimitPath *node)
 }
 
 static void
+_outGatherMergePath(StringInfo str, const GatherMergePath *node)
+{
+	WRITE_NODE_TYPE("GATHERMERGEPATH");
+
+	_outPathInfo(str, (const Path *) node);
+
+	WRITE_NODE_FIELD(subpath);
+	WRITE_INT_FIELD(num_workers);
+}
+
+static void
 _outNestPath(StringInfo str, const NestPath *node)
 {
 	WRITE_NODE_TYPE("NESTPATH");
@@ -3410,6 +3450,9 @@ outNode(StringInfo str, const void *obj)
 			case T_Gather:
 				_outGather(str, obj);
 				break;
+			case T_GatherMerge:
+				_outGatherMerge(str, obj);
+				break;
 			case T_Scan:
 				_outScan(str, obj);
 				break;
@@ -3740,6 +3783,9 @@ outNode(StringInfo str, const void *obj)
 			case T_LimitPath:
 				_outLimitPath(str, obj);
 				break;
+			case T_GatherMergePath:
+				_outGatherMergePath(str, obj);
+				break;
 			case T_NestPath:
 				_outNestPath(str, obj);
 				break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index d2f69fe..c01e741 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -2095,6 +2095,26 @@ _readGather(void)
 }
 
 /*
+ * _readGatherMerge
+ */
+static GatherMerge *
+_readGatherMerge(void)
+{
+	READ_LOCALS(GatherMerge);
+
+	ReadCommonPlan(&local_node->plan);
+
+	READ_INT_FIELD(num_workers);
+	READ_INT_FIELD(numCols);
+	READ_ATTRNUMBER_ARRAY(sortColIdx, local_node->numCols);
+	READ_OID_ARRAY(sortOperators, local_node->numCols);
+	READ_OID_ARRAY(collations, local_node->numCols);
+	READ_BOOL_ARRAY(nullsFirst, local_node->numCols);
+
+	READ_DONE();
+}
+
+/*
  * _readHash
  */
 static Hash *
@@ -2530,6 +2550,8 @@ parseNodeString(void)
 		return_value = _readUnique();
 	else if (MATCH("GATHER", 6))
 		return_value = _readGather();
+	else if (MATCH("GATHERMERGE", 11))
+		return_value = _readGatherMerge();
 	else if (MATCH("HASH", 4))
 		return_value = _readHash();
 	else if (MATCH("SETOP", 5))
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index eeacf81..38da080 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -2047,39 +2047,51 @@ set_worktable_pathlist(PlannerInfo *root, RelOptInfo *rel, RangeTblEntry *rte)
 
 /*
  * generate_gather_paths
- *		Generate parallel access paths for a relation by pushing a Gather on
- *		top of a partial path.
+ *		Generate parallel access paths for a relation by pushing a Gather or
+ *		Gather Merge on top of a partial path.
  *
  * This must not be called until after we're done creating all partial paths
  * for the specified relation.  (Otherwise, add_partial_path might delete a
- * path that some GatherPath has a reference to.)
+ * path that some GatherPath or GatherMergePath has a reference to.)
  */
 void
 generate_gather_paths(PlannerInfo *root, RelOptInfo *rel)
 {
 	Path	   *cheapest_partial_path;
 	Path	   *simple_gather_path;
+	ListCell   *lc;
 
 	/* If there are no partial paths, there's nothing to do here. */
 	if (rel->partial_pathlist == NIL)
 		return;
 
 	/*
-	 * The output of Gather is currently always unsorted, so there's only one
-	 * partial path of interest: the cheapest one.  That will be the one at
-	 * the front of partial_pathlist because of the way add_partial_path
-	 * works.
-	 *
-	 * Eventually, we should have a Gather Merge operation that can merge
-	 * multiple tuple streams together while preserving their ordering.  We
-	 * could usefully generate such a path from each partial path that has
-	 * non-NIL pathkeys.
+	 * The output of Gather is always unsorted, so there's only one partial
+	 * path of interest: the cheapest one.  That will be the one at the front
+	 * of partial_pathlist because of the way add_partial_path works.
 	 */
 	cheapest_partial_path = linitial(rel->partial_pathlist);
 	simple_gather_path = (Path *)
 		create_gather_path(root, rel, cheapest_partial_path, rel->reltarget,
 						   NULL, NULL);
 	add_path(rel, simple_gather_path);
+
+	/*
+	 * For each useful ordering, we can consider an order-preserving Gather
+	 * Merge.
+	 */
+	foreach (lc, rel->partial_pathlist)
+	{
+		Path   *subpath = (Path *) lfirst(lc);
+		GatherMergePath   *path;
+
+		if (subpath->pathkeys == NIL)
+			continue;
+
+		path = create_gather_merge_path(root, rel, subpath, rel->reltarget,
+										subpath->pathkeys, NULL, NULL);
+		add_path(rel, &path->path);
+	}
 }
 
 /*
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index d01630f..7475aef 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -126,6 +126,7 @@ bool		enable_nestloop = true;
 bool		enable_material = true;
 bool		enable_mergejoin = true;
 bool		enable_hashjoin = true;
+bool		enable_gathermerge = true;
 
 typedef struct
 {
@@ -373,6 +374,73 @@ cost_gather(GatherPath *path, PlannerInfo *root,
 }
 
 /*
+ * cost_gather_merge
+ *	  Determines and returns the cost of a gather merge path.
+ *
+ * GatherMerge merges several pre-sorted input streams, using a heap that at
+ * any given instant holds the next tuple from each stream. If there are N
+ * streams, we need about N*log2(N) tuple comparisons to construct the heap at
+ * startup, and then for each output tuple, about log2(N) comparisons to
+ * replace the top heap entry with the next tuple from the same stream.
+ */
+void
+cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+				  RelOptInfo *rel, ParamPathInfo *param_info,
+				  Cost input_startup_cost, Cost input_total_cost,
+				  double *rows)
+{
+	Cost		startup_cost = 0;
+	Cost		run_cost = 0;
+	Cost		comparison_cost;
+	double		N;
+	double		logN;
+
+	/* Mark the path with the correct row estimate */
+	if (rows)
+		path->path.rows = *rows;
+	else if (param_info)
+		path->path.rows = param_info->ppi_rows;
+	else
+		path->path.rows = rel->rows;
+
+	if (!enable_gathermerge)
+		startup_cost += disable_cost;
+
+	/*
+	 * Add one to the number of workers to account for the leader.  This might
+	 * be overgenerous since the leader will do less work than other workers
+	 * in typical cases, but we'll go with it for now.
+	 */
+	Assert(path->num_workers > 0);
+	N = (double) path->num_workers + 1;
+	logN = LOG2(N);
+
+	/* Assumed cost per tuple comparison */
+	comparison_cost = 2.0 * cpu_operator_cost;
+
+	/* Heap creation cost */
+	startup_cost += comparison_cost * N * logN;
+
+	/* Per-tuple heap maintenance cost */
+	run_cost += path->path.rows * comparison_cost * logN;
+
+	/* small cost for heap management, like cost_merge_append */
+	run_cost += cpu_operator_cost * path->path.rows;
+
+	/*
+	 * Parallel setup and communication cost.  Since Gather Merge, unlike
+	 * Gather, requires us to block until a tuple is available from every
+	 * worker, we bump the IPC cost up a little bit as compared with Gather.
+	 * For lack of a better idea, charge an extra 5%.
+	 */
+	startup_cost += parallel_setup_cost;
+	run_cost += parallel_tuple_cost * path->path.rows * 1.05;
+
+	path->path.startup_cost = startup_cost + input_startup_cost;
+	path->path.total_cost = (startup_cost + run_cost + input_total_cost);
+}
+
+/*
  * cost_index
  *	  Determines and returns the cost of scanning a relation using an index.
  *
@@ -2531,9 +2599,9 @@ final_cost_mergejoin(PlannerInfo *root, MergePath *path,
 	 *
 	 * Since the inner side must be ordered, and only Sorts and IndexScans can
 	 * create order to begin with, and they both support mark/restore, you
-	 * might think there's no problem --- but you'd be wrong.  Nestloop and
-	 * merge joins can *preserve* the order of their inputs, so they can be
-	 * selected as the input of a mergejoin, and they don't support
+	 * might think there's no problem --- but you'd be wrong.  Nestloop, merge
+	 * joins and gather merge can *preserve* the order of their inputs, so they
+	 * can be selected as the input of a mergejoin, and they don't support
 	 * mark/restore at present.
 	 *
 	 * We don't test the value of enable_material here, because
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 997bdcf..e08a6c3 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -272,6 +272,8 @@ static ModifyTable *make_modifytable(PlannerInfo *root,
 				 List *resultRelations, List *subplans,
 				 List *withCheckOptionLists, List *returningLists,
 				 List *rowMarks, OnConflictExpr *onconflict, int epqParam);
+static GatherMerge *create_gather_merge_plan(PlannerInfo *root,
+						 GatherMergePath *best_path);
 
 
 /*
@@ -469,6 +471,10 @@ create_plan_recurse(PlannerInfo *root, Path *best_path, int flags)
 											  (LimitPath *) best_path,
 											  flags);
 			break;
+		case T_GatherMerge:
+			plan = (Plan *) create_gather_merge_plan(root,
+											  (GatherMergePath *) best_path);
+			break;
 		default:
 			elog(ERROR, "unrecognized node type: %d",
 				 (int) best_path->pathtype);
@@ -1439,6 +1445,86 @@ create_gather_plan(PlannerInfo *root, GatherPath *best_path)
 }
 
 /*
+ * create_gather_merge_plan
+ *
+ *	  Create a Gather Merge plan for 'best_path' and (recursively)
+ *	  plans for its subpaths.
+ */
+static GatherMerge *
+create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
+{
+	GatherMerge *gm_plan;
+	Plan	   *subplan;
+	List	   *pathkeys = best_path->path.pathkeys;
+	int			numsortkeys;
+	AttrNumber *sortColIdx;
+	Oid		   *sortOperators;
+	Oid		   *collations;
+	bool	   *nullsFirst;
+
+	/* As with Gather, it's best to project away columns in the workers. */
+	subplan = create_plan_recurse(root, best_path->subpath, CP_EXACT_TLIST);
+
+	/* See create_merge_append_plan for why there's no make_xxx function */
+	gm_plan = makeNode(GatherMerge);
+	gm_plan->plan.targetlist = subplan->targetlist;
+	gm_plan->num_workers = best_path->num_workers;
+	copy_generic_path_info(&gm_plan->plan, &best_path->path);
+
+	/* Gather Merge is pointless with no pathkeys; use Gather instead. */
+	Assert(pathkeys != NIL);
+
+	/* Compute sort column info, and adjust GatherMerge tlist as needed */
+	(void) prepare_sort_from_pathkeys(&gm_plan->plan, pathkeys,
+									  best_path->path.parent->relids,
+									  NULL,
+									  true,
+									  &gm_plan->numCols,
+									  &gm_plan->sortColIdx,
+									  &gm_plan->sortOperators,
+									  &gm_plan->collations,
+									  &gm_plan->nullsFirst);
+
+
+	/* Compute sort column info, and adjust subplan's tlist as needed */
+	subplan = prepare_sort_from_pathkeys(subplan, pathkeys,
+										 best_path->subpath->parent->relids,
+										 gm_plan->sortColIdx,
+										 false,
+										 &numsortkeys,
+										 &sortColIdx,
+										 &sortOperators,
+										 &collations,
+										 &nullsFirst);
+
+	/* As for MergeAppend, check that we got the same sort key information. */
+	Assert(numsortkeys == gm_plan->numCols);
+	if (memcmp(sortColIdx, gm_plan->sortColIdx,
+			   numsortkeys * sizeof(AttrNumber)) != 0)
+		elog(ERROR, "GatherMerge child's targetlist doesn't match GatherMerge");
+	Assert(memcmp(sortOperators, gm_plan->sortOperators,
+				  numsortkeys * sizeof(Oid)) == 0);
+	Assert(memcmp(collations, gm_plan->collations,
+				  numsortkeys * sizeof(Oid)) == 0);
+	Assert(memcmp(nullsFirst, gm_plan->nullsFirst,
+				  numsortkeys * sizeof(bool)) == 0);
+
+	/* Now, insert a Sort node if subplan isn't sufficiently ordered */
+	if (!pathkeys_contained_in(pathkeys, best_path->subpath->pathkeys))
+		subplan = (Plan *) make_sort(subplan, numsortkeys,
+									 sortColIdx, sortOperators,
+									 collations, nullsFirst);
+
+	/* Now insert the subplan under GatherMerge. */
+	gm_plan->plan.lefttree = subplan;
+
+	/* use parallel mode for parallel plans. */
+	root->glob->parallelModeNeeded = true;
+
+	return gm_plan;
+}
+
+/*
  * create_projection_plan
  *
  *	  Create a plan tree to do a projection step and (recursively) plans
@@ -2277,7 +2363,6 @@ create_limit_plan(PlannerInfo *root, LimitPath *best_path, int flags)
 	return plan;
 }
 
-
 /*****************************************************************************
  *
  *	BASE-RELATION SCAN METHODS
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 3d33d46..c1c9046 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -3652,8 +3652,7 @@ create_grouping_paths(PlannerInfo *root,
 
 		/*
 		 * Now generate a complete GroupAgg Path atop of the cheapest partial
-		 * path. We need only bother with the cheapest path here, as the
-		 * output of Gather is never sorted.
+		 * path.  We can do this using either Gather or Gather Merge.
 		 */
 		if (grouped_rel->partial_pathlist)
 		{
@@ -3700,6 +3699,70 @@ create_grouping_paths(PlannerInfo *root,
 										   parse->groupClause,
 										   (List *) parse->havingQual,
 										   dNumGroups));
+
+			/*
+			 * The point of using Gather Merge rather than Gather is that it
+			 * can preserve the ordering of the input path, so there's no
+			 * reason to try it unless (1) it's possible to produce more than
+			 * one output row and (2) we want the output path to be ordered.
+			 */
+			if (parse->groupClause != NIL && root->group_pathkeys != NIL)
+			{
+				foreach(lc, grouped_rel->partial_pathlist)
+				{
+					Path	   *subpath = (Path *) lfirst(lc);
+					Path	   *gmpath;
+					double		total_groups;
+
+					/*
+					 * It's useful to consider paths that are already properly
+					 * ordered for Gather Merge, because those don't need a
+					 * sort.  It's also useful to consider the cheapest path,
+					 * because sorting it in parallel and then doing Gather
+					 * Merge may be better than doing an unordered Gather
+					 * followed by a sort.  But there's no point in
+					 * considering non-cheapest paths that aren't already
+					 * sorted correctly.
+					 */
+					if (path != subpath &&
+						!pathkeys_contained_in(root->group_pathkeys,
+											   subpath->pathkeys))
+						continue;
+
+					total_groups = subpath->rows * subpath->parallel_workers;
+
+					gmpath = (Path *)
+						create_gather_merge_path(root,
+												 grouped_rel,
+												 subpath,
+												 NULL,
+												 root->group_pathkeys,
+												 NULL,
+												 &total_groups);
+
+					if (parse->hasAggs)
+						add_path(grouped_rel, (Path *)
+								 create_agg_path(root,
+												 grouped_rel,
+												 gmpath,
+												 target,
+								 parse->groupClause ? AGG_SORTED : AGG_PLAIN,
+												 AGGSPLIT_FINAL_DESERIAL,
+												 parse->groupClause,
+												 (List *) parse->havingQual,
+												 &agg_final_costs,
+												 dNumGroups));
+					else
+						add_path(grouped_rel, (Path *)
+								 create_group_path(root,
+												   grouped_rel,
+												   gmpath,
+												   target,
+												   parse->groupClause,
+												   (List *) parse->havingQual,
+												   dNumGroups));
+				}
+			}
 		}
 	}
 
@@ -3797,6 +3860,16 @@ create_grouping_paths(PlannerInfo *root,
 	/* Now choose the best path(s) */
 	set_cheapest(grouped_rel);
 
+	/*
+	 * We've been using the partial pathlist for the grouped relation to hold
+	 * partially aggregated paths, but that's actually a little bit bogus
+	 * because it's unsafe for later planning stages -- like ordered_rel ---
+	 * to get the idea that they can use these partial paths as if they didn't
+	 * need a FinalizeAggregate step.  Zap the partial pathlist at this stage
+	 * so we don't get confused.
+	 */
+	grouped_rel->partial_pathlist = NIL;
+
 	return grouped_rel;
 }
 
@@ -4266,6 +4339,56 @@ create_ordered_paths(PlannerInfo *root,
 	}
 
 	/*
+	 * generate_gather_paths() will have already generated a simple Gather
+	 * path for the best parallel path, if any, and the loop above will have
+	 * considered sorting it.  Similarly, generate_gather_paths() will also
+	 * have generated order-preserving Gather Merge plans which can be used
+	 * without sorting if they happen to match the sort_pathkeys, and the loop
+	 * above will have handled those as well.  However, there's one more
+	 * possibility: it may make sense to sort the cheapest partial path
+	 * according to the required output order and then use Gather Merge.
+	 */
+	if (ordered_rel->consider_parallel && root->sort_pathkeys != NIL &&
+		input_rel->partial_pathlist != NIL)
+	{
+		Path	   *cheapest_partial_path;
+
+		cheapest_partial_path = linitial(input_rel->partial_pathlist);
+
+		/*
+		 * If cheapest partial path doesn't need a sort, this is redundant
+		 * with what's already been tried.
+		 */
+		if (!pathkeys_contained_in(root->sort_pathkeys,
+								   cheapest_partial_path->pathkeys))
+		{
+			Path	   *path;
+			double		total_groups;
+
+			path = (Path *) create_sort_path(root,
+											 ordered_rel,
+											 cheapest_partial_path,
+											 root->sort_pathkeys,
+											 limit_tuples);
+
+			total_groups = cheapest_partial_path->rows *
+				cheapest_partial_path->parallel_workers;
+			path = (Path *)
+				create_gather_merge_path(root, ordered_rel,
+										 path,
+										 target, root->sort_pathkeys, NULL,
+										 &total_groups);
+
+			/* Add projection step if needed */
+			if (path->pathtarget != target)
+				path = apply_projection_to_path(root, ordered_rel,
+												path, target);
+
+			add_path(ordered_rel, path);
+		}
+	}
+
+	/*
 	 * If there is an FDW that's responsible for all baserels of the query,
 	 * let it consider adding ForeignPaths.
 	 */
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index be267b9..cc1c66e 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -604,6 +604,7 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 			break;
 
 		case T_Gather:
+		case T_GatherMerge:
 			set_upper_references(root, plan, rtoffset);
 			break;
 
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 7954c44..c82d654 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2695,6 +2695,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		case T_Sort:
 		case T_Unique:
 		case T_Gather:
+		case T_GatherMerge:
 		case T_SetOp:
 		case T_Group:
 			/* no node-type-specific fields need fixing */
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index 3248296..398e5dd 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1627,6 +1627,66 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
 }
 
 /*
+ * create_gather_merge_path
+ *
+ *	  Creates a path corresponding to a gather merge scan, returning
+ *	  the pathnode.
+ */
+GatherMergePath *
+create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
+						 PathTarget *target, List *pathkeys,
+						 Relids required_outer, double *rows)
+{
+	GatherMergePath *pathnode = makeNode(GatherMergePath);
+	Cost			 input_startup_cost = 0;
+	Cost			 input_total_cost = 0;
+
+	Assert(subpath->parallel_safe);
+	Assert(pathkeys);
+
+	pathnode->path.pathtype = T_GatherMerge;
+	pathnode->path.parent = rel;
+	pathnode->path.param_info = get_baserel_parampathinfo(root, rel,
+														  required_outer);
+	pathnode->path.parallel_aware = false;
+
+	pathnode->subpath = subpath;
+	pathnode->num_workers = subpath->parallel_workers;
+	pathnode->path.pathkeys = pathkeys;
+	pathnode->path.pathtarget = target ? target : rel->reltarget;
+	pathnode->path.rows += subpath->rows;
+
+	if (pathkeys_contained_in(pathkeys, subpath->pathkeys))
+	{
+		/* Subpath is adequately ordered, we won't need to sort it */
+		input_startup_cost += subpath->startup_cost;
+		input_total_cost += subpath->total_cost;
+	}
+	else
+	{
+		/* We'll need to insert a Sort node, so include cost for that */
+		Path		sort_path;		/* dummy for result of cost_sort */
+
+		cost_sort(&sort_path,
+				  root,
+				  pathkeys,
+				  subpath->total_cost,
+				  subpath->rows,
+				  subpath->pathtarget->width,
+				  0.0,
+				  work_mem,
+				  -1);
+		input_startup_cost += sort_path.startup_cost;
+		input_total_cost += sort_path.total_cost;
+	}
+
+	cost_gather_merge(pathnode, root, rel, pathnode->path.param_info,
+					  input_startup_cost, input_total_cost, rows);
+
+	return pathnode;
+}
+
+/*
  * translate_sub_tlist - get subquery column numbers represented by tlist
  *
  * The given targetlist usually contains only Vars referencing the given relid.
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 5d8fb2e..92f4463 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -901,6 +901,15 @@ static struct config_bool ConfigureNamesBool[] =
 		true,
 		NULL, NULL, NULL
 	},
+	{
+		{"enable_gathermerge", PGC_USERSET, QUERY_TUNING_METHOD,
+			gettext_noop("Enables the planner's use of gather merge plans."),
+			NULL
+		},
+		&enable_gathermerge,
+		true,
+		NULL, NULL, NULL
+	},
 
 	{
 		{"geqo", PGC_USERSET, QUERY_TUNING_GEQO,
diff --git a/src/include/executor/nodeGatherMerge.h b/src/include/executor/nodeGatherMerge.h
new file mode 100644
index 0000000..3c8b42b
--- /dev/null
+++ b/src/include/executor/nodeGatherMerge.h
@@ -0,0 +1,27 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.h
+ *		prototypes for nodeGatherMerge.c
+ *
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/executor/nodeGatherMerge.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef NODEGATHERMERGE_H
+#define NODEGATHERMERGE_H
+
+#include "nodes/execnodes.h"
+
+extern GatherMergeState *ExecInitGatherMerge(GatherMerge * node,
+					EState *estate,
+					int eflags);
+extern TupleTableSlot *ExecGatherMerge(GatherMergeState * node);
+extern void ExecEndGatherMerge(GatherMergeState * node);
+extern void ExecReScanGatherMerge(GatherMergeState * node);
+extern void ExecShutdownGatherMerge(GatherMergeState * node);
+
+#endif   /* NODEGATHERMERGE_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 9f41bab..e744d3d 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -2010,6 +2010,35 @@ typedef struct GatherState
 } GatherState;
 
 /* ----------------
+ * GatherMergeState information
+ *
+ *		Gather merge nodes launch 1 or more parallel workers, run a
+ *		subplan which produces sorted output in each worker, and then
+ *		merge the results into a single sorted stream.
+ * ----------------
+ */
+struct GMReaderTuple;
+
+typedef struct GatherMergeState
+{
+	PlanState	ps;				/* its first field is NodeTag */
+	bool		initialized;
+	struct ParallelExecutorInfo *pei;
+	int			nreaders;
+	int			nworkers_launched;
+	struct TupleQueueReader **reader;
+	TupleDesc	tupDesc;
+	TupleTableSlot **gm_slots;
+	struct binaryheap *gm_heap; /* binary heap of slot indices */
+	bool		gm_initialized; /* gather merge initialized? */
+	bool		need_to_scan_locally;
+	int			gm_nkeys;
+	SortSupport gm_sortkeys;	/* array of length gm_nkeys */
+	struct GMReaderTupleBuffer *gm_tuple_buffers;		/* tuple buffer per
+														 * reader */
+} GatherMergeState;
+
+/* ----------------
  *	 HashState information
  * ----------------
  */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 95dd8ba..3530e41 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -76,6 +76,7 @@ typedef enum NodeTag
 	T_WindowAgg,
 	T_Unique,
 	T_Gather,
+	T_GatherMerge,
 	T_Hash,
 	T_SetOp,
 	T_LockRows,
@@ -125,6 +126,7 @@ typedef enum NodeTag
 	T_WindowAggState,
 	T_UniqueState,
 	T_GatherState,
+	T_GatherMergeState,
 	T_HashState,
 	T_SetOpState,
 	T_LockRowsState,
@@ -246,6 +248,7 @@ typedef enum NodeTag
 	T_MaterialPath,
 	T_UniquePath,
 	T_GatherPath,
+	T_GatherMergePath,
 	T_ProjectionPath,
 	T_ProjectSetPath,
 	T_SortPath,
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index f72f7a8..8dbce7a 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -785,6 +785,22 @@ typedef struct Gather
 	bool		invisible;		/* suppress EXPLAIN display (for testing)? */
 } Gather;
 
+/* ------------
+ *		gather merge node
+ * ------------
+ */
+typedef struct GatherMerge
+{
+	Plan		plan;
+	int			num_workers;
+	/* remaining fields are just like the sort-key info in struct Sort */
+	int			numCols;		/* number of sort-key columns */
+	AttrNumber *sortColIdx;		/* their indexes in the target list */
+	Oid		   *sortOperators;	/* OIDs of operators to sort them by */
+	Oid		   *collations;		/* OIDs of collations */
+	bool	   *nullsFirst;		/* NULLS FIRST/LAST directions */
+} GatherMerge;
+
 /* ----------------
  *		hash build node
  *
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index f7ac6f6..05d6f07 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -1204,6 +1204,19 @@ typedef struct GatherPath
 } GatherPath;
 
 /*
+ * GatherMergePath runs several copies of a plan in parallel and collects
+ * the results, preserving their common sort order.  For Gather Merge, the
+ * parallel leader always executes the plan itself as well.
+ */
+typedef struct GatherMergePath
+{
+	Path		path;
+	Path	   *subpath;		/* path for each worker */
+	int			num_workers;	/* number of workers sought to help */
+} GatherMergePath;
+
+
+/*
  * All join-type paths share these fields.
  */
 
diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h
index 72200fa..fdb677f 100644
--- a/src/include/optimizer/cost.h
+++ b/src/include/optimizer/cost.h
@@ -66,6 +66,7 @@ extern bool enable_nestloop;
 extern bool enable_material;
 extern bool enable_mergejoin;
 extern bool enable_hashjoin;
+extern bool enable_gathermerge;
 extern int	constraint_exclusion;
 
 extern double clamp_row_est(double nrows);
@@ -200,5 +201,9 @@ extern Selectivity clause_selectivity(PlannerInfo *root,
 				   int varRelid,
 				   JoinType jointype,
 				   SpecialJoinInfo *sjinfo);
+extern void cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+							  RelOptInfo *rel, ParamPathInfo *param_info,
+							  Cost input_startup_cost, Cost input_total_cost,
+							  double *rows);
 
 #endif   /* COST_H */
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index 53cad24..1f68334 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -77,6 +77,13 @@ extern UniquePath *create_unique_path(PlannerInfo *root, RelOptInfo *rel,
 extern GatherPath *create_gather_path(PlannerInfo *root,
 				   RelOptInfo *rel, Path *subpath, PathTarget *target,
 				   Relids required_outer, double *rows);
+extern GatherMergePath *create_gather_merge_path(PlannerInfo *root,
+												 RelOptInfo *rel,
+												 Path *subpath,
+												 PathTarget *target,
+												 List *pathkeys,
+												 Relids required_outer,
+												 double *rows);
 extern SubqueryScanPath *create_subqueryscan_path(PlannerInfo *root,
 						 RelOptInfo *rel, Path *subpath,
 						 List *pathkeys, Relids required_outer);
diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out
index 48fb80e..a0b7caa 100644
--- a/src/test/regress/expected/select_parallel.out
+++ b/src/test/regress/expected/select_parallel.out
@@ -148,6 +148,33 @@ select  count((unique1)) from tenk1 where hundred > 1;
 
 reset enable_seqscan;
 reset enable_bitmapscan;
+--test gather merge
+set enable_hashagg to off;
+explain (costs off)
+	select  string4, count((unique2)) from tenk1 group by string4 order by string4;
+                     QUERY PLAN                     
+----------------------------------------------------
+ Finalize GroupAggregate
+   Group Key: string4
+   ->  Gather Merge
+         Workers Planned: 4
+         ->  Partial GroupAggregate
+               Group Key: string4
+               ->  Sort
+                     Sort Key: string4
+                     ->  Parallel Seq Scan on tenk1
+(9 rows)
+
+select  string4, count((unique2)) from tenk1 group by string4 order by string4;
+ string4 | count 
+---------+-------
+ AAAAxx  |  2500
+ HHHHxx  |  2500
+ OOOOxx  |  2500
+ VVVVxx  |  2500
+(4 rows)
+
+reset enable_hashagg;
 set force_parallel_mode=1;
 explain (costs off)
   select stringu1::int2 from tenk1 where unique1 = 1;
diff --git a/src/test/regress/expected/sysviews.out b/src/test/regress/expected/sysviews.out
index d48abd7..568b783 100644
--- a/src/test/regress/expected/sysviews.out
+++ b/src/test/regress/expected/sysviews.out
@@ -73,6 +73,7 @@ select name, setting from pg_settings where name like 'enable%';
          name         | setting 
 ----------------------+---------
  enable_bitmapscan    | on
+ enable_gathermerge   | on
  enable_hashagg       | on
  enable_hashjoin      | on
  enable_indexonlyscan | on
@@ -83,7 +84,7 @@ select name, setting from pg_settings where name like 'enable%';
  enable_seqscan       | on
  enable_sort          | on
  enable_tidscan       | on
-(11 rows)
+(12 rows)
 
 -- Test that the pg_timezone_names and pg_timezone_abbrevs views are
 -- more-or-less working.  We can't test their contents in any great detail
diff --git a/src/test/regress/sql/select_parallel.sql b/src/test/regress/sql/select_parallel.sql
index f5bc4d1..4657134 100644
--- a/src/test/regress/sql/select_parallel.sql
+++ b/src/test/regress/sql/select_parallel.sql
@@ -59,6 +59,16 @@ select  count((unique1)) from tenk1 where hundred > 1;
 reset enable_seqscan;
 reset enable_bitmapscan;
 
+--test gather merge
+set enable_hashagg to off;
+
+explain (costs off)
+	select  string4, count((unique2)) from tenk1 group by string4 order by string4;
+
+select  string4, count((unique2)) from tenk1 group by string4 order by string4;
+
+reset enable_hashagg;
+
 set force_parallel_mode=1;
 
 explain (costs off)
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 9f876ae..ac2302c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -780,6 +780,9 @@ GV
 Gather
 GatherPath
 GatherState
+GatherMerge
+GatherMergePath
+GatherMergeState
 Gene
 GenericCosts
 GenericExprState
#46Dilip Kumar
dilipbalaut@gmail.com
In reply to: Rushabh Lathia (#45)
Re: Gather Merge

On Mon, Feb 20, 2017 at 12:05 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

Thanks Amit for raising this point. I was not at all aware of mark/restore.
I tried to come up with the case, but haven't found such case.

For now here is the patch with comment update.

I think to reproduce this you need a plan something like the one below (I
think this is a really bad plan, but you can use it to test this
particular case).

MergeJoin
-> Index Scan
-> Gather Merge
->Parallel Index Scan

So if the inner node is just an IndexScan, which supports Mark/Restore,
then we don't need to insert any Materialize node. But once we put a
Gather Merge there (which doesn't support Mark/Restore), we need a
Materialize node on top of it. Therefore, I think the plan should become
like this.
(But anyway, if we had a Gather instead of the Gather Merge we would
require a Sort node on top of the Gather, and a Materialize is obviously
cheaper than a Sort.)

MergeJoin
-> Index Scan
-> Materialize
-> Gather Merge (Does not support mark/restore)
->Parallel Index Scan
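
For anyone who wants to try to reproduce that shape, here is a rough sketch
of the kind of setup one might use (the tables t1/t2 and their indexes are
hypothetical, and whether the planner actually puts Gather Merge on the
inner side depends entirely on the costs):

-- assume t1 and t2 each have a btree index on column a
set enable_hashjoin = off;
set enable_nestloop = off;
set parallel_setup_cost = 0;
set parallel_tuple_cost = 0;
set max_parallel_workers_per_gather = 4;
explain (costs off)
select * from t1 join t2 on t1.a = t2.a;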

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


#47Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#46)
Re: Gather Merge

On Mon, Feb 20, 2017 at 1:58 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Mon, Feb 20, 2017 at 12:05 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

Thanks Amit for raising this point. I was not at all aware of mark/restore.
I tried to come up with the case, but haven't found such case.

For now here is the patch with comment update.

I think for reproducing this you need plan something like below (I
think this is a really bad plan, but you can use to test this
particular case).

MergeJoin
-> Index Scan
-> Gather Merge
->Parallel Index Scan

So if only IndexScan node is there as a inner node which support
Mark/Restore then we don't need to insert any materialize node. But
after we put Gather Merge (which don't support Mark/Restore) then we
need a materialize node on top of that. Therefore, plan should become
like this, I think so.
(But anyway if we have the Gather instead of the GatherMerge we would
required a Sort node on top of the Gather and Materialize is obviously
cheaper than the Sort.)

MergeJoin
-> Index Scan
-> Materialize
-> Gather Merge (Does not support mark/restore)
->Parallel Index Scan

Yes, that's exactly what will happen. However, we are not sure how often
such a plan (Gather Merge on the inner side of a merge join) will be
helpful, and whether adding Mark/Restore support would make it any faster
than just adding a Materialize on top of the Gather Merge. So it seems
better not to go there unless we see some use for it.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#48Robert Haas
robertmhaas@gmail.com
In reply to: Rushabh Lathia (#45)
Re: Gather Merge

On Mon, Feb 20, 2017 at 1:35 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

Thanks Amit for raising this point. I was not at all aware of mark/restore.
I tried to come up with the case, but haven't found such case.

For now here is the patch with comment update.

Sorry for the delay in getting back to this. This seems to need minor
rebasing again.

A few other points:

ExecEndGatherMerge needs to be patched along the lines of
acf555bc53acb589b5a2827e65d655fa8c9adee0.

@@ -2290,7 +2376,6 @@ create_limit_plan(PlannerInfo *root, LimitPath
*best_path, int flags)
return plan;
}

-
/*****************************************************************************
*
* BASE-RELATION SCAN METHODS

Unnecessary.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#49Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Robert Haas (#48)
1 attachment(s)
Re: Gather Merge

On Thu, Mar 9, 2017 at 8:40 AM, Robert Haas <robertmhaas@gmail.com> wrote:

On Mon, Feb 20, 2017 at 1:35 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

Thanks Amit for raising this point. I was not at all aware of

mark/restore.

I tried to come up with the case, but haven't found such case.

For now here is the patch with comment update.

Sorry for the delay in getting back to this. This seems to need minor
rebasing again.

Done.

A few other points:

ExecEndGatherMerge needs to be patched along the lines of
acf555bc53acb589b5a2827e65d655fa8c9adee0.

Done.

@@ -2290,7 +2376,6 @@ create_limit_plan(PlannerInfo *root, LimitPath
*best_path, int flags)
return plan;
}

-
/***********************************************************
******************
*
* BASE-RELATION SCAN METHODS

Unnecessary.

Fixed.

Here is another version of the patch with the suggested changes.
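
For reviewers who want to exercise the new enable_gathermerge GUC by hand,
here is a rough sketch reusing the tenk1 query from the added regression
test; the exact plan shapes will of course vary with data and cost settings:

set enable_hashagg to off;
explain (costs off)
  select string4, count((unique2)) from tenk1 group by string4 order by string4;
-- turning the GUC off should push the planner back to a plan without Gather Merge
set enable_gathermerge to off;
explain (costs off)
  select string4, count((unique2)) from tenk1 group by string4 order by string4;
reset enable_gathermerge;
reset enable_hashagg;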

Thanks,

Rushabh Lathia
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachments:

gather-merge-v9.patch
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 1881236..69844e5 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -3497,6 +3497,20 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-enable-gathermerge" xreflabel="enable_gathermerge">
+      <term><varname>enable_gathermerge</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>enable_gathermerge</> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        Enables or disables the query planner's use of gather
+        merge plan types. The default is <literal>on</>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-enable-hashagg" xreflabel="enable_hashagg">
       <term><varname>enable_hashagg</varname> (<type>boolean</type>)
       <indexterm>
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 6fd82e9..c9b55ea 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -918,6 +918,9 @@ ExplainNode(PlanState *planstate, List *ancestors,
 		case T_Gather:
 			pname = sname = "Gather";
 			break;
+		case T_GatherMerge:
+			pname = sname = "Gather Merge";
+			break;
 		case T_IndexScan:
 			pname = sname = "Index Scan";
 			break;
@@ -1411,6 +1414,26 @@ ExplainNode(PlanState *planstate, List *ancestors,
 					ExplainPropertyBool("Single Copy", gather->single_copy, es);
 			}
 			break;
+		case T_GatherMerge:
+			{
+				GatherMerge *gm = (GatherMerge *) plan;
+
+				show_scan_qual(plan->qual, "Filter", planstate, ancestors, es);
+				if (plan->qual)
+					show_instrumentation_count("Rows Removed by Filter", 1,
+											   planstate, es);
+				ExplainPropertyInteger("Workers Planned",
+									   gm->num_workers, es);
+				if (es->analyze)
+				{
+					int			nworkers;
+
+					nworkers = ((GatherMergeState *) planstate)->nworkers_launched;
+					ExplainPropertyInteger("Workers Launched",
+										   nworkers, es);
+				}
+			}
+			break;
 		case T_FunctionScan:
 			if (es->verbose)
 			{
diff --git a/src/backend/executor/Makefile b/src/backend/executor/Makefile
index a9893c2..d281906 100644
--- a/src/backend/executor/Makefile
+++ b/src/backend/executor/Makefile
@@ -20,7 +20,7 @@ OBJS = execAmi.o execCurrent.o execGrouping.o execIndexing.o execJunk.o \
        nodeBitmapHeapscan.o nodeBitmapIndexscan.o \
        nodeCustom.o nodeFunctionscan.o nodeGather.o \
        nodeHash.o nodeHashjoin.o nodeIndexscan.o nodeIndexonlyscan.o \
-       nodeLimit.o nodeLockRows.o \
+       nodeLimit.o nodeLockRows.o nodeGatherMerge.o \
        nodeMaterial.o nodeMergeAppend.o nodeMergejoin.o nodeModifyTable.o \
        nodeNestloop.o nodeProjectSet.o nodeRecursiveunion.o nodeResult.o \
        nodeSamplescan.o nodeSeqscan.o nodeSetOp.o nodeSort.o nodeUnique.o \
diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index 468f50e..80c77ad 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -89,6 +89,7 @@
 #include "executor/nodeForeignscan.h"
 #include "executor/nodeFunctionscan.h"
 #include "executor/nodeGather.h"
+#include "executor/nodeGatherMerge.h"
 #include "executor/nodeGroup.h"
 #include "executor/nodeHash.h"
 #include "executor/nodeHashjoin.h"
@@ -326,6 +327,11 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
 												  estate, eflags);
 			break;
 
+		case T_GatherMerge:
+			result = (PlanState *) ExecInitGatherMerge((GatherMerge *) node,
+													   estate, eflags);
+			break;
+
 		case T_Hash:
 			result = (PlanState *) ExecInitHash((Hash *) node,
 												estate, eflags);
@@ -535,6 +541,10 @@ ExecProcNode(PlanState *node)
 			result = ExecGather((GatherState *) node);
 			break;
 
+		case T_GatherMergeState:
+			result = ExecGatherMerge((GatherMergeState *) node);
+			break;
+
 		case T_HashState:
 			result = ExecHash((HashState *) node);
 			break;
@@ -697,6 +707,10 @@ ExecEndNode(PlanState *node)
 			ExecEndGather((GatherState *) node);
 			break;
 
+		case T_GatherMergeState:
+			ExecEndGatherMerge((GatherMergeState *) node);
+			break;
+
 		case T_IndexScanState:
 			ExecEndIndexScan((IndexScanState *) node);
 			break;
@@ -842,6 +856,9 @@ ExecShutdownNode(PlanState *node)
 		case T_CustomScanState:
 			ExecShutdownCustomScan((CustomScanState *) node);
 			break;
+		case T_GatherMergeState:
+			ExecShutdownGatherMerge((GatherMergeState *) node);
+			break;
 		default:
 			break;
 	}
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
new file mode 100644
index 0000000..62a6b18
--- /dev/null
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -0,0 +1,687 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.c
+ *		Scan a plan in multiple workers, and do order-preserving merge.
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/backend/executor/nodeGatherMerge.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/relscan.h"
+#include "access/xact.h"
+#include "executor/execdebug.h"
+#include "executor/execParallel.h"
+#include "executor/nodeGatherMerge.h"
+#include "executor/nodeSubplan.h"
+#include "executor/tqueue.h"
+#include "lib/binaryheap.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+#include "utils/rel.h"
+
+/*
+ * Tuple array for each worker
+ */
+typedef struct GMReaderTupleBuffer
+{
+	HeapTuple  *tuple;
+	int			readCounter;
+	int			nTuples;
+	bool		done;
+}	GMReaderTupleBuffer;
+
+/*
+ * When we read tuples from workers, it's a good idea to read several at once
+ * for efficiency when possible: this minimizes context-switching overhead.
+ * But reading too many at a time wastes memory without improving performance.
+ */
+#define MAX_TUPLE_STORE 10
+
+static int32 heap_compare_slots(Datum a, Datum b, void *arg);
+static TupleTableSlot *gather_merge_getnext(GatherMergeState *gm_state);
+static HeapTuple gm_readnext_tuple(GatherMergeState *gm_state, int nreader,
+				  bool nowait, bool *done);
+static void gather_merge_init(GatherMergeState *gm_state);
+static void ExecShutdownGatherMergeWorkers(GatherMergeState *node);
+static bool gather_merge_readnext(GatherMergeState *gm_state, int reader,
+					  bool nowait);
+static void form_tuple_array(GatherMergeState *gm_state, int reader);
+
+/* ----------------------------------------------------------------
+ *		ExecInitGatherMerge
+ * ----------------------------------------------------------------
+ */
+GatherMergeState *
+ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags)
+{
+	GatherMergeState *gm_state;
+	Plan	   *outerNode;
+	bool		hasoid;
+	TupleDesc	tupDesc;
+
+	/* Gather merge node doesn't have innerPlan node. */
+	Assert(innerPlan(node) == NULL);
+
+	/*
+	 * create state structure
+	 */
+	gm_state = makeNode(GatherMergeState);
+	gm_state->ps.plan = (Plan *) node;
+	gm_state->ps.state = estate;
+
+	/*
+	 * Miscellaneous initialization
+	 *
+	 * create expression context for node
+	 */
+	ExecAssignExprContext(estate, &gm_state->ps);
+
+	/*
+	 * initialize child expressions
+	 */
+	gm_state->ps.targetlist = (List *)
+		ExecInitExpr((Expr *) node->plan.targetlist,
+					 (PlanState *) gm_state);
+	gm_state->ps.qual = (List *)
+		ExecInitExpr((Expr *) node->plan.qual,
+					 (PlanState *) gm_state);
+
+	/*
+	 * tuple table initialization
+	 */
+	ExecInitResultTupleSlot(estate, &gm_state->ps);
+
+	/*
+	 * now initialize outer plan
+	 */
+	outerNode = outerPlan(node);
+	outerPlanState(gm_state) = ExecInitNode(outerNode, estate, eflags);
+
+	/*
+	 * Initialize result tuple type and projection info.
+	 */
+	ExecAssignResultTypeFromTL(&gm_state->ps);
+	ExecAssignProjectionInfo(&gm_state->ps, NULL);
+
+	gm_state->gm_initialized = false;
+
+	/*
+	 * initialize sort-key information
+	 */
+	if (node->numCols)
+	{
+		int			i;
+
+		gm_state->gm_nkeys = node->numCols;
+		gm_state->gm_sortkeys =
+			palloc0(sizeof(SortSupportData) * node->numCols);
+
+		for (i = 0; i < node->numCols; i++)
+		{
+			SortSupport sortKey = gm_state->gm_sortkeys + i;
+
+			sortKey->ssup_cxt = CurrentMemoryContext;
+			sortKey->ssup_collation = node->collations[i];
+			sortKey->ssup_nulls_first = node->nullsFirst[i];
+			sortKey->ssup_attno = node->sortColIdx[i];
+
+			/*
+			 * We don't perform abbreviated key conversion here, for the same
+			 * reasons that it isn't used in MergeAppend
+			 */
+			sortKey->abbreviate = false;
+
+			PrepareSortSupportFromOrderingOp(node->sortOperators[i], sortKey);
+		}
+	}
+
+	/*
+	 * store the tuple descriptor into gather merge state, so we can use it
+	 * later while initializing the gather merge slots.
+	 */
+	if (!ExecContextForcesOids(&gm_state->ps, &hasoid))
+		hasoid = false;
+	tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid);
+	gm_state->tupDesc = tupDesc;
+
+	return gm_state;
+}
+
+/* ----------------------------------------------------------------
+ *		ExecGatherMerge(node)
+ *
+ *		Scans the relation via multiple workers and returns
+ *		the next qualifying tuple.
+ * ----------------------------------------------------------------
+ */
+TupleTableSlot *
+ExecGatherMerge(GatherMergeState *node)
+{
+	TupleTableSlot *slot;
+	ExprContext *econtext;
+	int			i;
+
+	/*
+	 * As with Gather, we don't launch workers until this node is actually
+	 * executed.
+	 */
+	if (!node->initialized)
+	{
+		EState	   *estate = node->ps.state;
+		GatherMerge *gm = (GatherMerge *) node->ps.plan;
+
+		/*
+		 * Sometimes we might have to run without parallelism; but if parallel
+		 * mode is active then we can try to fire up some workers.
+		 */
+		if (gm->num_workers > 0 && IsInParallelMode())
+		{
+			ParallelContext *pcxt;
+
+			/* Initialize data structures for workers. */
+			if (!node->pei)
+				node->pei = ExecInitParallelPlan(node->ps.lefttree,
+												 estate,
+												 gm->num_workers);
+
+			/* Try to launch workers. */
+			pcxt = node->pei->pcxt;
+			LaunchParallelWorkers(pcxt);
+			node->nworkers_launched = pcxt->nworkers_launched;
+
+			/* Set up tuple queue readers to read the results. */
+			if (pcxt->nworkers_launched > 0)
+			{
+				node->nreaders = 0;
+				node->reader = palloc(pcxt->nworkers_launched *
+									  sizeof(TupleQueueReader *));
+
+				Assert(gm->numCols);
+
+				for (i = 0; i < pcxt->nworkers_launched; ++i)
+				{
+					shm_mq_set_handle(node->pei->tqueue[i],
+									  pcxt->worker[i].bgwhandle);
+					node->reader[node->nreaders++] =
+						CreateTupleQueueReader(node->pei->tqueue[i],
+											   node->tupDesc);
+				}
+			}
+			else
+			{
+				/* No workers?	Then never mind. */
+				ExecShutdownGatherMergeWorkers(node);
+			}
+		}
+
+		/* always allow leader to participate */
+		node->need_to_scan_locally = true;
+		node->initialized = true;
+	}
+
+	/*
+	 * Reset per-tuple memory context to free any expression evaluation
+	 * storage allocated in the previous tuple cycle.
+	 */
+	econtext = node->ps.ps_ExprContext;
+	ResetExprContext(econtext);
+
+	/*
+	 * Get next tuple, either from one of our workers, or by running the
+	 * plan ourselves.
+	 */
+	slot = gather_merge_getnext(node);
+	if (TupIsNull(slot))
+		return NULL;
+
+	/*
+	 * form the result tuple using ExecProject(), and return it --- unless
+	 * the projection produces an empty set, in which case we must loop
+	 * back around for another tuple
+	 */
+	econtext->ecxt_outertuple = slot;
+	return ExecProject(node->ps.ps_ProjInfo);
+}
+
+/* ----------------------------------------------------------------
+ *		ExecEndGatherMerge
+ *
+ *		frees any storage allocated through C routines.
+ * ----------------------------------------------------------------
+ */
+void
+ExecEndGatherMerge(GatherMergeState *node)
+{
+	ExecEndNode(outerPlanState(node));      /* let children clean up first */
+	ExecShutdownGatherMerge(node);
+	ExecFreeExprContext(&node->ps);
+	ExecClearTuple(node->ps.ps_ResultTupleSlot);
+}
+
+/* ----------------------------------------------------------------
+ *		ExecShutdownGatherMerge
+ *
+ *		Destroy the setup for parallel workers including parallel context.
+ *		Collect all the stats after workers are stopped, else some work
+ *		done by workers won't be accounted.
+ * ----------------------------------------------------------------
+ */
+void
+ExecShutdownGatherMerge(GatherMergeState *node)
+{
+	ExecShutdownGatherMergeWorkers(node);
+
+	/* Now destroy the parallel context. */
+	if (node->pei != NULL)
+	{
+		ExecParallelCleanup(node->pei);
+		node->pei = NULL;
+	}
+}
+
+/* ----------------------------------------------------------------
+ *		ExecShutdownGatherMergeWorkers
+ *
+ *		Destroy the parallel workers.  Collect all the stats after
+ *		workers are stopped, else some work done by workers won't be
+ *		accounted.
+ * ----------------------------------------------------------------
+ */
+static void
+ExecShutdownGatherMergeWorkers(GatherMergeState *node)
+{
+	/* Shut down tuple queue readers before shutting down workers. */
+	if (node->reader != NULL)
+	{
+		int			i;
+
+		for (i = 0; i < node->nreaders; ++i)
+			if (node->reader[i])
+				DestroyTupleQueueReader(node->reader[i]);
+
+		pfree(node->reader);
+		node->reader = NULL;
+	}
+
+	/* Now shut down the workers. */
+	if (node->pei != NULL)
+		ExecParallelFinish(node->pei);
+}
+
+/* ----------------------------------------------------------------
+ *		ExecReScanGatherMerge
+ *
+ *		Re-initialize the workers and rescan the relation via them.
+ * ----------------------------------------------------------------
+ */
+void
+ExecReScanGatherMerge(GatherMergeState *node)
+{
+	/*
+	 * Re-initialize the parallel workers to perform rescan of relation. We
+	 * want to gracefully shut down all the workers so that they can
+	 * propagate any error or other information to the master backend before
+	 * dying.  Parallel context will be reused for rescan.
+	 */
+	ExecShutdownGatherMergeWorkers(node);
+
+	node->initialized = false;
+
+	if (node->pei)
+		ExecParallelReinitialize(node->pei);
+
+	ExecReScan(node->ps.lefttree);
+}
+
+/*
+ * Initialize the Gather merge tuple read.
+ *
+ * Pull at least a single tuple from each worker + leader and set up the heap.
+ */
+static void
+gather_merge_init(GatherMergeState *gm_state)
+{
+	int			nreaders = gm_state->nreaders;
+	bool		initialize = true;
+	int			i;
+
+	/*
+	 * Allocate gm_slots: one slot for each worker, plus one for the leader.
+	 * The last slot is always the leader's.  The leader reads tuples by
+	 * calling ExecProcNode(), which returns a TupleTableSlot that is
+	 * assigned directly into gm_slots, so just initialize the leader's slot
+	 * to NULL.  The worker slots are initialized below via
+	 * ExecInitExtraTupleSlot().
+	 */
+	gm_state->gm_slots =
+		palloc((gm_state->nreaders + 1) * sizeof(TupleTableSlot *));
+	gm_state->gm_slots[gm_state->nreaders] = NULL;
+
+	/* Initialize the tuple slot and tuple array for each worker */
+	gm_state->gm_tuple_buffers =
+		(GMReaderTupleBuffer *) palloc0(sizeof(GMReaderTupleBuffer) *
+										(gm_state->nreaders + 1));
+	for (i = 0; i < gm_state->nreaders; i++)
+	{
+		/* Allocate the tuple array with MAX_TUPLE_STORE size */
+		gm_state->gm_tuple_buffers[i].tuple =
+			(HeapTuple *) palloc0(sizeof(HeapTuple) * MAX_TUPLE_STORE);
+
+		/* Initialize slot for worker */
+		gm_state->gm_slots[i] = ExecInitExtraTupleSlot(gm_state->ps.state);
+		ExecSetSlotDescriptor(gm_state->gm_slots[i],
+							  gm_state->tupDesc);
+	}
+
+	/* Allocate the resources for the merge */
+	gm_state->gm_heap = binaryheap_allocate(gm_state->nreaders + 1,
+											heap_compare_slots,
+											gm_state);
+
+	/*
+	 * First, try to read a tuple from each participant (workers and leader)
+	 * in nowait mode, so that reading is initialized for every participant.
+	 * After this, if any active worker was unable to produce a tuple,
+	 * re-read from it, this time in wait mode.  For workers that produced a
+	 * tuple in the earlier loop and are still active, just try to fill the
+	 * tuple array if more tuples are available.
+	 */
+reread:
+	for (i = 0; i < nreaders + 1; i++)
+	{
+		if (!gm_state->gm_tuple_buffers[i].done &&
+			(TupIsNull(gm_state->gm_slots[i]) ||
+			 gm_state->gm_slots[i]->tts_isempty))
+		{
+			if (gather_merge_readnext(gm_state, i, initialize))
+			{
+				binaryheap_add_unordered(gm_state->gm_heap,
+										 Int32GetDatum(i));
+			}
+		}
+		else
+			form_tuple_array(gm_state, i);
+	}
+	initialize = false;
+
+	for (i = 0; i < nreaders; i++)
+		if (!gm_state->gm_tuple_buffers[i].done &&
+			(TupIsNull(gm_state->gm_slots[i]) ||
+			 gm_state->gm_slots[i]->tts_isempty))
+			goto reread;
+
+	binaryheap_build(gm_state->gm_heap);
+	gm_state->gm_initialized = true;
+}
+
+/*
+ * Clear out the tuple table slot for each gather merge
+ * input and return a cleared slot.
+ */
+static TupleTableSlot *
+gather_merge_clear_slots(GatherMergeState *gm_state)
+{
+	int			i;
+
+	for (i = 0; i < gm_state->nreaders; i++)
+	{
+		pfree(gm_state->gm_tuple_buffers[i].tuple);
+		gm_state->gm_slots[i] = ExecClearTuple(gm_state->gm_slots[i]);
+	}
+
+	/* Free tuple array as we don't need it any more */
+	pfree(gm_state->gm_tuple_buffers);
+	/* Free the binaryheap, which was created for sort */
+	binaryheap_free(gm_state->gm_heap);
+
+	/* return any clear slot */
+	return gm_state->gm_slots[0];
+}
+
+/*
+ * Read the next tuple for gather merge.
+ *
+ * Fetch the sorted tuple out of the heap.
+ */
+static TupleTableSlot *
+gather_merge_getnext(GatherMergeState *gm_state)
+{
+	int			i;
+
+	/*
+	 * First time through: pull the first tuple from each participant, and set
+	 * up the heap.
+	 */
+	if (gm_state->gm_initialized == false)
+		gather_merge_init(gm_state);
+	else
+	{
+		/*
+		 * Otherwise, pull the next tuple from whichever participant we
+		 * returned from last time, and reinsert the index into the heap,
+		 * because it might now compare differently against the existing
+		 * elements of the heap.
+		 */
+		i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+
+		if (gather_merge_readnext(gm_state, i, false))
+			binaryheap_replace_first(gm_state->gm_heap, Int32GetDatum(i));
+		else
+			(void) binaryheap_remove_first(gm_state->gm_heap);
+	}
+
+	if (binaryheap_empty(gm_state->gm_heap))
+	{
+		/* All the queues are exhausted, and so is the heap */
+		return gather_merge_clear_slots(gm_state);
+	}
+	else
+	{
+		i = DatumGetInt32(binaryheap_first(gm_state->gm_heap));
+		return gm_state->gm_slots[i];
+	}
+
+	return gather_merge_clear_slots(gm_state);
+}
+
+/*
+ * Read tuples for the given reader in nowait mode and store them in the
+ * tuple array.
+ */
+static void
+form_tuple_array(GatherMergeState *gm_state, int reader)
+{
+	GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+	int			i;
+
+	/* Last slot is for leader and we don't build tuple array for leader */
+	if (reader == gm_state->nreaders)
+		return;
+
+	/*
+	 * If we've already read all the tuples from the tuple array, reset the
+	 * counters to zero.
+	 */
+	if (tuple_buffer->nTuples == tuple_buffer->readCounter)
+		tuple_buffer->nTuples = tuple_buffer->readCounter = 0;
+
+	/* Tuple array is already full? */
+	if (tuple_buffer->nTuples == MAX_TUPLE_STORE)
+		return;
+
+	for (i = tuple_buffer->nTuples; i < MAX_TUPLE_STORE; i++)
+	{
+		tuple_buffer->tuple[i] = heap_copytuple(gm_readnext_tuple(gm_state,
+																  reader,
+																  false,
+													   &tuple_buffer->done));
+		if (!HeapTupleIsValid(tuple_buffer->tuple[i]))
+			break;
+		tuple_buffer->nTuples++;
+	}
+}
+
+/*
+ * Store the next tuple for a given reader into the appropriate slot.
+ *
+ * Returns false if the reader is exhausted, and true otherwise.
+ */
+static bool
+gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait)
+{
+	GMReaderTupleBuffer *tuple_buffer;
+	HeapTuple	tup = NULL;
+
+	/*
+	 * If we're being asked to generate a tuple from the leader, then we
+	 * just call ExecProcNode as normal to produce one.
+	 */
+	if (gm_state->nreaders == reader)
+	{
+		if (gm_state->need_to_scan_locally)
+		{
+			PlanState  *outerPlan = outerPlanState(gm_state);
+			TupleTableSlot *outerTupleSlot;
+
+			outerTupleSlot = ExecProcNode(outerPlan);
+
+			if (!TupIsNull(outerTupleSlot))
+			{
+				gm_state->gm_slots[reader] = outerTupleSlot;
+				return true;
+			}
+			gm_state->gm_tuple_buffers[reader].done = true;
+			gm_state->need_to_scan_locally = false;
+		}
+		return false;
+	}
+
+	/* Otherwise, check the state of the relevant tuple buffer. */
+	tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+
+	if (tuple_buffer->nTuples > tuple_buffer->readCounter)
+	{
+		/* Return any tuple previously read that is still buffered. */
+		tuple_buffer = &gm_state->gm_tuple_buffers[reader];
+		tup = tuple_buffer->tuple[tuple_buffer->readCounter++];
+	}
+	else if (tuple_buffer->done)
+	{
+		/* Reader is known to be exhausted. */
+		DestroyTupleQueueReader(gm_state->reader[reader]);
+		gm_state->reader[reader] = NULL;
+		return false;
+	}
+	else
+	{
+		/* Read and buffer next tuple. */
+		tup = heap_copytuple(gm_readnext_tuple(gm_state,
+											   reader,
+											   nowait,
+											   &tuple_buffer->done));
+
+		/*
+		 * Attempt to read more tuples in nowait mode and store them in
+		 * the tuple array.
+		 */
+		if (HeapTupleIsValid(tup))
+			form_tuple_array(gm_state, reader);
+		else
+			return false;
+	}
+
+	Assert(HeapTupleIsValid(tup));
+
+	/* Build the TupleTableSlot for the given tuple */
+	ExecStoreTuple(tup,			/* tuple to store */
+				   gm_state->gm_slots[reader],	/* slot in which to store the
+												 * tuple */
+				   InvalidBuffer,		/* buffer associated with this tuple */
+				   true);		/* pfree this pointer if not from heap */
+
+	return true;
+}
+
+/*
+ * Attempt to read a tuple from the given reader.
+ */
+static HeapTuple
+gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait,
+				  bool *done)
+{
+	TupleQueueReader *reader;
+	HeapTuple	tup = NULL;
+	MemoryContext oldContext;
+	MemoryContext tupleContext;
+
+	tupleContext = gm_state->ps.ps_ExprContext->ecxt_per_tuple_memory;
+
+	if (done != NULL)
+		*done = false;
+
+	/* Check for async events, particularly messages from workers. */
+	CHECK_FOR_INTERRUPTS();
+
+	/* Attempt to read a tuple. */
+	reader = gm_state->reader[nreader];
+
+	/* Run TupleQueueReaders in per-tuple context */
+	oldContext = MemoryContextSwitchTo(tupleContext);
+	tup = TupleQueueReaderNext(reader, nowait, done);
+	MemoryContextSwitchTo(oldContext);
+
+	return tup;
+}
+
+/*
+ * We have one slot for each item in the heap array.  We use SlotNumber
+ * to store slot indexes.  This doesn't actually provide any formal
+ * type-safety, but it makes the code more self-documenting.
+ */
+typedef int32 SlotNumber;
+
+/*
+ * Compare the tuples in the two given slots.
+ */
+static int32
+heap_compare_slots(Datum a, Datum b, void *arg)
+{
+	GatherMergeState *node = (GatherMergeState *) arg;
+	SlotNumber	slot1 = DatumGetInt32(a);
+	SlotNumber	slot2 = DatumGetInt32(b);
+
+	TupleTableSlot *s1 = node->gm_slots[slot1];
+	TupleTableSlot *s2 = node->gm_slots[slot2];
+	int			nkey;
+
+	Assert(!TupIsNull(s1));
+	Assert(!TupIsNull(s2));
+
+	for (nkey = 0; nkey < node->gm_nkeys; nkey++)
+	{
+		SortSupport sortKey = node->gm_sortkeys + nkey;
+		AttrNumber	attno = sortKey->ssup_attno;
+		Datum		datum1,
+					datum2;
+		bool		isNull1,
+					isNull2;
+		int			compare;
+
+		datum1 = slot_getattr(s1, attno, &isNull1);
+		datum2 = slot_getattr(s2, attno, &isNull2);
+
+		compare = ApplySortComparator(datum1, isNull1,
+									  datum2, isNull2,
+									  sortKey);
+		if (compare != 0)
+			return -compare;
+	}
+	return 0;
+}
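
For reference, here is a minimal, self-contained C sketch of the k-way merge
technique the executor code above implements: a binary heap of stream indexes,
keyed on each stream's current head value, whose root is replaced (or removed,
once a stream is exhausted) after every output value, mirroring
binaryheap_replace_first() and binaryheap_remove_first().  This is an
illustration only, not part of the patch; the stream data and the identifiers
streams, pos, heap and sift_down are invented for the example, with sorted
arrays of ints standing in for the per-worker tuple queues and sort keys.

/* gm_merge_sketch.c: illustration only, not part of the patch */
#include <stdio.h>

#define NSTREAMS	3			/* stands in for workers + leader */
#define STREAMLEN	4

static const int streams[NSTREAMS][STREAMLEN] = {
	{1, 4, 7, 10},
	{2, 5, 8, 11},
	{3, 6, 9, 12},
};
static int	pos[NSTREAMS];		/* next unread element in each stream */
static int	heap[NSTREAMS];		/* binary heap of stream indexes */
static int	heap_size;

static int
head(int s)
{
	return streams[s][pos[s]];
}

/* restore the min-heap property starting at position i */
static void
sift_down(int i)
{
	for (;;)
	{
		int			l = 2 * i + 1;
		int			r = 2 * i + 2;
		int			m = i;

		if (l < heap_size && head(heap[l]) < head(heap[m]))
			m = l;
		if (r < heap_size && head(heap[r]) < head(heap[m]))
			m = r;
		if (m == i)
			break;
		{
			int			tmp = heap[i];

			heap[i] = heap[m];
			heap[m] = tmp;
		}
		i = m;
	}
}

int
main(void)
{
	int			i;

	/* like gather_merge_init: one heap entry per stream, then heapify */
	for (i = 0; i < NSTREAMS; i++)
		heap[heap_size++] = i;
	for (i = heap_size / 2 - 1; i >= 0; i--)
		sift_down(i);

	/* like gather_merge_getnext: emit root, advance that stream, re-sift */
	while (heap_size > 0)
	{
		int			s = heap[0];

		printf("%d ", head(s));
		if (++pos[s] == STREAMLEN)
			heap[0] = heap[--heap_size];	/* stream exhausted: drop it */
		sift_down(0);
	}
	printf("\n");
	return 0;
}

With 4 workers plus the leader there are N = 5 such streams, so producing each
output value takes about log2(5) comparisons, which is what the costing code
later in the patch charges for.
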
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index ac8e50e..bfc2ac1 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -360,6 +360,31 @@ _copyGather(const Gather *from)
 	return newnode;
 }
 
+/*
+ * _copyGatherMerge
+ */
+static GatherMerge *
+_copyGatherMerge(const GatherMerge *from)
+{
+	GatherMerge	   *newnode = makeNode(GatherMerge);
+
+	/*
+	 * copy node superclass fields
+	 */
+	CopyPlanFields((const Plan *) from, (Plan *) newnode);
+
+	/*
+	 * copy remainder of node
+	 */
+	COPY_SCALAR_FIELD(num_workers);
+	COPY_SCALAR_FIELD(numCols);
+	COPY_POINTER_FIELD(sortColIdx, from->numCols * sizeof(AttrNumber));
+	COPY_POINTER_FIELD(sortOperators, from->numCols * sizeof(Oid));
+	COPY_POINTER_FIELD(collations, from->numCols * sizeof(Oid));
+	COPY_POINTER_FIELD(nullsFirst, from->numCols * sizeof(bool));
+
+	return newnode;
+}
 
 /*
  * CopyScanFields
@@ -4594,6 +4619,9 @@ copyObject(const void *from)
 		case T_Gather:
 			retval = _copyGather(from);
 			break;
+		case T_GatherMerge:
+			retval = _copyGatherMerge(from);
+			break;
 		case T_SeqScan:
 			retval = _copySeqScan(from);
 			break;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 825a7b2..7418fbe 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -458,6 +458,35 @@ _outGather(StringInfo str, const Gather *node)
 }
 
 static void
+_outGatherMerge(StringInfo str, const GatherMerge *node)
+{
+	int		i;
+
+	WRITE_NODE_TYPE("GATHERMERGE");
+
+	_outPlanInfo(str, (const Plan *) node);
+
+	WRITE_INT_FIELD(num_workers);
+	WRITE_INT_FIELD(numCols);
+
+	appendStringInfoString(str, " :sortColIdx");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %d", node->sortColIdx[i]);
+
+	appendStringInfoString(str, " :sortOperators");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %u", node->sortOperators[i]);
+
+	appendStringInfoString(str, " :collations");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %u", node->collations[i]);
+
+	appendStringInfoString(str, " :nullsFirst");
+	for (i = 0; i < node->numCols; i++)
+		appendStringInfo(str, " %s", booltostr(node->nullsFirst[i]));
+}
+
+static void
 _outScan(StringInfo str, const Scan *node)
 {
 	WRITE_NODE_TYPE("SCAN");
@@ -2017,6 +2046,17 @@ _outLimitPath(StringInfo str, const LimitPath *node)
 }
 
 static void
+_outGatherMergePath(StringInfo str, const GatherMergePath *node)
+{
+	WRITE_NODE_TYPE("GATHERMERGEPATH");
+
+	_outPathInfo(str, (const Path *) node);
+
+	WRITE_NODE_FIELD(subpath);
+	WRITE_INT_FIELD(num_workers);
+}
+
+static void
 _outNestPath(StringInfo str, const NestPath *node)
 {
 	WRITE_NODE_TYPE("NESTPATH");
@@ -3473,6 +3513,9 @@ outNode(StringInfo str, const void *obj)
 			case T_Gather:
 				_outGather(str, obj);
 				break;
+			case T_GatherMerge:
+				_outGatherMerge(str, obj);
+				break;
 			case T_Scan:
 				_outScan(str, obj);
 				break;
@@ -3809,6 +3852,9 @@ outNode(StringInfo str, const void *obj)
 			case T_LimitPath:
 				_outLimitPath(str, obj);
 				break;
+			case T_GatherMergePath:
+				_outGatherMergePath(str, obj);
+				break;
 			case T_NestPath:
 				_outNestPath(str, obj);
 				break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 8f39d93..d3bbc02 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -2138,6 +2138,26 @@ _readGather(void)
 }
 
 /*
+ * _readGatherMerge
+ */
+static GatherMerge *
+_readGatherMerge(void)
+{
+	READ_LOCALS(GatherMerge);
+
+	ReadCommonPlan(&local_node->plan);
+
+	READ_INT_FIELD(num_workers);
+	READ_INT_FIELD(numCols);
+	READ_ATTRNUMBER_ARRAY(sortColIdx, local_node->numCols);
+	READ_OID_ARRAY(sortOperators, local_node->numCols);
+	READ_OID_ARRAY(collations, local_node->numCols);
+	READ_BOOL_ARRAY(nullsFirst, local_node->numCols);
+
+	READ_DONE();
+}
+
+/*
  * _readHash
  */
 static Hash *
@@ -2577,6 +2597,8 @@ parseNodeString(void)
 		return_value = _readUnique();
 	else if (MATCH("GATHER", 6))
 		return_value = _readGather();
+	else if (MATCH("GATHERMERGE", 11))
+		return_value = _readGatherMerge();
 	else if (MATCH("HASH", 4))
 		return_value = _readHash();
 	else if (MATCH("SETOP", 5))
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index fbb2cda..b263359 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -2084,39 +2084,51 @@ set_worktable_pathlist(PlannerInfo *root, RelOptInfo *rel, RangeTblEntry *rte)
 
 /*
  * generate_gather_paths
- *		Generate parallel access paths for a relation by pushing a Gather on
- *		top of a partial path.
+ *		Generate parallel access paths for a relation by pushing a Gather or
+ *		Gather Merge on top of a partial path.
  *
  * This must not be called until after we're done creating all partial paths
  * for the specified relation.  (Otherwise, add_partial_path might delete a
- * path that some GatherPath has a reference to.)
+ * path that some GatherPath or GatherMergePath has a reference to.)
  */
 void
 generate_gather_paths(PlannerInfo *root, RelOptInfo *rel)
 {
 	Path	   *cheapest_partial_path;
 	Path	   *simple_gather_path;
+	ListCell   *lc;
 
 	/* If there are no partial paths, there's nothing to do here. */
 	if (rel->partial_pathlist == NIL)
 		return;
 
 	/*
-	 * The output of Gather is currently always unsorted, so there's only one
-	 * partial path of interest: the cheapest one.  That will be the one at
-	 * the front of partial_pathlist because of the way add_partial_path
-	 * works.
-	 *
-	 * Eventually, we should have a Gather Merge operation that can merge
-	 * multiple tuple streams together while preserving their ordering.  We
-	 * could usefully generate such a path from each partial path that has
-	 * non-NIL pathkeys.
+	 * The output of Gather is always unsorted, so there's only one partial
+	 * path of interest: the cheapest one.  That will be the one at the front
+	 * of partial_pathlist because of the way add_partial_path works.
 	 */
 	cheapest_partial_path = linitial(rel->partial_pathlist);
 	simple_gather_path = (Path *)
 		create_gather_path(root, rel, cheapest_partial_path, rel->reltarget,
 						   NULL, NULL);
 	add_path(rel, simple_gather_path);
+
+	/*
+	 * For each useful ordering, we can consider an order-preserving Gather
+	 * Merge.
+	 */
+	foreach (lc, rel->partial_pathlist)
+	{
+		Path   *subpath = (Path *) lfirst(lc);
+		GatherMergePath   *path;
+
+		if (subpath->pathkeys == NIL)
+			continue;
+
+		path = create_gather_merge_path(root, rel, subpath, rel->reltarget,
+										subpath->pathkeys, NULL, NULL);
+		add_path(rel, &path->path);
+	}
 }
 
 /*
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index 627e3f1..e78f3a8 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -126,6 +126,7 @@ bool		enable_nestloop = true;
 bool		enable_material = true;
 bool		enable_mergejoin = true;
 bool		enable_hashjoin = true;
+bool		enable_gathermerge = true;
 
 typedef struct
 {
@@ -373,6 +374,73 @@ cost_gather(GatherPath *path, PlannerInfo *root,
 }
 
 /*
+ * cost_gather_merge
+ *	  Determines and returns the cost of gather merge path.
+ *
+ * GatherMerge merges several pre-sorted input streams, using a heap that at
+ * any given instant holds the next tuple from each stream. If there are N
+ * streams, we need about N*log2(N) tuple comparisons to construct the heap at
+ * startup, and then for each output tuple, about log2(N) comparisons to
+ * replace the top heap entry with the next tuple from the same stream.
+ */
+void
+cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+				  RelOptInfo *rel, ParamPathInfo *param_info,
+				  Cost input_startup_cost, Cost input_total_cost,
+				  double *rows)
+{
+	Cost		startup_cost = 0;
+	Cost		run_cost = 0;
+	Cost		comparison_cost;
+	double		N;
+	double		logN;
+
+	/* Mark the path with the correct row estimate */
+	if (rows)
+		path->path.rows = *rows;
+	else if (param_info)
+		path->path.rows = param_info->ppi_rows;
+	else
+		path->path.rows = rel->rows;
+
+	if (!enable_gathermerge)
+		startup_cost += disable_cost;
+
+	/*
+	 * Add one to the number of workers to account for the leader.  This might
+	 * be overgenerous since the leader will do less work than other workers
+	 * in typical cases, but we'll go with it for now.
+	 */
+	Assert(path->num_workers > 0);
+	N = (double) path->num_workers + 1;
+	logN = LOG2(N);
+
+	/* Assumed cost per tuple comparison */
+	comparison_cost = 2.0 * cpu_operator_cost;
+
+	/* Heap creation cost */
+	startup_cost += comparison_cost * N * logN;
+
+	/* Per-tuple heap maintenance cost */
+	run_cost += path->path.rows * comparison_cost * logN;
+
+	/* small cost for heap management, like cost_merge_append */
+	run_cost += cpu_operator_cost * path->path.rows;
+
+	/*
+	 * Parallel setup and communication cost.  Since Gather Merge, unlike
+	 * Gather, requires us to block until a tuple is available from every
+	 * worker, we bump the IPC cost up a little bit as compared with Gather.
+	 * For lack of a better idea, charge an extra 5%.
+	 */
+	startup_cost += parallel_setup_cost;
+	run_cost += parallel_tuple_cost * path->path.rows * 1.05;
+
+	path->path.startup_cost = startup_cost + input_startup_cost;
+	path->path.total_cost = (startup_cost + run_cost + input_total_cost);
+}
+
+/*
  * cost_index
  *	  Determines and returns the cost of scanning a relation using an index.
  *
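
To see what these formulas amount to, here is a small standalone C sketch (an
illustration, not part of the patch) that plugs in PostgreSQL's default
cpu_operator_cost of 0.0025, an assumed 4 workers, and an assumed 200000
output rows.  It covers only the comparison-cost terms above, leaving out
parallel_setup_cost and the 5%-bumped parallel_tuple_cost charge:

/* gm_cost_sketch.c: illustration only, not part of the patch */
#include <math.h>
#include <stdio.h>

int
main(void)
{
	double		cpu_operator_cost = 0.0025;	/* PostgreSQL's default */
	double		rows = 200000.0;	/* assumed number of output rows */
	double		N = 4 + 1;			/* assumed 4 workers plus the leader */
	double		logN = log2(N);
	double		comparison_cost = 2.0 * cpu_operator_cost;

	/* heap creation: about N * log2(N) comparisons */
	double		startup = comparison_cost * N * logN;

	/* per-tuple heap maintenance, plus the small heap-management charge */
	double		run = rows * comparison_cost * logN
		+ cpu_operator_cost * rows;

	printf("startup = %g, run = %g\n", startup, run);
	return 0;
}

The heap terms grow only with log2(N), so adding workers makes the merge step
itself only slightly more expensive per output row.
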
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 8f8663c..e18c634 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -277,6 +277,8 @@ static ModifyTable *make_modifytable(PlannerInfo *root,
 				 List *resultRelations, List *subplans,
 				 List *withCheckOptionLists, List *returningLists,
 				 List *rowMarks, OnConflictExpr *onconflict, int epqParam);
+static GatherMerge *create_gather_merge_plan(PlannerInfo *root,
+						 GatherMergePath *best_path);
 
 
 /*
@@ -475,6 +477,10 @@ create_plan_recurse(PlannerInfo *root, Path *best_path, int flags)
 											  (LimitPath *) best_path,
 											  flags);
 			break;
+		case T_GatherMerge:
+			plan = (Plan *) create_gather_merge_plan(root,
+											  (GatherMergePath *) best_path);
+			break;
 		default:
 			elog(ERROR, "unrecognized node type: %d",
 				 (int) best_path->pathtype);
@@ -1452,6 +1458,86 @@ create_gather_plan(PlannerInfo *root, GatherPath *best_path)
 }
 
 /*
+ * create_gather_merge_plan
+ *
+ *	  Create a Gather Merge plan for 'best_path' and (recursively)
+ *	  plans for its subpaths.
+ */
+static GatherMerge *
+create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
+{
+	GatherMerge *gm_plan;
+	Plan	   *subplan;
+	List	   *pathkeys = best_path->path.pathkeys;
+	int			numsortkeys;
+	AttrNumber *sortColIdx;
+	Oid		   *sortOperators;
+	Oid		   *collations;
+	bool	   *nullsFirst;
+
+	/* As with Gather, it's best to project away columns in the workers. */
+	subplan = create_plan_recurse(root, best_path->subpath, CP_EXACT_TLIST);
+
+	/* See create_merge_append_plan for why there's no make_xxx function */
+	gm_plan = makeNode(GatherMerge);
+	gm_plan->plan.targetlist = subplan->targetlist;
+	gm_plan->num_workers = best_path->num_workers;
+	copy_generic_path_info(&gm_plan->plan, &best_path->path);
+
+	/* Gather Merge is pointless with no pathkeys; use Gather instead. */
+	Assert(pathkeys != NIL);
+
+	/* Compute sort column info, and adjust GatherMerge tlist as needed */
+	(void) prepare_sort_from_pathkeys(&gm_plan->plan, pathkeys,
+									  best_path->path.parent->relids,
+									  NULL,
+									  true,
+									  &gm_plan->numCols,
+									  &gm_plan->sortColIdx,
+									  &gm_plan->sortOperators,
+									  &gm_plan->collations,
+									  &gm_plan->nullsFirst);
+
+
+	/* Compute sort column info, and adjust subplan's tlist as needed */
+	subplan = prepare_sort_from_pathkeys(subplan, pathkeys,
+										 best_path->subpath->parent->relids,
+										 gm_plan->sortColIdx,
+										 false,
+										 &numsortkeys,
+										 &sortColIdx,
+										 &sortOperators,
+										 &collations,
+										 &nullsFirst);
+
+	/* As for MergeAppend, check that we got the same sort key information. */
+	Assert(numsortkeys == gm_plan->numCols);
+	if (memcmp(sortColIdx, gm_plan->sortColIdx,
+			   numsortkeys * sizeof(AttrNumber)) != 0)
+		elog(ERROR, "GatherMerge child's targetlist doesn't match GatherMerge");
+	Assert(memcmp(sortOperators, gm_plan->sortOperators,
+				  numsortkeys * sizeof(Oid)) == 0);
+	Assert(memcmp(collations, gm_plan->collations,
+				  numsortkeys * sizeof(Oid)) == 0);
+	Assert(memcmp(nullsFirst, gm_plan->nullsFirst,
+				  numsortkeys * sizeof(bool)) == 0);
+
+	/* Now, insert a Sort node if subplan isn't sufficiently ordered */
+	if (!pathkeys_contained_in(pathkeys, best_path->subpath->pathkeys))
+		subplan = (Plan *) make_sort(subplan, numsortkeys,
+									 sortColIdx, sortOperators,
+									 collations, nullsFirst);
+
+	/* Now insert the subplan under GatherMerge. */
+	gm_plan->plan.lefttree = subplan;
+
+	/* use parallel mode for parallel plans. */
+	root->glob->parallelModeNeeded = true;
+
+	return gm_plan;
+}
+
+/*
  * create_projection_plan
  *
  *	  Create a plan tree to do a projection step and (recursively) plans
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 1636a69..209f769 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -3663,8 +3663,7 @@ create_grouping_paths(PlannerInfo *root,
 
 		/*
 		 * Now generate a complete GroupAgg Path atop of the cheapest partial
-		 * path. We need only bother with the cheapest path here, as the
-		 * output of Gather is never sorted.
+		 * path.  We can do this using either Gather or Gather Merge.
 		 */
 		if (grouped_rel->partial_pathlist)
 		{
@@ -3711,6 +3710,70 @@ create_grouping_paths(PlannerInfo *root,
 										   parse->groupClause,
 										   (List *) parse->havingQual,
 										   dNumGroups));
+
+			/*
+			 * The point of using Gather Merge rather than Gather is that it
+			 * can preserve the ordering of the input path, so there's no
+			 * reason to try it unless (1) it's possible to produce more than
+			 * one output row and (2) we want the output path to be ordered.
+			 */
+			if (parse->groupClause != NIL && root->group_pathkeys != NIL)
+			{
+				foreach(lc, grouped_rel->partial_pathlist)
+				{
+					Path	   *subpath = (Path *) lfirst(lc);
+					Path	   *gmpath;
+					double		total_groups;
+
+					/*
+					 * It's useful to consider paths that are already properly
+					 * ordered for Gather Merge, because those don't need a
+					 * sort.  It's also useful to consider the cheapest path,
+					 * because sorting it in parallel and then doing Gather
+					 * Merge may be better than doing an unordered Gather
+					 * followed by a sort.  But there's no point in
+					 * considering non-cheapest paths that aren't already
+					 * sorted correctly.
+					 */
+					if (path != subpath &&
+						!pathkeys_contained_in(root->group_pathkeys,
+											   subpath->pathkeys))
+						continue;
+
+					total_groups = subpath->rows * subpath->parallel_workers;
+
+					gmpath = (Path *)
+						create_gather_merge_path(root,
+												 grouped_rel,
+												 subpath,
+												 NULL,
+												 root->group_pathkeys,
+												 NULL,
+												 &total_groups);
+
+					if (parse->hasAggs)
+						add_path(grouped_rel, (Path *)
+								 create_agg_path(root,
+												 grouped_rel,
+												 gmpath,
+												 target,
+								 parse->groupClause ? AGG_SORTED : AGG_PLAIN,
+												 AGGSPLIT_FINAL_DESERIAL,
+												 parse->groupClause,
+												 (List *) parse->havingQual,
+												 &agg_final_costs,
+												 dNumGroups));
+					else
+						add_path(grouped_rel, (Path *)
+								 create_group_path(root,
+												   grouped_rel,
+												   gmpath,
+												   target,
+												   parse->groupClause,
+												   (List *) parse->havingQual,
+												   dNumGroups));
+				}
+			}
 		}
 	}
 
@@ -3808,6 +3871,16 @@ create_grouping_paths(PlannerInfo *root,
 	/* Now choose the best path(s) */
 	set_cheapest(grouped_rel);
 
+	/*
+	 * We've been using the partial pathlist for the grouped relation to hold
+	 * partially aggregated paths, but that's actually a little bit bogus
+	 * because it's unsafe for later planning stages -- like ordered_rel ---
+	 * to get the idea that they can use these partial paths as if they didn't
+	 * need a FinalizeAggregate step.  Zap the partial pathlist at this stage
+	 * so we don't get confused.
+	 */
+	grouped_rel->partial_pathlist = NIL;
+
 	return grouped_rel;
 }
 
@@ -4276,6 +4349,56 @@ create_ordered_paths(PlannerInfo *root,
 	}
 
 	/*
+	 * generate_gather_paths() will have already generated a simple Gather
+	 * path for the best parallel path, if any, and the loop above will have
+	 * considered sorting it.  Similarly, generate_gather_paths() will also
+	 * have generated order-preserving Gather Merge plans which can be used
+	 * without sorting if they happen to match the sort_pathkeys, and the loop
+	 * above will have handled those as well.  However, there's one more
+	 * possibility: it may make sense to sort the cheapest partial path
+	 * according to the required output order and then use Gather Merge.
+	 */
+	if (ordered_rel->consider_parallel && root->sort_pathkeys != NIL &&
+		input_rel->partial_pathlist != NIL)
+	{
+		Path	   *cheapest_partial_path;
+
+		cheapest_partial_path = linitial(input_rel->partial_pathlist);
+
+		/*
+		 * If cheapest partial path doesn't need a sort, this is redundant
+		 * with what's already been tried.
+		 */
+		if (!pathkeys_contained_in(root->sort_pathkeys,
+								   cheapest_partial_path->pathkeys))
+		{
+			Path	   *path;
+			double		total_groups;
+
+			path = (Path *) create_sort_path(root,
+											 ordered_rel,
+											 cheapest_partial_path,
+											 root->sort_pathkeys,
+											 limit_tuples);
+
+			total_groups = cheapest_partial_path->rows *
+				cheapest_partial_path->parallel_workers;
+			path = (Path *)
+				create_gather_merge_path(root, ordered_rel,
+										 path,
+										 target, root->sort_pathkeys, NULL,
+										 &total_groups);
+
+			/* Add projection step if needed */
+			if (path->pathtarget != target)
+				path = apply_projection_to_path(root, ordered_rel,
+												path, target);
+
+			add_path(ordered_rel, path);
+		}
+	}
+
+	/*
 	 * If there is an FDW that's responsible for all baserels of the query,
 	 * let it consider adding ForeignPaths.
 	 */
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index 3d2c124..5f3027e 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -616,6 +616,7 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 			break;
 
 		case T_Gather:
+		case T_GatherMerge:
 			set_upper_references(root, plan, rtoffset);
 			break;
 
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index da9a84b..6fa6540 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2700,6 +2700,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
 		case T_Sort:
 		case T_Unique:
 		case T_Gather:
+		case T_GatherMerge:
 		case T_SetOp:
 		case T_Group:
 			/* no node-type-specific fields need fixing */
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index 0d925c6..8ce772d 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -1628,6 +1628,66 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
 }
 
 /*
+ * create_gather_merge_path
+ *
+ *	  Creates a path corresponding to a gather merge scan, returning
+ *	  the pathnode.
+ */
+GatherMergePath *
+create_gather_merge_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,
+						 PathTarget *target, List *pathkeys,
+						 Relids required_outer, double *rows)
+{
+	GatherMergePath *pathnode = makeNode(GatherMergePath);
+	Cost			 input_startup_cost = 0;
+	Cost			 input_total_cost = 0;
+
+	Assert(subpath->parallel_safe);
+	Assert(pathkeys);
+
+	pathnode->path.pathtype = T_GatherMerge;
+	pathnode->path.parent = rel;
+	pathnode->path.param_info = get_baserel_parampathinfo(root, rel,
+														  required_outer);
+	pathnode->path.parallel_aware = false;
+
+	pathnode->subpath = subpath;
+	pathnode->num_workers = subpath->parallel_workers;
+	pathnode->path.pathkeys = pathkeys;
+	pathnode->path.pathtarget = target ? target : rel->reltarget;
+	pathnode->path.rows += subpath->rows;
+
+	if (pathkeys_contained_in(pathkeys, subpath->pathkeys))
+	{
+		/* Subpath is adequately ordered, we won't need to sort it */
+		input_startup_cost += subpath->startup_cost;
+		input_total_cost += subpath->total_cost;
+	}
+	else
+	{
+		/* We'll need to insert a Sort node, so include cost for that */
+		Path		sort_path;		/* dummy for result of cost_sort */
+
+		cost_sort(&sort_path,
+				  root,
+				  pathkeys,
+				  subpath->total_cost,
+				  subpath->rows,
+				  subpath->pathtarget->width,
+				  0.0,
+				  work_mem,
+				  -1);
+		input_startup_cost += sort_path.startup_cost;
+		input_total_cost += sort_path.total_cost;
+	}
+
+	cost_gather_merge(pathnode, root, rel, pathnode->path.param_info,
+					  input_startup_cost, input_total_cost, rows);
+
+	return pathnode;
+}
+
+/*
  * translate_sub_tlist - get subquery column numbers represented by tlist
  *
  * The given targetlist usually contains only Vars referencing the given relid.
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index f8b073d..811ea51 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -902,6 +902,15 @@ static struct config_bool ConfigureNamesBool[] =
 		true,
 		NULL, NULL, NULL
 	},
+	{
+		{"enable_gathermerge", PGC_USERSET, QUERY_TUNING_METHOD,
+			gettext_noop("Enables the planner's use of gather merge plans."),
+			NULL
+		},
+		&enable_gathermerge,
+		true,
+		NULL, NULL, NULL
+	},
 
 	{
 		{"geqo", PGC_USERSET, QUERY_TUNING_GEQO,
diff --git a/src/include/executor/nodeGatherMerge.h b/src/include/executor/nodeGatherMerge.h
new file mode 100644
index 0000000..3c8b42b
--- /dev/null
+++ b/src/include/executor/nodeGatherMerge.h
@@ -0,0 +1,27 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeGatherMerge.h
+ *		prototypes for nodeGatherMerge.c
+ *
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/executor/nodeGatherMerge.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef NODEGATHERMERGE_H
+#define NODEGATHERMERGE_H
+
+#include "nodes/execnodes.h"
+
+extern GatherMergeState *ExecInitGatherMerge(GatherMerge * node,
+					EState *estate,
+					int eflags);
+extern TupleTableSlot *ExecGatherMerge(GatherMergeState * node);
+extern void ExecEndGatherMerge(GatherMergeState * node);
+extern void ExecReScanGatherMerge(GatherMergeState * node);
+extern void ExecShutdownGatherMerge(GatherMergeState * node);
+
+#endif   /* NODEGATHERMERGE_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 6a0d590..f856f60 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -2095,6 +2095,35 @@ typedef struct GatherState
 } GatherState;
 
 /* ----------------
+ * GatherMergeState information
+ *
+ *		Gather merge nodes launch 1 or more parallel workers, run a
+ *		subplan which produces sorted output in each worker, and then
+ *		merge the results into a single sorted stream.
+ * ----------------
+ */
+struct GMReaderTuple;
+
+typedef struct GatherMergeState
+{
+	PlanState	ps;				/* its first field is NodeTag */
+	bool		initialized;
+	struct ParallelExecutorInfo *pei;
+	int			nreaders;
+	int			nworkers_launched;
+	struct TupleQueueReader **reader;
+	TupleDesc	tupDesc;
+	TupleTableSlot **gm_slots;
+	struct binaryheap *gm_heap; /* binary heap of slot indices */
+	bool		gm_initialized; /* gather merge initialized? */
+	bool		need_to_scan_locally;
+	int			gm_nkeys;
+	SortSupport gm_sortkeys;	/* array of length gm_nkeys */
+	struct GMReaderTupleBuffer *gm_tuple_buffers;		/* tuple buffer per
+														 * reader */
+} GatherMergeState;
+
+/* ----------------
  *	 HashState information
  * ----------------
  */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 49fa944..2bc7a5d 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -77,6 +77,7 @@ typedef enum NodeTag
 	T_WindowAgg,
 	T_Unique,
 	T_Gather,
+	T_GatherMerge,
 	T_Hash,
 	T_SetOp,
 	T_LockRows,
@@ -127,6 +128,7 @@ typedef enum NodeTag
 	T_WindowAggState,
 	T_UniqueState,
 	T_GatherState,
+	T_GatherMergeState,
 	T_HashState,
 	T_SetOpState,
 	T_LockRowsState,
@@ -249,6 +251,7 @@ typedef enum NodeTag
 	T_MaterialPath,
 	T_UniquePath,
 	T_GatherPath,
+	T_GatherMergePath,
 	T_ProjectionPath,
 	T_ProjectSetPath,
 	T_SortPath,
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index 7fbb0c2..b880dc1 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -797,6 +797,22 @@ typedef struct Gather
 	bool		invisible;		/* suppress EXPLAIN display (for testing)? */
 } Gather;
 
+/* ------------
+ *		gather merge node
+ * ------------
+ */
+typedef struct GatherMerge
+{
+	Plan		plan;
+	int			num_workers;
+	/* remaining fields are just like the sort-key info in struct Sort */
+	int			numCols;		/* number of sort-key columns */
+	AttrNumber *sortColIdx;		/* their indexes in the target list */
+	Oid		   *sortOperators;	/* OIDs of operators to sort them by */
+	Oid		   *collations;		/* OIDs of collations */
+	bool	   *nullsFirst;		/* NULLS FIRST/LAST directions */
+} GatherMerge;
+
 /* ----------------
  *		hash build node
  *
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index f7ac6f6..05d6f07 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -1204,6 +1204,19 @@ typedef struct GatherPath
 } GatherPath;
 
 /*
+ * GatherMergePath runs several copies of a plan in parallel and collects
+ * the results, preserving their common sort order.  For Gather Merge, the
+ * parallel leader always executes the plan itself as well.
+ */
+typedef struct GatherMergePath
+{
+	Path		path;
+	Path	   *subpath;		/* path for each worker */
+	int			num_workers;	/* number of workers sought to help */
+} GatherMergePath;
+
+
+/*
  * All join-type paths share these fields.
  */
 
diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h
index 2b38683..d9a9b12 100644
--- a/src/include/optimizer/cost.h
+++ b/src/include/optimizer/cost.h
@@ -66,6 +66,7 @@ extern bool enable_nestloop;
 extern bool enable_material;
 extern bool enable_mergejoin;
 extern bool enable_hashjoin;
+extern bool enable_gathermerge;
 extern int	constraint_exclusion;
 
 extern double clamp_row_est(double nrows);
@@ -205,5 +206,9 @@ extern Selectivity clause_selectivity(PlannerInfo *root,
 				   int varRelid,
 				   JoinType jointype,
 				   SpecialJoinInfo *sjinfo);
+extern void cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
+							  RelOptInfo *rel, ParamPathInfo *param_info,
+							  Cost input_startup_cost, Cost input_total_cost,
+							  double *rows);
 
 #endif   /* COST_H */
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index f0fe830..373c722 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -78,6 +78,13 @@ extern UniquePath *create_unique_path(PlannerInfo *root, RelOptInfo *rel,
 extern GatherPath *create_gather_path(PlannerInfo *root,
 				   RelOptInfo *rel, Path *subpath, PathTarget *target,
 				   Relids required_outer, double *rows);
+extern GatherMergePath *create_gather_merge_path(PlannerInfo *root,
+												 RelOptInfo *rel,
+												 Path *subpath,
+												 PathTarget *target,
+												 List *pathkeys,
+												 Relids required_outer,
+												 double *rows);
 extern SubqueryScanPath *create_subqueryscan_path(PlannerInfo *root,
 						 RelOptInfo *rel, Path *subpath,
 						 List *pathkeys, Relids required_outer);
diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out
index 290b735..038a62e 100644
--- a/src/test/regress/expected/select_parallel.out
+++ b/src/test/regress/expected/select_parallel.out
@@ -213,6 +213,33 @@ select  count(*) from tenk1, tenk2 where tenk1.unique1 = tenk2.unique1;
 
 reset enable_hashjoin;
 reset enable_nestloop;
+--test gather merge
+set enable_hashagg to off;
+explain (costs off)
+   select  string4, count((unique2)) from tenk1 group by string4 order by string4;
+                     QUERY PLAN                     
+----------------------------------------------------
+ Finalize GroupAggregate
+   Group Key: string4
+   ->  Gather Merge
+         Workers Planned: 4
+         ->  Partial GroupAggregate
+               Group Key: string4
+               ->  Sort
+                     Sort Key: string4
+                     ->  Parallel Seq Scan on tenk1
+(9 rows)
+
+select  string4, count((unique2)) from tenk1 group by string4 order by string4;
+ string4 | count 
+---------+-------
+ AAAAxx  |  2500
+ HHHHxx  |  2500
+ OOOOxx  |  2500
+ VVVVxx  |  2500
+(4 rows)
+
+reset enable_hashagg;
 set force_parallel_mode=1;
 explain (costs off)
   select stringu1::int2 from tenk1 where unique1 = 1;
diff --git a/src/test/regress/expected/sysviews.out b/src/test/regress/expected/sysviews.out
index d48abd7..568b783 100644
--- a/src/test/regress/expected/sysviews.out
+++ b/src/test/regress/expected/sysviews.out
@@ -73,6 +73,7 @@ select name, setting from pg_settings where name like 'enable%';
          name         | setting 
 ----------------------+---------
  enable_bitmapscan    | on
+ enable_gathermerge   | on
  enable_hashagg       | on
  enable_hashjoin      | on
  enable_indexonlyscan | on
@@ -83,7 +84,7 @@ select name, setting from pg_settings where name like 'enable%';
  enable_seqscan       | on
  enable_sort          | on
  enable_tidscan       | on
-(11 rows)
+(12 rows)
 
 -- Test that the pg_timezone_names and pg_timezone_abbrevs views are
 -- more-or-less working.  We can't test their contents in any great detail
diff --git a/src/test/regress/sql/select_parallel.sql b/src/test/regress/sql/select_parallel.sql
index 80412b9..9311a77 100644
--- a/src/test/regress/sql/select_parallel.sql
+++ b/src/test/regress/sql/select_parallel.sql
@@ -84,6 +84,17 @@ select  count(*) from tenk1, tenk2 where tenk1.unique1 = tenk2.unique1;
 
 reset enable_hashjoin;
 reset enable_nestloop;
+
+--test gather merge
+set enable_hashagg to off;
+
+explain (costs off)
+   select  string4, count((unique2)) from tenk1 group by string4 order by string4;
+
+select  string4, count((unique2)) from tenk1 group by string4 order by string4;
+
+reset enable_hashagg;
+
 set force_parallel_mode=1;
 
 explain (costs off)
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 3155ec6..296552e 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -779,6 +779,9 @@ GV
 Gather
 GatherPath
 GatherState
+GatherMerge
+GatherMergePath
+GatherMergeState
 Gene
 GenericCosts
 GenericExprState
#50Robert Haas
robertmhaas@gmail.com
In reply to: Rushabh Lathia (#49)
Re: Gather Merge

On Wed, Mar 8, 2017 at 11:59 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

Here is another version of patch with the suggested changes.

Committed.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#51Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Robert Haas (#50)
2 attachment(s)
Re: Gather Merge

On Thu, Mar 9, 2017 at 6:19 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Wed, Mar 8, 2017 at 11:59 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

Here is another version of patch with the suggested changes.

Committed.

Thanks Robert for committing this.

My colleague Neha Sharma found one regression with the patch. I was about
to send this mail and noticed that you committed the patch.

Here is the small example:

Test setup:

1) ./db/bin/pgbench postgres -i -F 100 -s 20

2) update pgbench_accounts set filler = 'foo' where aid%10 = 0;

3) vacuum analyze pgbench_accounts

4)

postgres=# set max_parallel_workers_per_gather = 4;
SET

postgres=# explain select aid from pgbench_accounts where aid % 25= 0 group
by aid;
ERROR: ORDER/GROUP BY expression not found in targetlist

postgres=# set enable_indexonlyscan = off;
SET
postgres=# explain select aid from pgbench_accounts where aid % 25= 0 group
by aid;
QUERY
PLAN
--------------------------------------------------------------------------------------------------------
Group (cost=44708.21..45936.81 rows=10001 width=4)
Group Key: aid
-> Gather Merge (cost=44708.21..45911.81 rows=10000 width=0)
Workers Planned: 4
-> Group (cost=43708.15..43720.65 rows=2500 width=4)
Group Key: aid
-> Sort (cost=43708.15..43714.40 rows=2500 width=4)
Sort Key: aid
-> Parallel Seq Scan on pgbench_accounts
(cost=0.00..43567.06 rows=2500 width=4)
Filter: ((aid % 25) = 0)
(10 rows)

- Index Only Scan under Gather Merge does work, but only with an ORDER BY clause:
postgres=# set enable_seqscan = off;
SET
postgres=# explain select aid from pgbench_accounts where aid % 25= 0 order
by aid;
QUERY
PLAN
---------------------------------------------------------------------------------------------------------------------
Gather Merge (cost=1000.49..113924.61 rows=10001 width=4)
Workers Planned: 4
-> Parallel Index Scan using pgbench_accounts_pkey on pgbench_accounts
(cost=0.43..111733.33 rows=2500 width=4)
Filter: ((aid % 25) = 0)
(4 rows)

Debugging further, I found that the problem occurs only with an IndexOnlyScan
under Gather Merge, and only with grouping. While debugging I found that
ressortgroupref is not getting set. That led me to think it might be because
create_gather_merge_plan() is not building the tlist with build_path_tlist.
Another problem I found is that create_grouping_paths() is passing NULL for
the target while calling create_gather_merge_path(). (fix_target_gm.patch)

With those changes the above test runs fine, but it breaks other things, such
as:

postgres=# explain select distinct(bid) from pgbench_accounts where filler
like '%foo%' group by aid;
ERROR: GatherMerge child's targetlist doesn't match GatherMerge

Another thing I noticed: if we stop adding Gather Merge paths in
generate_gather_paths(), then all the tests work fine.
(comment_gm_from_generate_gather.patch)

I will continue looking at this problem.

Attaching both the patch for reference.

regards,
Rushabh Lathia
www.EnterpriseDB.com

Attachments:

comment_gm_from_generate_gather.patch (application/x-download)
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index b263359..f658c8b 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -2096,7 +2096,7 @@ generate_gather_paths(PlannerInfo *root, RelOptInfo *rel)
 {
 	Path	   *cheapest_partial_path;
 	Path	   *simple_gather_path;
-	ListCell   *lc;
+	//ListCell   *lc;
 
 	/* If there are no partial paths, there's nothing to do here. */
 	if (rel->partial_pathlist == NIL)
@@ -2112,7 +2112,7 @@ generate_gather_paths(PlannerInfo *root, RelOptInfo *rel)
 		create_gather_path(root, rel, cheapest_partial_path, rel->reltarget,
 						   NULL, NULL);
 	add_path(rel, simple_gather_path);
-
+#if 0
 	/*
 	 * For each useful ordering, we can consider an order-preserving Gather
 	 * Merge.
@@ -2129,6 +2129,7 @@ generate_gather_paths(PlannerInfo *root, RelOptInfo *rel)
 										subpath->pathkeys, NULL, NULL);
 		add_path(rel, &path->path);
 	}
+#endif
 }
 
 /*
fix_target_gm.patch (application/x-download)
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index e18c634..bc5690c 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -1474,13 +1474,15 @@ create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
 	Oid		   *sortOperators;
 	Oid		   *collations;
 	bool	   *nullsFirst;
+	List	   *tlist = build_path_tlist(root, &best_path->path);
+
 
 	/* As with Gather, it's best to project away columns in the workers. */
 	subplan = create_plan_recurse(root, best_path->subpath, CP_EXACT_TLIST);
 
 	/* See create_merge_append_plan for why there's no make_xxx function */
 	gm_plan = makeNode(GatherMerge);
-	gm_plan->plan.targetlist = subplan->targetlist;
+	gm_plan->plan.targetlist = tlist;
 	gm_plan->num_workers = best_path->num_workers;
 	copy_generic_path_info(&gm_plan->plan, &best_path->path);
 
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 209f769..82a716c 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -3746,7 +3746,7 @@ create_grouping_paths(PlannerInfo *root,
 						create_gather_merge_path(root,
 												 grouped_rel,
 												 subpath,
-												 NULL,
+												 target,
 												 root->group_pathkeys,
 												 NULL,
 												 &total_groups);
#52Robert Haas
robertmhaas@gmail.com
In reply to: Rushabh Lathia (#51)
Re: Gather Merge

On Thu, Mar 9, 2017 at 8:21 AM, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:

Thanks Robert for committing this.

My colleague Neha Sharma found one regression with the patch. I was about
to send this mail and noticed that you committed the patch.

Oops. Bad timing.

postgres=# explain select aid from pgbench_accounts where aid % 25= 0 group
by aid;
ERROR: ORDER/GROUP BY expression not found in targetlist

I think your fix for this looks right, although I would write it this way:

-    gm_plan->plan.targetlist = subplan->targetlist;
+    gm_plan->plan.targetlist = build_path_tlist(root, &best_path->path);

The second part of your fix looks wrong. I think you want this:

                         create_gather_merge_path(root,
                                                  grouped_rel,
                                                  subpath,
-                                                 NULL,
+                                                 partial_grouping_target,
                                                  root->group_pathkeys,
                                                  NULL,
                                                  &total_groups);

That will match the create_gather_path case.

This test case is still failing for me even with those fixes:

rhaas=# select aid+1 from pgbench_accounts group by aid+1;
ERROR: could not find pathkey item to sort

So evidently there is at least one more bug here.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#53Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Robert Haas (#52)
1 attachment(s)
Re: Gather Merge

On Thu, Mar 9, 2017 at 9:42 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Thu, Mar 9, 2017 at 8:21 AM, Rushabh Lathia <rushabh.lathia@gmail.com>
wrote:

Thanks Robert for committing this.

My colleague Neha Sharma found one regression with the patch. I was about
to send this mail and noticed that you committed the patch.

Oops. Bad timing.

postgres=# explain select aid from pgbench_accounts where aid % 25= 0

group

by aid;
ERROR: ORDER/GROUP BY expression not found in targetlist

I think your fix for this looks right, although I would write it this way:

-    gm_plan->plan.targetlist = subplan->targetlist;
+    gm_plan->plan.targetlist = build_path_tlist(root, &best_path->path);

The second part of your fix looks wrong. I think you want this:

create_gather_merge_path(root,
grouped_rel,
subpath,
-                                                 NULL,
+                                                 partial_grouping_target,
root->group_pathkeys,
NULL,
&total_groups);

That will match the create_gather_path case.

Right, I made that change and performed the test with the fix, and I don't
see any regression now.

This test case is still failing for me even with those fixes:

rhaas=# select aid+1 from pgbench_accounts group by aid+1;
ERROR: could not find pathkey item to sort

I don't see this failure with the patch. I even forced the gather merge
in the above query and it works just fine.

Attaching patch, with the discussed changes.

Thanks,
Rushabh Lathia

Attachments:

fix_target_gm_v2.patch (application/x-download)
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index e18c634..d002e6d 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -1474,13 +1474,14 @@ create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
 	Oid		   *sortOperators;
 	Oid		   *collations;
 	bool	   *nullsFirst;
+	List	   *tlist = build_path_tlist(root, &best_path->path);
 
 	/* As with Gather, it's best to project away columns in the workers. */
 	subplan = create_plan_recurse(root, best_path->subpath, CP_EXACT_TLIST);
 
 	/* See create_merge_append_plan for why there's no make_xxx function */
 	gm_plan = makeNode(GatherMerge);
-	gm_plan->plan.targetlist = subplan->targetlist;
+	gm_plan->plan.targetlist = tlist;
 	gm_plan->num_workers = best_path->num_workers;
 	copy_generic_path_info(&gm_plan->plan, &best_path->path);
 
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 209f769..02286d9 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -3746,7 +3746,7 @@ create_grouping_paths(PlannerInfo *root,
 						create_gather_merge_path(root,
 												 grouped_rel,
 												 subpath,
-												 NULL,
+												 partial_grouping_target,
 												 root->group_pathkeys,
 												 NULL,
 												 &total_groups);
#54Robert Haas
robertmhaas@gmail.com
In reply to: Rushabh Lathia (#53)
Re: Gather Merge

On Thu, Mar 9, 2017 at 11:25 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

I don't see this failure with the patch. Even I forced the gather merge
in the above query and that just working fine.

Attaching patch, with the discussed changes.

Committed.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#55Andreas Joseph Krogh
andreas@visena.com
In reply to: Robert Haas (#54)
Re: Gather Merge

On Thursday, 9 March 2017 at 18:09:45, Robert Haas <robertmhaas@gmail.com> wrote:
On Thu, Mar 9, 2017 at 11:25 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

I don't see this failure with the patch. Even I forced the gather merge
in the above query and that just working fine.

Attaching patch, with the discussed changes.

Committed.
 
 
I'm still getting (as of 9c2635e26f6f4e34b3b606c0fc79d0e111953a74): 
ERROR:  GatherMerge child's targetlist doesn't match GatherMerge

 
from this query:
 
EXPLAIN ANALYSE SELECT em.entity_id
FROM origo_email_delivery del
JOIN origo_email_message em ON (del.message_id = em.entity_id)
WHERE 1 = 1 AND del.owner_id = 3 AND (
    del.from_entity_id = 279519 OR del.from_entity_id = 3 AND em.entity_id IN (
        SELECT ea_owner.message_id
        FROM origo_email_address_owner ea_owner
        WHERE ea_owner.recipient_id = 279519 )
)
ORDER BY del.received_timestamp DESC LIMIT 101 OFFSET 0;
 
Is this known or shall I provide more info/schema etc?
 
If I select del.entity_id, it works:
 
EXPLAIN ANALYSE SELECT del.entity_id
FROM origo_email_delivery del
JOIN origo_email_message em ON (del.message_id = em.entity_id)
WHERE 1 = 1 AND del.owner_id = 3 AND (
    del.from_entity_id = 279519 OR del.from_entity_id = 3 AND em.entity_id IN (
        SELECT ea_owner.message_id
        FROM origo_email_address_owner ea_owner
        WHERE ea_owner.recipient_id = 279519 )
)
ORDER BY del.received_timestamp DESC LIMIT 101 OFFSET 0;
 
Plan is:
Limit  (cost=1259.72..15798.21 rows=101 width=16) (actual time=152.946..153.269 rows=34 loops=1)
  ->  Gather Merge  (cost=1259.72..3139414.43 rows=21801 width=16) (actual time=152.945..153.264 rows=34 loops=1)
        Workers Planned: 4
        Workers Launched: 4
        ->  Nested Loop  (cost=259.66..3135817.66 rows=5450 width=16) (actual time=95.295..137.549 rows=7 loops=5)
              ->  Parallel Index Scan Backward using origo_email_received_idx on origo_email_delivery del  (cost=0.42..312163.56 rows=10883 width=32) (actual time=0.175..121.434 rows=6540 loops=5)
                    Filter: ((owner_id = 3) AND ((from_entity_id = 279519) OR (from_entity_id = 3)))
                    Rows Removed by Filter: 170355
              ->  Index Only Scan using origo_email_message_pkey on origo_email_message em  (cost=259.24..259.45 rows=1 width=8) (actual time=0.002..0.002 rows=0 loops=32699)
                    Index Cond: (entity_id = del.message_id)
                    Filter: ((del.from_entity_id = 279519) OR ((del.from_entity_id = 3) AND (hashed SubPlan 1)))
                    Rows Removed by Filter: 1
                    Heap Fetches: 0
                    SubPlan 1
                      ->  Index Scan using origo_email_address_owner_recipient_id_idx on origo_email_address_owner ea_owner  (cost=0.43..258.64 rows=69 width=8) (actual time=0.032..0.294 rows=175 loops=5)
                            Index Cond: (recipient_id = 279519)
Planning time: 1.372 ms
Execution time: 170.859 ms

 
 
-- Andreas Joseph Krogh
 

#56Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Andreas Joseph Krogh (#55)
Re: Gather Merge

On Fri, Mar 10, 2017 at 1:44 PM, Andreas Joseph Krogh <andreas@visena.com>
wrote:

On Thursday, 9 March 2017 at 18:09:45, Robert Haas <robertmhaas@gmail.com> wrote:

On Thu, Mar 9, 2017 at 11:25 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

I don't see this failure with the patch. Even I forced the gather merge
in the above query and that just working fine.

Attaching patch, with the discussed changes.

Committed.

I'm still getting (as of 9c2635e26f6f4e34b3b606c0fc79d0e111953a74):
ERROR: GatherMerge child's targetlist doesn't match GatherMerge

from this query:

EXPLAIN ANALYSE SELECT em.entity_id
FROM origo_email_delivery del
JOIN origo_email_message em ON (del.message_id = em.entity_id)
WHERE 1 = 1 AND del.owner_id = 3 AND (
    del.from_entity_id = 279519 OR del.from_entity_id = 3 AND em.entity_id IN (
        SELECT ea_owner.message_id
        FROM origo_email_address_owner ea_owner
        WHERE ea_owner.recipient_id = 279519 )
)
ORDER BY del.received_timestamp DESC LIMIT 101 OFFSET 0;

Is this known or shall I provide more info/schema etc?

Please provide the reproducible test if possible.

If I select del.entity_id, it works:

EXPLAIN ANALYSE SELECT del.entity_id
FROM origo_email_delivery del
JOIN origo_email_message em ON (del.message_id = em.entity_id)
WHERE 1 = 1 AND del.owner_id = 3 AND (
del.from_entity_id = 279519 OR del.from_entity_id = 3 AND em.entity_id IN (
SELECT ea_owner.message_id
FROM origo_email_address_owner ea_owner
WHERE ea_owner.recipient_id = 279519 )
)

ORDER BY del.received_timestamp DESC LIMIT 101 OFFSET 0;

Plan is:
Limit  (cost=1259.72..15798.21 rows=101 width=16) (actual time=152.946..153.269 rows=34 loops=1)
  ->  Gather Merge  (cost=1259.72..3139414.43 rows=21801 width=16) (actual time=152.945..153.264 rows=34 loops=1)
        Workers Planned: 4
        Workers Launched: 4
        ->  Nested Loop  (cost=259.66..3135817.66 rows=5450 width=16) (actual time=95.295..137.549 rows=7 loops=5)
              ->  Parallel Index Scan Backward using origo_email_received_idx on origo_email_delivery del  (cost=0.42..312163.56 rows=10883 width=32) (actual time=0.175..121.434 rows=6540 loops=5)
                    Filter: ((owner_id = 3) AND ((from_entity_id = 279519) OR (from_entity_id = 3)))
                    Rows Removed by Filter: 170355
              ->  Index Only Scan using origo_email_message_pkey on origo_email_message em  (cost=259.24..259.45 rows=1 width=8) (actual time=0.002..0.002 rows=0 loops=32699)
                    Index Cond: (entity_id = del.message_id)
                    Filter: ((del.from_entity_id = 279519) OR ((del.from_entity_id = 3) AND (hashed SubPlan 1)))
                    Rows Removed by Filter: 1
                    Heap Fetches: 0
                    SubPlan 1
                      ->  Index Scan using origo_email_address_owner_recipient_id_idx on origo_email_address_owner ea_owner  (cost=0.43..258.64 rows=69 width=8) (actual time=0.032..0.294 rows=175 loops=5)
                            Index Cond: (recipient_id = 279519)
Planning time: 1.372 ms
Execution time: 170.859 ms

--
*Andreas Joseph Krogh*

--
Rushabh Lathia

#57Andreas Joseph Krogh
andreas@visena.com
In reply to: Rushabh Lathia (#56)
Re: Gather Merge

On Friday, 10 March 2017 at 09:53:47, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:

On Fri, Mar 10, 2017 at 1:44 PM, Andreas Joseph Krogh <andreas@visena.com> wrote:

On Thursday, 9 March 2017 at 18:09:45, Robert Haas <robertmhaas@gmail.com> wrote:
On Thu, Mar 9, 2017 at 11:25 AM, Rushabh Lathia
<rushabh.lathia@gmail.com <mailto:rushabh.lathia@gmail.com>> wrote:

I don't see this failure with the patch. Even I forced the gather merge
in the above query and that just working fine.

Attaching patch, with the discussed changes.

Committed.
 
 
I'm still getting (as of 9c2635e26f6f4e34b3b606c0fc79d0e111953a74): 
ERROR:  GatherMerge child's targetlist doesn't match GatherMerge

 
from this query:
 
EXPLAIN ANALYSE SELECT em.entity_id
FROM origo_email_delivery del
JOIN origo_email_message em ON (del.message_id = em.entity_id)
WHERE 1 = 1 AND del.owner_id = 3 AND (
    del.from_entity_id = 279519 OR del.from_entity_id = 3 AND em.entity_id IN (
        SELECT ea_owner.message_id
        FROM origo_email_address_owner ea_owner
        WHERE ea_owner.recipient_id = 279519 )
)
ORDER BY del.received_timestamp DESC LIMIT 101 OFFSET 0;
 
Is this known or shall I provide more info/schema etc?
 
Please provide the reproducible test if possible.

 
The execution-plan seems (unsurprisingly) to depend on data-distribution, so
is there a way I can force a GatherMerge?
 
-- Andreas Joseph Krogh

#58Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Andreas Joseph Krogh (#57)
Re: Gather Merge

On Fri, Mar 10, 2017 at 2:33 PM, Andreas Joseph Krogh <andreas@visena.com>
wrote:

On Friday, 10 March 2017 at 09:53:47, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:

On Fri, Mar 10, 2017 at 1:44 PM, Andreas Joseph Krogh <andreas@visena.com>
wrote:

On Thursday, 9 March 2017 at 18:09:45, Robert Haas <robertmhaas@gmail.com> wrote:

On Thu, Mar 9, 2017 at 11:25 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

I don't see this failure with the patch. Even I forced the gather merge
in the above query and that just working fine.

Attaching patch, with the discussed changes.

Committed.

I'm still getting (as of 9c2635e26f6f4e34b3b606c0fc79d0e111953a74):
ERROR: GatherMerge child's targetlist doesn't match GatherMerge

from this query:

EXPLAIN ANALYSE SELECT em.entity_id
FROM origo_email_delivery del
JOIN origo_email_message em ON (del.message_id = em.entity_id)
WHERE 1 = 1 AND del.owner_id = 3 AND (
    del.from_entity_id = 279519 OR del.from_entity_id = 3 AND em.entity_id IN (
        SELECT ea_owner.message_id
        FROM origo_email_address_owner ea_owner
        WHERE ea_owner.recipient_id = 279519 )
)
ORDER BY del.received_timestamp DESC LIMIT 101 OFFSET 0;

Is this known or shall I provide more info/schema etc?

Please provide the reproducible test if possible.

The execution-plan seems (unsurprisingly) to depend on data-distribution,
so is there a way I can force a GatherMerge?

Not directly. GatherMerge cost mainly depends on parallel_setup_cost,
parallel_tuple_cost and cpu_operator_cost. Maybe you can force it
by setting those costs low enough. Another way to force it is by disabling
the other plans.
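
For illustration, a minimal sketch of such settings; the exact values are only
assumptions meant to make the parallel path look cheap, not recommendations:

set parallel_setup_cost = 0;
set parallel_tuple_cost = 0;
set min_parallel_table_scan_size = 0;
set max_parallel_workers_per_gather = 4;
-- or, alternatively, disable the competing plan types:
set enable_sort = off;
set enable_bitmapscan = off;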

What plan are you getting now? Are you not seeing the error below?

ERROR: GatherMerge child's targetlist doesn't match GatherMerge

--
*Andreas Joseph Krogh*

--
Rushabh Lathia

#59Andreas Joseph Krogh
andreas@visena.com
In reply to: Rushabh Lathia (#58)
Re: Gather Merge

On Friday, 10 March 2017 at 10:09:22, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:

On Fri, Mar 10, 2017 at 2:33 PM, Andreas Joseph Krogh <andreas@visena.com> wrote: [...]

The execution-plan seems (unsurprisingly) to depend on data-distribution, so
is there a way I can force a GatherMerge?
 
Not directly. GatherMerge cost is mainly depend on parallel_setup_cost,
parallel_tuple_cost and cpu_operator_cost. May be you can force this
by setting this cost low enough. Or another way to force is by disable the
other plans.
 
What plan you are getting now? You not seeing the below error ?
 
ERROR:  GatherMerge child's targetlist doesn't match GatherMerge

 
I'm seeing the same error, it's just that for reproducing it I'd rather not
copy my whole dataset.
 
-- Andreas Joseph Krogh

#60Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Andreas Joseph Krogh (#59)
Re: Gather Merge

On Fri, Mar 10, 2017 at 2:42 PM, Andreas Joseph Krogh <andreas@visena.com>
wrote:

On Friday, 10 March 2017 at 10:09:22, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:

On Fri, Mar 10, 2017 at 2:33 PM, Andreas Joseph Krogh <andreas@visena.com>
wrote:

[...]
The execution-plan seems (unsurprisingly) to depend on data-distribution,
so is there a way I can force a GatherMerge?

Not directly. GatherMerge cost is mainly depend on parallel_setup_cost,
parallel_tuple_cost and cpu_operator_cost. May be you can force this
by setting this cost low enough. Or another way to force is by disable the
other plans.

What plan you are getting now? You not seeing the below error ?

ERROR: GatherMerge child's targetlist doesn't match GatherMerge

I'm seeing the same error, it's just that for reproducing it I'd rather
not copy my whole dataset.

Can you share the schema information? I will try to reproduce it on my side.

--
*Andreas Joseph Krogh*

--
Rushabh Lathia

#61Andreas Joseph Krogh
andreas@visena.com
In reply to: Rushabh Lathia (#60)
Re: Gather Merge

On Friday, 10 March 2017 at 10:34:48, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:

On Fri, Mar 10, 2017 at 2:42 PM, Andreas Joseph Krogh <andreas@visena.com> wrote:

On Friday, 10 March 2017 at 10:09:22, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:

On Fri, Mar 10, 2017 at 2:33 PM, Andreas Joseph Krogh <andreas@visena.com> wrote: [...]

The execution-plan seems (unsurprisingly) to depend on data-distribution, so
is there a way I can force a GatherMerge?
 
Not directly. GatherMerge cost is mainly depend on parallel_setup_cost,
parallel_tuple_cost and cpu_operator_cost. May be you can force this
by setting this cost low enough. Or another way to force is by disable the
other plans.
 
What plan you are getting now? You not seeing the below error ?
 
ERROR:  GatherMerge child's targetlist doesn't match GatherMerge

 
I'm seeing the same error, it's just that for reproducing it I'd rather not
copy my whole dataset.
 
Can you share me a schema information, I will try to reproduce at my side?

 
The relevant schema is this:
 
drop table if EXISTS temp_email_address_owner;
drop table if EXISTS temp_email_delivery;
drop table if EXISTS temp_email_message;

create table temp_email_message(
    entity_id BIGSERIAL PRIMARY KEY
);

create table temp_email_delivery(
    entity_id BIGSERIAL PRIMARY KEY,
    message_id bigint not null references temp_email_message(entity_id),
    from_entity_id bigint,
    received_timestamp timestamp not null
);

create table temp_email_address_owner(
    entity_id BIGSERIAL PRIMARY KEY,
    message_id bigint not null references temp_email_message(entity_id),
    recipient_id bigint
);

EXPLAIN ANALYSE SELECT em.entity_id
FROM temp_email_delivery del
JOIN temp_email_message em ON (del.message_id = em.entity_id)
WHERE del.from_entity_id = 279519 OR em.entity_id IN (
    SELECT ea_owner.message_id
    FROM temp_email_address_owner ea_owner
    WHERE ea_owner.recipient_id = 279519 )
ORDER BY del.received_timestamp DESC LIMIT 101 OFFSET 0;
 
But I'm having a hard time reproducing it.
I've tried to copy data from the relevant tables to the test-tables (temp_*),
adding indexes etc., but Gather Merge works just fine:
 
Limit  (cost=209378.96..209391.05 rows=101 width=16) (actual time=799.380..799.432 rows=101 loops=1)
  ->  Gather Merge  (cost=209378.96..262335.79 rows=442285 width=16) (actual time=799.379..799.420 rows=101 loops=1)
        Workers Planned: 4
        Workers Launched: 4
        ->  Sort  (cost=208378.90..208655.33 rows=110571 width=16) (actual time=785.029..785.042 rows=81 loops=5)
              Sort Key: del.received_timestamp DESC
              Sort Method: quicksort  Memory: 29kB
              ->  Hash Join  (cost=52036.86..204145.01 rows=110571 width=16) (actual time=400.812..784.907 rows=95 loops=5)
                    Hash Cond: (del.message_id = em.entity_id)
                    Join Filter: ((del.from_entity_id = 279519) OR (hashed SubPlan 1))
                    Rows Removed by Join Filter: 176799
                    ->  Parallel Seq Scan on temp_email_delivery del  (cost=0.00..142515.18 rows=221118 width=24) (actual time=0.033..211.196 rows=176894 loops=5)
                    ->  Hash  (cost=39799.72..39799.72 rows=730772 width=8) (actual time=368.746..368.746 rows=730772 loops=5)
                          Buckets: 1048576  Batches: 2  Memory Usage: 22496kB
                          ->  Seq Scan on temp_email_message em  (cost=0.00..39799.72 rows=730772 width=8) (actual time=0.017..208.116 rows=730772 loops=5)
                    SubPlan 1
                      ->  Index Scan using temp_email_address_owner_recipient_id_idx on temp_email_address_owner ea_owner  (cost=0.43..247.32 rows=68 width=8) (actual time=0.072..0.759 rows=175 loops=5)
                            Index Cond: (recipient_id = 279519)
Planning time: 2.134 ms
Execution time: 830.313 ms

 
Can it be that the data-set was created with a PG version from yesterday,
before Gather Merge was committed, and then I just recompiled PG and re-installed
over the old installation without re-initdb'ing? I saw no catversion.h changes,
so I assumed this was fine.
 
--
Andreas Joseph Krogh

#62Kuntal Ghosh
kuntalghosh.2007@gmail.com
In reply to: Rushabh Lathia (#60)
2 attachment(s)
Re: Gather Merge

On Fri, Mar 10, 2017 at 3:04 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

On Fri, Mar 10, 2017 at 2:42 PM, Andreas Joseph Krogh <andreas@visena.com>
wrote:

On Friday, 10 March 2017 at 10:09:22, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:

On Fri, Mar 10, 2017 at 2:33 PM, Andreas Joseph Krogh <andreas@visena.com>
wrote:

[...]
The execution-plan seems (unsurprisingly) to depend on data-distribution,
so is there a way I can force a GatherMerge?

Not directly. GatherMerge cost is mainly depend on parallel_setup_cost,
parallel_tuple_cost and cpu_operator_cost. May be you can force this
by setting this cost low enough. Or another way to force is by disable the
other plans.

What plan you are getting now? You not seeing the below error ?

ERROR: GatherMerge child's targetlist doesn't match GatherMerge

I'm seeing the same error, it's just that for reproducing it I'd rather
not copy my whole dataset.

Can you share me a schema information, I will try to reproduce at my side?

I'm able to reproduce the error. I've attached the dump file and a
script to reproduce it.

The following query executes successfully.
postgres=# explain select t1.* from t1 JOIN t2 ON t1.k=t2.k where
t1.i=1 order by t1.j desc;
QUERY PLAN
-------------------------------------------------------------------------------------------------------
Gather Merge (cost=0.58..243.02 rows=943 width=12)
Workers Planned: 1
-> Nested Loop (cost=0.57..235.94 rows=555 width=12)
-> Parallel Index Scan Backward using idx_t1_i_j on t1
(cost=0.29..14.33 rows=603 width=12)
Index Cond: (i = 1)
-> Index Only Scan using idx_t2_k on t2 (cost=0.29..0.34
rows=3 width=4)
Index Cond: (k = t1.k)
(7 rows)

Whereas, if columns from t2 are projected, it throws the same error.
postgres=# explain select t2.* from t1 JOIN t2 ON t1.k=t2.k where
t1.i=1 order by t1.j desc;
ERROR: GatherMerge child's targetlist doesn't match GatherMerge
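
(The attached dump.sql is not inlined here. As a purely hypothetical sketch,
inferred from the queries above rather than from the actual dump, a schema of
roughly this shape could be used to try the same query; it is not guaranteed
to reproduce the failure:)

create table t1 (i int, j int, k int);
create table t2 (k int, v int);  -- the extra column v is an assumption
create index idx_t1_i_j on t1 (i, j);
create index idx_t2_k on t2 (k);
insert into t1 select g % 10, g, g from generate_series(1, 100000) g;
insert into t2 select g, g from generate_series(1, 100000) g;
analyze t1;
analyze t2;
-- with parallel costs lowered as discussed earlier in the thread:
explain select t2.* from t1 join t2 on t1.k = t2.k where t1.i = 1 order by t1.j desc;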

--
Thanks & Regards,
Kuntal Ghosh
EnterpriseDB: http://www.enterprisedb.com

Attachments:

dump.sql (application/sql)
gm_error.sql (application/sql)
#63Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Kuntal Ghosh (#62)
Re: Gather Merge

On Fri, Mar 10, 2017 at 4:09 PM, Kuntal Ghosh <kuntalghosh.2007@gmail.com>
wrote:

On Fri, Mar 10, 2017 at 3:04 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

On Fri, Mar 10, 2017 at 2:42 PM, Andreas Joseph Krogh <

andreas@visena.com>

wrote:

On Friday, 10 March 2017 at 10:09:22, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:

On Fri, Mar 10, 2017 at 2:33 PM, Andreas Joseph Krogh <

andreas@visena.com>

wrote:

[...]
The execution-plan seems (unsurprisingly) to depend on

data-distribution,

so is there a way I can force a GatherMerge?

Not directly. GatherMerge cost is mainly depend on parallel_setup_cost,
parallel_tuple_cost and cpu_operator_cost. May be you can force this
by setting this cost low enough. Or another way to force is by disable

the

other plans.

What plan you are getting now? You not seeing the below error ?

ERROR: GatherMerge child's targetlist doesn't match GatherMerge

I'm seeing the same error, it's just that for reproducing it I'd rather
not copy my whole dataset.

Can you share me a schema information, I will try to reproduce at my

side?
I'm able to reproduce the error. I've attached the dump file and a
script to reproduce it.

The following query executes successfully.
postgres=# explain select t1.* from t1 JOIN t2 ON t1.k=t2.k where
t1.i=1 order by t1.j desc;
QUERY PLAN
------------------------------------------------------------
-------------------------------------------
Gather Merge (cost=0.58..243.02 rows=943 width=12)
Workers Planned: 1
-> Nested Loop (cost=0.57..235.94 rows=555 width=12)
-> Parallel Index Scan Backward using idx_t1_i_j on t1
(cost=0.29..14.33 rows=603 width=12)
Index Cond: (i = 1)
-> Index Only Scan using idx_t2_k on t2 (cost=0.29..0.34
rows=3 width=4)
Index Cond: (k = t1.k)
(7 rows)

Whereas, If columns from t2 is projected, it throws the same error.
postgres=# explain select t2.* from t1 JOIN t2 ON t1.k=t2.k where
t1.i=1 order by t1.j desc;
ERROR: GatherMerge child's targetlist doesn't match GatherMerge

Thanks Kuntal.

I am able to reproduce the issue with the shared script. I will look into
this now.

--
Thanks & Regards,
Kuntal Ghosh
EnterpriseDB: http://www.enterprisedb.com

--
Rushabh Lathia

#64Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Rushabh Lathia (#63)
1 attachment(s)
Re: Gather Merge

The error is coming from create_gather_merge_plan(), from the condition below:

    if (memcmp(sortColIdx, gm_plan->sortColIdx,
               numsortkeys * sizeof(AttrNumber)) != 0)
        elog(ERROR, "GatherMerge child's targetlist doesn't match GatherMerge");

The above condition checks the sort column numbers explicitly, to ensure that
the tlists really do match up. This was copied from create_merge_append_plan().
That makes sense for MergeAppend, since it is not a projection-capable plan
(see is_projection_capable_plan()). But like Gather, GatherMerge is
projection-capable and its target list can differ from the subplan's, so I
don't think this condition makes sense for GatherMerge.

Here is some of the debugging info through which I was able to reach
this conclusion:

- The targetlist for the GatherMerge and the subpath are the same during
create_gather_merge_path().

- The targetlist for the GatherMerge is getting changed in
create_gather_merge_plan().

postgres=# explain (analyze, verbose) select t2.j from t1 JOIN t2 ON
t1.k=t2.k where t1.i=1 order by t1.j desc;
NOTICE: path parthtarget: {PATHTARGET :exprs ({VAR :varno 2 :varattno 2
:vartype 23 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnoold 2
:varoattno 2 :location 34} {VAR :varno 1 :varattno 2 :vartype 23 :vartypmod
-1 :varcollid 0 :varlevelsup 0 :varnoold 1 :varoattno 2 :location 90})
:sortgrouprefs 0 1 :cost.startup 0.00 :cost.per_tuple 0.00 :width 8}

NOTICE: subpath parthtarget: {PATHTARGET :exprs ({VAR :varno 1 :varattno 2
:vartype 23 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnoold 1
:varoattno 2 :location 90} {VAR :varno 2 :varattno 2 :vartype 23 :vartypmod
-1 :varcollid 0 :varlevelsup 0 :varnoold 2 :varoattno 2 :location 34})
:cost.startup 0.00 :cost.per_tuple 0.00 :width 8}

- Attached a memory watchpoint and found that the target list for GatherMerge
is getting changed in grouping_planner() -> apply_projection_to_path().

PFA patch to fix this issue.

On Fri, Mar 10, 2017 at 4:12 PM, Rushabh Lathia <rushabh.lathia@gmail.com>
wrote:

On Fri, Mar 10, 2017 at 4:09 PM, Kuntal Ghosh <kuntalghosh.2007@gmail.com>
wrote:

On Fri, Mar 10, 2017 at 3:04 PM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

On Fri, Mar 10, 2017 at 2:42 PM, Andreas Joseph Krogh <

andreas@visena.com>

wrote:

On Friday, 10 March 2017 at 10:09:22, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:

On Fri, Mar 10, 2017 at 2:33 PM, Andreas Joseph Krogh <

andreas@visena.com>

wrote:

[...]
The execution-plan seems (unsurprisingly) to depend on

data-distribution,

so is there a way I can force a GatherMerge?

Not directly. GatherMerge cost is mainly depend on parallel_setup_cost,
parallel_tuple_cost and cpu_operator_cost. May be you can force this
by setting this cost low enough. Or another way to force is by disable

the

other plans.

What plan you are getting now? You not seeing the below error ?

ERROR: GatherMerge child's targetlist doesn't match GatherMerge

I'm seeing the same error, it's just that for reproducing it I'd rather
not copy my whole dataset.

Can you share me a schema information, I will try to reproduce at my

side?
I'm able to reproduce the error. I've attached the dump file and a
script to reproduce it.

The following query executes successfully.
postgres=# explain select t1.* from t1 JOIN t2 ON t1.k=t2.k where
t1.i=1 order by t1.j desc;
QUERY PLAN
------------------------------------------------------------
-------------------------------------------
Gather Merge (cost=0.58..243.02 rows=943 width=12)
Workers Planned: 1
-> Nested Loop (cost=0.57..235.94 rows=555 width=12)
-> Parallel Index Scan Backward using idx_t1_i_j on t1
(cost=0.29..14.33 rows=603 width=12)
Index Cond: (i = 1)
-> Index Only Scan using idx_t2_k on t2 (cost=0.29..0.34
rows=3 width=4)
Index Cond: (k = t1.k)
(7 rows)

Whereas, If columns from t2 is projected, it throws the same error.
postgres=# explain select t2.* from t1 JOIN t2 ON t1.k=t2.k where
t1.i=1 order by t1.j desc;
ERROR: GatherMerge child's targetlist doesn't match GatherMerge

Thanks Kuntal.

I am able to reproduce the issue with the shared script. I will look into
this now.

--
Thanks & Regards,
Kuntal Ghosh
EnterpriseDB: http://www.enterprisedb.com

--
Rushabh Lathia

--
Rushabh Lathia

Attachments:

gm_plan_fix.patch (application/x-download)
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index d002e6d..73a71d7 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -1499,7 +1499,6 @@ create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
 									  &gm_plan->collations,
 									  &gm_plan->nullsFirst);
 
-
 	/* Compute sort column info, and adjust subplan's tlist as needed */
 	subplan = prepare_sort_from_pathkeys(subplan, pathkeys,
 										 best_path->subpath->parent->relids,
@@ -1511,11 +1510,13 @@ create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
 										 &collations,
 										 &nullsFirst);
 
-	/* As for MergeAppend, check that we got the same sort key information. */
+	/*
+	 * As for MergeAppend, check that we got the same sort key information.
+	 * Unlike MergeAppend, for GatherMerge its not necessary that tlists
+	 * for the path and subpath match up as GatherMerge is projection capable
+	 * plan. So removed that check from here.
+	 */
 	Assert(numsortkeys == gm_plan->numCols);
-	if (memcmp(sortColIdx, gm_plan->sortColIdx,
-			   numsortkeys * sizeof(AttrNumber)) != 0)
-		elog(ERROR, "GatherMerge child's targetlist doesn't match GatherMerge");
 	Assert(memcmp(sortOperators, gm_plan->sortOperators,
 				  numsortkeys * sizeof(Oid)) == 0);
 	Assert(memcmp(collations, gm_plan->collations,
#65Robert Haas
robertmhaas@gmail.com
In reply to: Rushabh Lathia (#64)
1 attachment(s)
Re: Gather Merge

On Fri, Mar 10, 2017 at 7:59 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

Error coming from create_gather_merge_plan() from below condition:

if (memcmp(sortColIdx, gm_plan->sortColIdx,
numsortkeys * sizeof(AttrNumber)) != 0)
elog(ERROR, "GatherMerge child's targetlist doesn't match
GatherMerge");

Above condition checks the sort column numbers explicitly, to ensure the
tlists
really do match up. This been copied from the create_merge_append_plan().
Now
this make sense as for MergeAppend as its not projection capable plan (see
is_projection_capable_plan()). But like Gather, GatherMerge is the
projection
capable and its target list can be different from the subplan, so I don't
think this
condition make sense for the GatherMerge.

Here is the some one the debugging info, through which I was able to reach
to this conclusion:

- targetlist for the GatherMerge and subpath is same during
create_gather_merge_path().

- targetlist for the GatherMerge is getting changed into
create_gather_merge_plan().

postgres=# explain (analyze, verbose) select t2.j from t1 JOIN t2 ON
t1.k=t2.k where t1.i=1 order by t1.j desc;
NOTICE: path parthtarget: {PATHTARGET :exprs ({VAR :varno 2 :varattno 2
:vartype 23 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnoold 2 :varoattno
2 :location 34} {VAR :varno 1 :varattno 2 :vartype 23 :vartypmod -1
:varcollid 0 :varlevelsup 0 :varnoold 1 :varoattno 2 :location 90})
:sortgrouprefs 0 1 :cost.startup 0.00 :cost.per_tuple 0.00 :width 8}

NOTICE: subpath parthtarget: {PATHTARGET :exprs ({VAR :varno 1 :varattno 2
:vartype 23 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnoold 1 :varoattno
2 :location 90} {VAR :varno 2 :varattno 2 :vartype 23 :vartypmod -1
:varcollid 0 :varlevelsup 0 :varnoold 2 :varoattno 2 :location 34})
:cost.startup 0.00 :cost.per_tuple 0.00 :width 8}

- Attached memory watch point and found that target list for GatherMerge is
getting
changed into groupping_planner() -> apply_projection_to_path().

PFA patch to fix this issue.

I don't think this fix is correct, partly on theoretical grounds and
partly because I managed to make it crash. The problem is that
prepare_sort_from_pathkeys() actually alters the output tlist of Gather
Merge, which is inconsistent with the idea that Gather Merge can do
projection. It's going to produce whatever
prepare_sort_from_pathkeys() says it's going to produce, which may or
may not be what was there before. Using Kuntal's dump file and your
patch:

set min_parallel_table_scan_size = 0;
set parallel_setup_cost = 0;
set parallel_tuple_cost = 0;
set enable_sort = false;
set enable_bitmapscan = false;
alter table t1 alter column j type text;
select t2.i from t1 join t2 on t1.k=t2.k where t1.i=1 order by t1.j desc;

Crash. Abbreviated stack trace:

#0 pg_detoast_datum_packed (datum=0xbc) at fmgr.c:2176
#1 0x000000010160e707 in varstrfastcmp_locale (x=188, y=819,
ssup=0x7fe1ea06a568) at varlena.c:1997
#2 0x00000001013efc73 in ApplySortComparator [inlined] () at
/Users/rhaas/pgsql/src/include/utils/sortsupport.h:225
#3 0x00000001013efc73 in heap_compare_slots (a=<value temporarily
unavailable, due to optimizations>, b=<value temporarily unavailable,
due to optimizations>, arg=0x7fe1ea04e590) at sortsupport.h:681
#4 0x00000001014057b2 in sift_down (heap=0x7fe1ea079458,
node_off=<value temporarily unavailable, due to optimizations>) at
binaryheap.c:274
#5 0x000000010140573a in binaryheap_build (heap=0x7fe1ea079458) at
binaryheap.c:131
#6 0x00000001013ef771 in gather_merge_getnext [inlined] () at
/Users/rhaas/pgsql/src/backend/executor/nodeGatherMerge.c:421
#7 0x00000001013ef771 in ExecGatherMerge (node=0x7fe1ea04e590) at
nodeGatherMerge.c:240

Obviously, this is happening because we're trying to apply a
comparator for text to a value of type integer. I propose the
attached, slightly more involved fix, which rips out the first call to
prepare_sort_from_pathkeys() altogether.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachments:

gm_plan_fix_rmh.patch (application/octet-stream)
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index d002e6d..64f0ee5 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -1469,17 +1469,12 @@ create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
 	GatherMerge *gm_plan;
 	Plan	   *subplan;
 	List	   *pathkeys = best_path->path.pathkeys;
-	int			numsortkeys;
-	AttrNumber *sortColIdx;
-	Oid		   *sortOperators;
-	Oid		   *collations;
-	bool	   *nullsFirst;
 	List	   *tlist = build_path_tlist(root, &best_path->path);
 
 	/* As with Gather, it's best to project away columns in the workers. */
 	subplan = create_plan_recurse(root, best_path->subpath, CP_EXACT_TLIST);
 
-	/* See create_merge_append_plan for why there's no make_xxx function */
+	/* Create a shell for a GatherMerge plan. */
 	gm_plan = makeNode(GatherMerge);
 	gm_plan->plan.targetlist = tlist;
 	gm_plan->num_workers = best_path->num_workers;
@@ -1488,46 +1483,25 @@ create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path)
 	/* Gather Merge is pointless with no pathkeys; use Gather instead. */
 	Assert(pathkeys != NIL);
 
-	/* Compute sort column info, and adjust GatherMerge tlist as needed */
-	(void) prepare_sort_from_pathkeys(&gm_plan->plan, pathkeys,
-									  best_path->path.parent->relids,
-									  NULL,
-									  true,
-									  &gm_plan->numCols,
-									  &gm_plan->sortColIdx,
-									  &gm_plan->sortOperators,
-									  &gm_plan->collations,
-									  &gm_plan->nullsFirst);
-
-
 	/* Compute sort column info, and adjust subplan's tlist as needed */
 	subplan = prepare_sort_from_pathkeys(subplan, pathkeys,
 										 best_path->subpath->parent->relids,
 										 gm_plan->sortColIdx,
 										 false,
-										 &numsortkeys,
-										 &sortColIdx,
-										 &sortOperators,
-										 &collations,
-										 &nullsFirst);
+										 &gm_plan->numCols,
+										 &gm_plan->sortColIdx,
+										 &gm_plan->sortOperators,
+										 &gm_plan->collations,
+										 &gm_plan->nullsFirst);
 
-	/* As for MergeAppend, check that we got the same sort key information. */
-	Assert(numsortkeys == gm_plan->numCols);
-	if (memcmp(sortColIdx, gm_plan->sortColIdx,
-			   numsortkeys * sizeof(AttrNumber)) != 0)
-		elog(ERROR, "GatherMerge child's targetlist doesn't match GatherMerge");
-	Assert(memcmp(sortOperators, gm_plan->sortOperators,
-				  numsortkeys * sizeof(Oid)) == 0);
-	Assert(memcmp(collations, gm_plan->collations,
-				  numsortkeys * sizeof(Oid)) == 0);
-	Assert(memcmp(nullsFirst, gm_plan->nullsFirst,
-				  numsortkeys * sizeof(bool)) == 0);
 
 	/* Now, insert a Sort node if subplan isn't sufficiently ordered */
 	if (!pathkeys_contained_in(pathkeys, best_path->subpath->pathkeys))
-		subplan = (Plan *) make_sort(subplan, numsortkeys,
-									 sortColIdx, sortOperators,
-									 collations, nullsFirst);
+		subplan = (Plan *) make_sort(subplan, gm_plan->numCols,
+									 gm_plan->sortColIdx,
+									 gm_plan->sortOperators,
+									 gm_plan->collations,
+									 gm_plan->nullsFirst);
 
 	/* Now insert the subplan under GatherMerge. */
 	gm_plan->plan.lefttree = subplan;
#66Rushabh Lathia
rushabh.lathia@gmail.com
In reply to: Robert Haas (#65)
Re: Gather Merge

On Mon, Mar 13, 2017 at 10:56 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Fri, Mar 10, 2017 at 7:59 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

Error coming from create_gather_merge_plan() from below condition:

if (memcmp(sortColIdx, gm_plan->sortColIdx,
numsortkeys * sizeof(AttrNumber)) != 0)
elog(ERROR, "GatherMerge child's targetlist doesn't match
GatherMerge");

Above condition checks the sort column numbers explicitly, to ensure the
tlists
really do match up. This been copied from the create_merge_append_plan().
Now
this make sense as for MergeAppend as its not projection capable plan

(see

is_projection_capable_plan()). But like Gather, GatherMerge is the
projection
capable and its target list can be different from the subplan, so I don't
think this
condition make sense for the GatherMerge.

Here is the some one the debugging info, through which I was able to

reach

to this conclusion:

- targetlist for the GatherMerge and subpath is same during
create_gather_merge_path().

- targetlist for the GatherMerge is getting changed into
create_gather_merge_plan().

postgres=# explain (analyze, verbose) select t2.j from t1 JOIN t2 ON
t1.k=t2.k where t1.i=1 order by t1.j desc;
NOTICE: path parthtarget: {PATHTARGET :exprs ({VAR :varno 2 :varattno 2
:vartype 23 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnoold 2

:varoattno

2 :location 34} {VAR :varno 1 :varattno 2 :vartype 23 :vartypmod -1
:varcollid 0 :varlevelsup 0 :varnoold 1 :varoattno 2 :location 90})
:sortgrouprefs 0 1 :cost.startup 0.00 :cost.per_tuple 0.00 :width 8}

NOTICE: subpath parthtarget: {PATHTARGET :exprs ({VAR :varno 1

:varattno 2

:vartype 23 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnoold 1

:varoattno

2 :location 90} {VAR :varno 2 :varattno 2 :vartype 23 :vartypmod -1
:varcollid 0 :varlevelsup 0 :varnoold 2 :varoattno 2 :location 34})
:cost.startup 0.00 :cost.per_tuple 0.00 :width 8}

- Attached memory watch point and found that target list for GatherMerge

is

getting
changed into groupping_planner() -> apply_projection_to_path().

PFA patch to fix this issue.

I don't think this fix is correct, partly on theoretical grounds and
partly because I managed to make it crash. The problem is that
prepare_sort_from_pathkeys() actually alters the output tlist of Gather
Merge, which is inconsistent with the idea that Gather Merge can do
projection. It's going to produce whatever
prepare_sort_from_pathkeys() says it's going to produce, which may or
may not be what was there before. Using Kuntal's dump file and your
patch:

set min_parallel_table_scan_size = 0;
set parallel_setup_cost = 0;
set parallel_tuple_cost = 0;
set enable_sort = false;
set enable_bitmapscan = false;
alter table t1 alter column j type text;
select t2.i from t1 join t2 on t1.k=t2.k where t1.i=1 order by t1.j desc;

Crash. Abbreviated stack trace:

#0 pg_detoast_datum_packed (datum=0xbc) at fmgr.c:2176
#1 0x000000010160e707 in varstrfastcmp_locale (x=188, y=819,
ssup=0x7fe1ea06a568) at varlena.c:1997
#2 0x00000001013efc73 in ApplySortComparator [inlined] () at
/Users/rhaas/pgsql/src/include/utils/sortsupport.h:225
#3 0x00000001013efc73 in heap_compare_slots (a=<value temporarily
unavailable, due to optimizations>, b=<value temporarily unavailable,
due to optimizations>, arg=0x7fe1ea04e590) at sortsupport.h:681
#4 0x00000001014057b2 in sift_down (heap=0x7fe1ea079458,
node_off=<value temporarily unavailable, due to optimizations>) at
binaryheap.c:274
#5 0x000000010140573a in binaryheap_build (heap=0x7fe1ea079458) at
binaryheap.c:131
#6 0x00000001013ef771 in gather_merge_getnext [inlined] () at
/Users/rhaas/pgsql/src/backend/executor/nodeGatherMerge.c:421
#7 0x00000001013ef771 in ExecGatherMerge (node=0x7fe1ea04e590) at
nodeGatherMerge.c:240

Obviously, this is happening because we're trying to apply a
comparator for text to a value of type integer. I propose the
attached, slightly more involved fix, which rips out the first call to
prepare_sort_from_pathkeys() altogether.

Thanks Robert for the patch and the explanation.

I studied the patch and it looks right to me. I performed manual testing:
I ran the scripts which I created during the gather merge patch work, and also
ran the TPCH queries to make sure that it is all working well.

I haven't found any regression with these changes.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Rushabh Lathia

#67Robert Haas
robertmhaas@gmail.com
In reply to: Rushabh Lathia (#66)
Re: Gather Merge

On Tue, Mar 14, 2017 at 5:47 AM, Rushabh Lathia
<rushabh.lathia@gmail.com> wrote:

Thanks Robert for the patch and the explanation.

I studied the patch and that look right to me. I performed manual testing,
run the scripts which I created during the gather merge patch also run
the tpch queries to make sure that it all working good.

I haven't found any regression the that changes.

Cool, thanks for the review. I'm not quite confident that we've found
all of the bugs here yet, but I think we're moving in the right
direction.

Committed.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#68Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#67)
Re: Gather Merge

Robert Haas <robertmhaas@gmail.com> writes:

Cool, thanks for the review. I'm not quite confident that we've found
all of the bugs here yet, but I think we're moving in the right
direction.

I guess the real question here is why isn't Gather Merge more like
Append and MergeAppend? That is, why did you complicate matters
by making it projection capable? That seems like a pretty random
decision from here.

regards, tom lane


#69Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#68)
Re: Gather Merge

On Tue, Mar 14, 2017 at 9:56 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Robert Haas <robertmhaas@gmail.com> writes:

Cool, thanks for the review. I'm not quite confident that we've found
all of the bugs here yet, but I think we're moving in the right
direction.

I guess the real question here is why isn't Gather Merge more like
Append and MergeAppend? That is, why did you complicate matters
by making it projection capable? That seems like a pretty random
decision from here.

Well, then it would be inconsistent with plain old Gather. I thought
about going that route - ripping whatever projection logic Gather has
out and teaching the system that it's not projection-capable - but I
don't see what that buys us. It's pretty useful to be able to project
on top of Gather-type nodes, because they will often be at the top of
the plan, just before returning the results to the user.
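
(As an illustrative sketch of that point, assuming the pgbench_accounts table
used earlier in the thread and planner settings that favor a parallel plan: an
expression in the final target list can be evaluated in the Gather Merge node's
own target list, rather than by yet another node stacked on top of it.)

explain verbose
select aid, abalance + 1
from pgbench_accounts
where filler like '%foo%'
order by aid;
-- If the planner picks Gather Merge here, the "abalance + 1" expression can appear
-- in the Gather Merge node's Output, since the node is projection-capable.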

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
