a raft of parallelism-related bug fixes

Started by Robert Haas over 10 years ago, 79 messages

#1: Robert Haas <robertmhaas@gmail.com>
14 attachment(s)

My recent commit of the Gather executor node has made it relatively
simple to write code that does an end-to-end test of all of the
parallelism-related commits which have thus far gone into the tree.
Specifically, what I've done is hacked the planner to push a
single-copy Gather node on top of every plan that is thought to be
parallel-safe, and then run 'make check'. This turned up bugs in
nearly every parallelism-related commit that has thus far gone into
the tree, which is a little depressing, especially because some of
them are what we've taken to calling brown paper bag bugs. The good
news is that, with one or two exceptions, these are pretty much just
trivial oversights which are simple to fix, rather than any sort of
deeper design issue. Attached are 14 patches. Patches #1-#4 are
essential for testing purposes but are not proposed for commit,
although some of the code they contain may eventually become part of
other patches which are proposed for commit. Patches #5-#12 are
largely boring patches fixing fairly uninteresting mistakes; I propose
to commit these on an expedited basis. Patches #13-14 are also
proposed for commit but seem to me to be more in need of review. With
all of these patches, I can now get a clean 'make check' run, although
I think there are a few bugs remaining to be fixed because some of my
colleagues still experience misbehavior even with all of these patches
applied. The patch stack is also posted here; the branch is subject
to rebasing:

http://git.postgresql.org/gitweb/?p=users/rhaas/postgres.git;a=shortlog;h=refs/heads/gathertest

Here follows an overview of each individual patch (see also commit
messages within).

== For Testing Only ==

0001-Test-code.patch is the basic test code. In addition to pushing a
Gather node on top of apparently-safe parallel plans, it also ignores
that Gather node when generating EXPLAIN output and suppresses the
parallel context in error messages, changes which are essential to
getting the regression tests to pass. I'm wondering whether the
parallel error context ought to be GUC-controlled, defaulting to on
but capable of being suppressed on request.
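
For what it's worth, a minimal sketch of that idea might look like the
following (the GUC name is made up; the attached test patch currently
just compiles the errcontext() call out with #if 0):

    /* Hypothetical GUC, defaulting to showing the context. */
    bool    suppress_parallel_errcontext = false;

    static void
    ParallelErrorContext(void *arg)
    {
        if (!suppress_parallel_errcontext)
            errcontext("parallel worker, pid %d", *(int32 *) arg);
    }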

0002-contain_parallel_unsafe-check_parallel_safety.patch and
0003-Temporary-hack-to-reduce-testing-failures.patch arrange NOT to
put Gather nodes on top of plans that contain parallel-restricted
operations or refer to temporary tables. Although such things can
exist in a parallel plan, they must be above every Gather node, not
beneath it. Here, the Gather node is being placed (strictly for
testing purposes) at the very top, so we must not insert it at all if
these things are present.
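
Roughly speaking, the test hack boils down to this (a sketch, not
committed code; push_test_gather is an invented stand-in for the
planner change in patch 0001):

    /*
     * allow_restricted = false: refuse if the query contains anything
     * that is even parallel-restricted, since the forced Gather sits
     * at the very top of the plan and nothing restricted may appear
     * beneath it.
     */
    if (!check_parallel_safety((Node *) parse, false))
        push_test_gather(top_plan);     /* hypothetical helper */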

0004-Partial-group-locking-implementation.patch is a partial
implementation of group locking. I found that without this, the
regression tests hang frequently, and a clean run is impossible. This
patch doesn't modify the deadlock detector, and it doesn't take any
account of locks that should be mutually exclusive even as between
members of a parallel group, but it's enough for a clean regression
test run. We will need a full solution to this problem soon enough,
but right now I am only using this to find such unrelated bugs as we
may have.
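
Condensed from the attached patch, the new API is used like this:

    /* In the leader, before launching any parallel workers: */
    BecomeLockGroupLeader();

    /* In each worker, before acquiring any heavyweight lock: */
    if (!BecomeLockGroupMember(fps->parallel_master_pgproc,
                               fps->parallel_master_pid))
        return;     /* leader already exited, so just quit quietly */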

== Proposed For Commit ==

0005-Don-t-send-protocol-messages-to-a-shm_mq-that-no-lon.patch fixes
a problem in the parallel worker shutdown sequence: a background
worker can choose to redirect messages that would normally go to the
client into a shm_mq, and parallel workers always do this. But if the
worker generates a message after the DSM has been detached, it causes
a server crash.
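
The fix amounts to registering an on_dsm_detach callback that clears
the redirection state, so that any later protocol messages are quietly
dropped instead of being written into memory that is gone (condensed
from the attached patch):

    static void
    pq_cleanup_redirect_to_shm_mq(dsm_segment *seg, Datum arg)
    {
        pq_mq = NULL;
        pq_mq_handle = NULL;
        whereToSendOutput = DestNone;
    }

    /* ...registered from pq_redirect_to_shm_mq(): */
    on_dsm_detach(seg, pq_cleanup_redirect_to_shm_mq, (Datum) 0);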

0006-Transfer-current-command-counter-ID-to-parallel-work.patch fixes
a problem in the code used to set up a parallel worker's transaction
state. The command counter is presently not copied to the worker.
This is awfully embarrassing and should have been caught in the
testing of the parallel mode/contexts patch, but I got overly focused
on the stuff stored inside TransactionStateData. Don't shoot.
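
Concretely, the worker-side restore now reads as follows (abridged
from the attached patch), with the command counter tacked on after the
current XID:

    XactIsoLevel = (int) tstate[0];
    XactDeferrable = (bool) tstate[1];
    XactTopTransactionId = tstate[2];
    CurrentTransactionState->transactionId = tstate[3];
    currentCommandId = tstate[4];       /* previously never transferred */
    nParallelCurrentXids = (int) tstate[5];
    ParallelCurrentXids = &tstate[6];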

0007-Tighten-up-application-of-parallel-mode-checks.patch fixes
another problem with the parallel mode checks, which are intended to
catch people doing unsafe things and throw errors instead of letting
them crash the server. Investigation reveals that they don't have
this effect because parallel workers were running their pre-commit
sequence with the checks disabled. If they do something like try to
send notifications, it can lead to the worker getting an XID
assignment even though the master doesn't have one. That's really
bad, and crashes the server. That specific example should be
prohibited anyway (see patch #11) but even if we fix that I think this
is a good tightening to prevent unpleasant surprises in the future.
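
The changes themselves are small (excerpted from the attached patch):

    /* In AssignTransactionId(), and similarly in CommandCounterIncrement(): */
    if (IsInParallelMode() || IsParallelWorker())
        elog(ERROR, "cannot assign XIDs during a parallel operation");

    /* At the top of CommitTransaction(): */
    if (is_parallel_worker)
        EnterParallelMode();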

0008-Invalidate-caches-after-cranking-up-a-parallel-worke.patch
invalidates system cache entries after cranking up a
parallel worker transaction. This is needed here for the same reason
that the logical decoding code needs to do it after time traveling:
otherwise, the worker might have leftover entries in its caches as a
result of the startup transaction that are now bogus given the changes
in what it can see.
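
The worker-side fix is a single call placed right after the leader's
active snapshot has been restored (from the attached patch):

    PushActiveSnapshot(RestoreSnapshot(asnapspace));

    /*
     * We've changed which tuples we can see, and must therefore
     * invalidate the system caches.
     */
    InvalidateSystemCaches();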

0009-Fix-a-problem-with-parallel-workers-being-unable-to-.patch fixes
a problem with workers being unable to precisely recreate the
authorization state as it existed in the parallel leader. They need to
do that, or else it's a security vulnerability.
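
Abridged from the attached patch, the membership check in check_role()
is now bypassed while a worker is recreating the leader's state:

    if (!InitializingParallelWorker &&
        !is_member_of_role(GetSessionUserId(), roleid))
    {
        GUC_check_errcode(ERRCODE_INSUFFICIENT_PRIVILEGE);
        GUC_check_errmsg("permission denied to set role \"%s\"",
                         *newval);
        return false;
    }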

0010-Prohibit-parallel-query-when-the-isolation-level-is-.patch
prohibits parallel query at the serializable isolation level. This is
of course a restriction we'd rather not have, but it's a necessary one
for now because the predicate locking code doesn't understand the idea
of multiple processes with separate PGPROC structures being part of a
single transaction.
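
The enforcement is belt-and-suspenders: standard_planner() adds
!IsolationIsSerializable() to its parallelModeOK test, and
CreateParallelContext() additionally refuses to launch workers for a
previously generated parallel plan (from the attached patch):

    /*
     * If we are running under serializable isolation, we can't use
     * parallel workers, at least not until somebody enhances that
     * mechanism to be parallel-aware.
     */
    if (IsolationIsSerializable())
        nworkers = 0;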

0011-Mark-more-functions-parallel-restricted-or-parallel-.patch marks
as parallel-restricted or parallel-unsafe a number of functions that in
fact are, but were not so marked by the commit that introduced the new
pg_proc flag. This includes functions for sending notifications and a
few others.
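
For context, the safety walker from patch 0002 consumes those markings
as shown here, so a function that is wrongly labeled parallel-safe
sails straight past both tests:

    static bool
    parallel_too_dangerous(char proparallel, check_parallel_safety_arg *context)
    {
        if (context->allow_restricted)
            return proparallel == PROPARALLEL_UNSAFE;
        else
            return proparallel != PROPARALLEL_SAFE;
    }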

0012-Rewrite-interaction-of-parallel-mode-with-parallel-e.patch
rejiggers the timing of enabling and disabling parallel mode when we
are attempting parallel execution. The old coding turned out to be
fragile in multiple ways. Since it's impractical to know at planning
time whether ExecutorRun will be called with a non-zero tuple count, this
patch instead observes whether or not this happens, and if it does
happen, the parallel plan is forced to run serially. In the old
coding, it instead just killed the parallel workers at the end of
ExecutorRun and therefore returned an incomplete result set. There
might be some further rejiggering that could be done here that would
be even better than this, but I'm fairly certain this is better than
what we've got in the tree right now.
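
Purely as an illustration of the idea (the field and helper names
below are invented, not what the patch actually does; see the patch
for the real mechanics):

    /* In ExecutorRun(): note that the caller asked for a partial result. */
    if (count != 0)
        estate->es_saw_partial_fetch = true;    /* invented field name */

    /* In the Gather node: if that ever happened, don't use the workers. */
    if (estate->es_saw_partial_fetch)
        run_without_workers = true;             /* invented; execute locally */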

0013-Modify-tqueue-infrastructure-to-support-transient-re.patch
attempts to address a deficiency in the tqueue.c/tqueue.h machinery I
recently introduced: backends can have ephemeral record types for
which they use backend-local typmods that may not be the same between
the leader and the worker. This patch has the worker send metadata
about the tuple descriptor for each such type, and the leader
registers the same tuple descriptor and then remaps the typmods from
the worker's typmod space to its own. This seems to work, but I'm a
little concerned that there may be cases it doesn't cover. Also,
there's room to question the overall approach. The only other
alternative that springs readily to mind is to try to arrange things
during the planning phase so that we never try to pass records between
parallel backends in this way, but that seems like it would be hard to
code (and thus likely to have bugs) and also pretty limiting.
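
To make the typmod-remapping idea concrete, here is a rough sketch of
the leader side using the existing typcache machinery (the function
name and shape are invented for illustration; the real work happens in
tqueue.c):

    /*
     * Register a transient record type described by worker-sent metadata
     * and return the corresponding leader-local typmod.
     */
    static int32
    register_worker_record_type(int natts, Oid *atttypid, int32 *atttypmod)
    {
        TupleDesc   tupdesc = CreateTemplateTupleDesc(natts, false);
        int         i;

        for (i = 0; i < natts; i++)
            TupleDescInitEntry(tupdesc, i + 1, NULL,
                               atttypid[i], atttypmod[i], 0);

        /* Assigns a typmod in the leader's own typmod space. */
        assign_record_type_typmod(tupdesc);

        return tupdesc->tdtypmod;
    }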

0014-Fix-problems-with-ParamListInfo-serialization-mechan.patch, which
I just posted on the Parallel Seq Scan thread as a standalone patch,
fixes pretty much what the name of the file suggests. This actually
fixes two problems, one of which Noah spotted and commented on over on
that thread. By pure coincidence, the last 'make check' regression
failure I was still troubleshooting needed a fix for that issue plus a
fix to plpgsql_param_fetch. However, as I mentioned on the other
thread, I'm not quite sure which way to go with the change to
plpgsql_param_fetch so scrutiny of that point, in particular, would be
appreciated. See also
/messages/by-id/CA+TgmobN=wADVaUTwsH-xqvCdovkeRasuXw2k3R6vmpWig7raw@mail.gmail.com

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachments:

0004-Partial-group-locking-implementation.patch (application/x-patch)
From ea288aef684c084cafe649f3d8cfe1c928994770 Mon Sep 17 00:00:00 2001
From: Robert Haas <rhaas@postgresql.org>
Date: Sat, 3 Oct 2015 13:34:35 -0400
Subject: [PATCH 04/14] Partial group locking implementation.

This doesn't touch deadlock.c but it's enough to get the regression
tests working with stuff pushed under Gather nodes.
---
 src/backend/access/transam/parallel.c |  16 ++++
 src/backend/storage/lmgr/lock.c       | 123 ++++++++++++++++++++++++-----
 src/backend/storage/lmgr/proc.c       | 143 +++++++++++++++++++++++++++++++++-
 src/include/storage/lock.h            |   2 +-
 src/include/storage/proc.h            |   7 ++
 5 files changed, 267 insertions(+), 24 deletions(-)

diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c
index 3041dab..90735df 100644
--- a/src/backend/access/transam/parallel.c
+++ b/src/backend/access/transam/parallel.c
@@ -386,6 +386,9 @@ LaunchParallelWorkers(ParallelContext *pcxt)
 	if (pcxt->nworkers == 0)
 		return;
 
+	/* We need to be a lock group leader. */
+	BecomeLockGroupLeader();
+
 	/* If we do have workers, we'd better have a DSM segment. */
 	Assert(pcxt->seg != NULL);
 
@@ -889,6 +892,19 @@ ParallelWorkerMain(Datum main_arg)
 	 */
 
 	/*
+	 * Join locking group.  We must do this before anything that could try
+	 * to acquire a heavyweight lock, because any heavyweight locks acquired
+	 * to this point could block either directly against the parallel group
+	 * leader or against some process which in turn waits for a lock that
+	 * conflicts with the parallel group leader, causing an undetected
+	 * deadlock.  (If we can't join the lock group, the leader has gone away,
+	 * so just exit quietly.)
+	 */
+	if (!BecomeLockGroupMember(fps->parallel_master_pgproc,
+							   fps->parallel_master_pid))
+		return;
+
+	/*
 	 * Load libraries that were loaded by original backend.  We want to do
 	 * this before restoring GUCs, because the libraries might define custom
 	 * variables.
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index 76fc615..de6a05e 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -35,6 +35,7 @@
 #include "access/transam.h"
 #include "access/twophase.h"
 #include "access/twophase_rmgr.h"
+#include "access/xact.h"
 #include "access/xlog.h"
 #include "miscadmin.h"
 #include "pg_trace.h"
@@ -706,6 +707,7 @@ LockAcquireExtended(const LOCKTAG *locktag,
 	lockMethodTable = LockMethods[lockmethodid];
 	if (lockmode <= 0 || lockmode > lockMethodTable->numLockModes)
 		elog(ERROR, "unrecognized lock mode: %d", lockmode);
+	Assert(!IsInParallelMode() || MyProc->lockGroupLeader != NULL);
 
 	if (RecoveryInProgress() && !InRecovery &&
 		(locktag->locktag_type == LOCKTAG_OBJECT ||
@@ -1136,6 +1138,18 @@ SetupLockInTable(LockMethod lockMethodTable, PGPROC *proc,
 	{
 		uint32		partition = LockHashPartition(hashcode);
 
+		/*
+		 * It might seem unsafe to access proclock->groupLeader without a lock,
+		 * but it's not really.  Either we are initializing a proclock on our
+		 * own behalf, in which case our group leader isn't changing because
+		 * the group leader for a process can only ever be changed by the
+		 * process itself; or else we are transferring a fast-path lock to the
+		 * main lock table, in which case that process can't change its lock
+		 * group leader without first releasing all of its locks (and in
+		 * particular the one we are currently transferring).
+		 */
+		proclock->groupLeader = proc->lockGroupLeader != NULL ?
+			proc->lockGroupLeader : proc;
 		proclock->holdMask = 0;
 		proclock->releaseMask = 0;
 		/* Add proclock to appropriate lists */
@@ -1255,9 +1269,10 @@ RemoveLocalLock(LOCALLOCK *locallock)
  * NOTES:
  *		Here's what makes this complicated: one process's locks don't
  * conflict with one another, no matter what purpose they are held for
- * (eg, session and transaction locks do not conflict).
- * So, we must subtract off our own locks when determining whether the
- * requested new lock conflicts with those already held.
+ * (eg, session and transaction locks do not conflict).  Nor do the locks
+ * of one process in a lock group conflict with those of another process in
+ * the same group.  So, we must subtract off these locks when determining
+ * whether the requested new lock conflicts with those already held.
  */
 int
 LockCheckConflicts(LockMethod lockMethodTable,
@@ -1267,8 +1282,12 @@ LockCheckConflicts(LockMethod lockMethodTable,
 {
 	int			numLockModes = lockMethodTable->numLockModes;
 	LOCKMASK	myLocks;
-	LOCKMASK	otherLocks;
+	int			conflictMask = lockMethodTable->conflictTab[lockmode];
+	int			conflictsRemaining[MAX_LOCKMODES];
+	int			totalConflictsRemaining = 0;
 	int			i;
+	SHM_QUEUE  *procLocks;
+	PROCLOCK   *otherproclock;
 
 	/*
 	 * first check for global conflicts: If no locks conflict with my request,
@@ -1279,40 +1298,91 @@ LockCheckConflicts(LockMethod lockMethodTable,
 	 * type of lock that conflicts with request.   Bitwise compare tells if
 	 * there is a conflict.
 	 */
-	if (!(lockMethodTable->conflictTab[lockmode] & lock->grantMask))
+	if (!(conflictMask & lock->grantMask))
 	{
 		PROCLOCK_PRINT("LockCheckConflicts: no conflict", proclock);
 		return STATUS_OK;
 	}
 
 	/*
-	 * Rats.  Something conflicts.  But it could still be my own lock. We have
-	 * to construct a conflict mask that does not reflect our own locks, but
-	 * only lock types held by other processes.
+	 * Rats.  Something conflicts.  But it could still be my own lock, or
+	 * a lock held by another member of my locking group.  First, figure out
+	 * how many conflicts remain after subtracting out any locks I hold
+	 * myself.
 	 */
 	myLocks = proclock->holdMask;
-	otherLocks = 0;
 	for (i = 1; i <= numLockModes; i++)
 	{
-		int			myHolding = (myLocks & LOCKBIT_ON(i)) ? 1 : 0;
+		if ((conflictMask & LOCKBIT_ON(i)) == 0)
+		{
+			conflictsRemaining[i] = 0;
+			continue;
+		}
+		conflictsRemaining[i] = lock->granted[i];
+		if (myLocks & LOCKBIT_ON(i))
+			--conflictsRemaining[i];
+		totalConflictsRemaining += conflictsRemaining[i];
+	}
 
-		if (lock->granted[i] > myHolding)
-			otherLocks |= LOCKBIT_ON(i);
+	/* If no conflicts remain, we get the lock. */
+	if (totalConflictsRemaining == 0)
+	{
+		PROCLOCK_PRINT("LockCheckConflicts: resolved (simple)", proclock);
+		return STATUS_OK;
+	}
+
+	/* If no group locking, it's definitely a conflict. */
+	if (proclock->groupLeader == MyProc && MyProc->lockGroupLeader == NULL)
+	{
+		Assert(proclock->tag.myProc == MyProc);
+		PROCLOCK_PRINT("LockCheckConflicts: conflicting (simple)",
+					   proclock);
+		return STATUS_FOUND;
 	}
 
 	/*
-	 * now check again for conflicts.  'otherLocks' describes the types of
-	 * locks held by other processes.  If one of these conflicts with the kind
-	 * of lock that I want, there is a conflict and I have to sleep.
+	 * Locks held in conflicting modes by members of our own lock group are
+	 * not real conflicts; we can subtract those out and see if we still have
+	 * a conflict.  This is O(N) in the number of processes holding or awaiting
+	 * locks on this object.  We could improve that by making the shared memory
+	 * state more complex (and larger) but it doesn't seem worth it.
 	 */
-	if (!(lockMethodTable->conflictTab[lockmode] & otherLocks))
+	procLocks = &(lock->procLocks);
+	otherproclock = (PROCLOCK *)
+		SHMQueueNext(procLocks, procLocks, offsetof(PROCLOCK, lockLink));
+	while (otherproclock != NULL)
 	{
-		/* no conflict. OK to get the lock */
-		PROCLOCK_PRINT("LockCheckConflicts: resolved", proclock);
-		return STATUS_OK;
+		if (proclock != otherproclock &&
+			proclock->groupLeader == otherproclock->groupLeader &&
+			(otherproclock->holdMask & conflictMask) != 0)
+		{
+			int	intersectMask = otherproclock->holdMask & conflictMask;
+
+			for (i = 1; i <= numLockModes; i++)
+			{
+				if ((intersectMask & LOCKBIT_ON(i)) != 0)
+				{
+					if (conflictsRemaining[i] <= 0)
+						elog(PANIC, "proclocks held do not match lock");
+					conflictsRemaining[i]--;
+					totalConflictsRemaining--;
+				}
+			}
+
+			if (totalConflictsRemaining == 0)
+			{
+				PROCLOCK_PRINT("LockCheckConflicts: resolved (group)",
+							   proclock);
+				return STATUS_OK;
+			}
+		}
+		otherproclock = (PROCLOCK *)
+			SHMQueueNext(procLocks, &otherproclock->lockLink,
+						 offsetof(PROCLOCK, lockLink));
 	}
 
-	PROCLOCK_PRINT("LockCheckConflicts: conflicting", proclock);
+	/* Nope, it's a real conflict. */
+	PROCLOCK_PRINT("LockCheckConflicts: conflicting (group)", proclock);
 	return STATUS_FOUND;
 }
 
@@ -3095,6 +3165,10 @@ PostPrepare_Locks(TransactionId xid)
 	PROCLOCKTAG proclocktag;
 	int			partition;
 
+	/* Can't prepare a lock group follower. */
+	Assert(MyProc->lockGroupLeader == NULL ||
+		   MyProc->lockGroupLeader == MyProc);
+
 	/* This is a critical section: any error means big trouble */
 	START_CRIT_SECTION();
 
@@ -3239,6 +3313,13 @@ PostPrepare_Locks(TransactionId xid)
 			proclocktag.myProc = newproc;
 
 			/*
+			 * Update groupLeader pointer to point to the new proc.  (We'd
+			 * better not be a member of somebody else's lock group!)
+			 */
+			Assert(proclock->groupLeader == proclock->tag.myProc);
+			proclock->groupLeader = newproc;
+
+			/*
 			 * Update the proclock.  We should not find any existing entry for
 			 * the same hash key, since there can be only one entry for any
 			 * given lock with my own proc.
@@ -3785,6 +3866,8 @@ lock_twophase_recover(TransactionId xid, uint16 info,
 	 */
 	if (!found)
 	{
+		Assert(proc->lockGroupLeader == NULL);
+		proclock->groupLeader = proc;
 		proclock->holdMask = 0;
 		proclock->releaseMask = 0;
 		/* Add proclock to appropriate lists */
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 2c2535b..2d55626 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -399,6 +399,11 @@ InitProcess(void)
 	MyProc->backendLatestXid = InvalidTransactionId;
 	pg_atomic_init_u32(&MyProc->nextClearXidElem, INVALID_PGPROCNO);
 
+	/* Check that group locking fields are in a proper initial state. */
+	Assert(MyProc->lockGroupLeaderIdentifier == 0);
+	Assert(MyProc->lockGroupLeader == NULL);
+	Assert(MyProc->lockGroupSize == 0);
+
 	/*
 	 * Acquire ownership of the PGPROC's latch, so that we can use WaitLatch
 	 * on it.  That allows us to repoint the process latch, which so far
@@ -558,6 +563,11 @@ InitAuxiliaryProcess(void)
 	OwnLatch(&MyProc->procLatch);
 	SwitchToSharedLatch();
 
+	/* Check that group locking fields are in a proper initial state. */
+	Assert(MyProc->lockGroupLeaderIdentifier == 0);
+	Assert(MyProc->lockGroupLeader == NULL);
+	Assert(MyProc->lockGroupSize == 0);
+
 	/*
 	 * We might be reusing a semaphore that belonged to a failed process. So
 	 * be careful and reinitialize its value here.  (This is not strictly
@@ -803,6 +813,33 @@ ProcKill(int code, Datum arg)
 	if (MyReplicationSlot != NULL)
 		ReplicationSlotRelease();
 
+	/* Detach from any lock group of which we are a member. */
+	if (MyProc->lockGroupLeader != NULL)
+	{
+		PGPROC	   *leader = MyProc->lockGroupLeader;
+
+		LWLockAcquire(leader->backendLock, LW_EXCLUSIVE);
+		Assert(leader->lockGroupSize > 0);
+		if (--leader->lockGroupSize == 0)
+		{
+			leader->lockGroupLeaderIdentifier = 0;
+			leader->lockGroupLeader = NULL;
+			if (leader != MyProc)
+			{
+				procgloballist = leader->procgloballist;
+
+				/* Leader exited first; return its PGPROC. */
+				SpinLockAcquire(ProcStructLock);
+				leader->links.next = (SHM_QUEUE *) *procgloballist;
+				*procgloballist = leader;
+				SpinLockRelease(ProcStructLock);
+			}
+		}
+		else if (leader != MyProc)
+			MyProc->lockGroupLeader = NULL;
+		LWLockRelease(leader->backendLock);
+	}
+
 	/*
 	 * Reset MyLatch to the process local one.  This is so that signal
 	 * handlers et al can continue using the latch after the shared latch
@@ -817,9 +854,20 @@ ProcKill(int code, Datum arg)
 	procgloballist = proc->procgloballist;
 	SpinLockAcquire(ProcStructLock);
 
-	/* Return PGPROC structure (and semaphore) to appropriate freelist */
-	proc->links.next = (SHM_QUEUE *) *procgloballist;
-	*procgloballist = proc;
+	/*
+	 * If we're still a member of a locking group, that means we're a leader
+	 * which has somehow exited before its children.  The last remaining child
+	 * will release our PGPROC.  Otherwise, release it now.
+	 */
+	if (proc->lockGroupLeader == NULL)
+	{
+		/* Since lockGroupLeader is NULL, lockGroupSize should be 0. */
+		Assert(proc->lockGroupSize == 0);
+
+		/* Return PGPROC structure (and semaphore) to appropriate freelist */
+		proc->links.next = (SHM_QUEUE *) *procgloballist;
+		*procgloballist = proc;
+	}
 
 	/* Update shared estimate of spins_per_delay */
 	procglobal->spins_per_delay = update_spins_per_delay(procglobal->spins_per_delay);
@@ -952,9 +1000,31 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable)
 	bool		allow_autovacuum_cancel = true;
 	int			myWaitStatus;
 	PGPROC	   *proc;
+	PGPROC	   *leader = MyProc->lockGroupLeader;
 	int			i;
 
 	/*
+	 * If group locking is in use, locks held by members of my locking group
+	 * need to be included in myHeldLocks.
+	 */
+	if (leader != NULL)
+	{
+		SHM_QUEUE  *procLocks = &(lock->procLocks);
+		PROCLOCK   *otherproclock;
+
+		otherproclock = (PROCLOCK *)
+			SHMQueueNext(procLocks, procLocks, offsetof(PROCLOCK, lockLink));
+		while (otherproclock != NULL)
+		{
+			if (otherproclock->groupLeader == leader)
+				myHeldLocks |= otherproclock->holdMask;
+			otherproclock = (PROCLOCK *)
+				SHMQueueNext(procLocks, &otherproclock->lockLink,
+							 offsetof(PROCLOCK, lockLink));
+		}
+	}
+
+	/*
 	 * Determine where to add myself in the wait queue.
 	 *
 	 * Normally I should go at the end of the queue.  However, if I already
@@ -978,6 +1048,15 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable)
 		proc = (PGPROC *) waitQueue->links.next;
 		for (i = 0; i < waitQueue->size; i++)
 		{
+			/*
+			 * If we're part of the same locking group as this waiter, its
+			 * locks neither conflict with ours nor contribute to aheadRequests.
+			 */
+			if (leader != NULL && leader == proc->lockGroupLeader)
+			{
+				proc = (PGPROC *) proc->links.next;
+				continue;
+			}
 			/* Must he wait for me? */
 			if (lockMethodTable->conflictTab[proc->waitLockMode] & myHeldLocks)
 			{
@@ -1671,3 +1750,61 @@ ProcSendSignal(int pid)
 		SetLatch(&proc->procLatch);
 	}
 }
+
+/*
+ * BecomeLockGroupLeader - designate process as lock group leader
+ *
+ * Once this function has returned, other processes can join the lock group
+ * by calling BecomeLockGroupMember.
+ */
+void
+BecomeLockGroupLeader(void)
+{
+	/* If we already did it, we don't need to do it again. */
+	if (MyProc->lockGroupLeader == MyProc)
+		return;
+
+	/* We had better not be a follower. */
+	Assert(MyProc->lockGroupLeader == NULL);
+
+	/* Create single-member group, containing only ourselves. */
+	LWLockAcquire(MyProc->backendLock, LW_EXCLUSIVE);
+	MyProc->lockGroupLeader = MyProc;
+	MyProc->lockGroupLeaderIdentifier = MyProcPid;
+	MyProc->lockGroupSize = 1;
+	LWLockRelease(MyProc->backendLock);
+}
+
+/*
+ * BecomeLockGroupMember - designate process as lock group member
+ *
+ * This is pretty straightforward except for the possibility that the leader
+ * whose group we're trying to join might exit before we manage to do so;
+ * and the PGPROC might get recycled for an unrelated process.  To avoid
+ * that, we require the caller to pass the PID of the intended PGPROC as
+ * an interlock.  Returns true if we successfully join the intended lock
+ * group, and false if not.
+ */
+bool
+BecomeLockGroupMember(PGPROC *leader, int pid)
+{
+	bool	ok = false;
+
+	/* Group leader can't become member of group */
+	Assert(MyProc != leader);
+
+	/* PID must be valid. */
+	Assert(pid != 0);
+
+	/* Try to join the group. */
+	LWLockAcquire(leader->backendLock, LW_EXCLUSIVE);
+	if (leader->lockGroupLeaderIdentifier == pid)
+	{
+		ok = true;
+		leader->lockGroupSize++;
+		MyProc->lockGroupLeader = leader;
+	}
+	LWLockRelease(leader->backendLock);
+
+	return ok;
+}
diff --git a/src/include/storage/lock.h b/src/include/storage/lock.h
index a9cd08c..fa81003 100644
--- a/src/include/storage/lock.h
+++ b/src/include/storage/lock.h
@@ -346,6 +346,7 @@ typedef struct PROCLOCK
 	PROCLOCKTAG tag;			/* unique identifier of proclock object */
 
 	/* data */
+	PGPROC	   *groupLeader;	/* group leader, or NULL if no lock group */
 	LOCKMASK	holdMask;		/* bitmask for lock types currently held */
 	LOCKMASK	releaseMask;	/* bitmask for lock types to be released */
 	SHM_QUEUE	lockLink;		/* list link in LOCK's list of proclocks */
@@ -457,7 +458,6 @@ typedef enum
 								 * worker */
 } DeadLockState;
 
-
 /*
  * The lockmgr's shared hash tables are partitioned to reduce contention.
  * To determine which partition a given locktag belongs to, compute the tag's
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index 3d68017..591e4ae 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -155,6 +155,10 @@ struct PGPROC
 	bool		fpVXIDLock;		/* are we holding a fast-path VXID lock? */
 	LocalTransactionId fpLocalTransactionId;	/* lxid for fast-path VXID
 												 * lock */
+	/* Support for lock groups. */
+	int			lockGroupLeaderIdentifier;	/* MyProcPid, if I'm a leader */
+	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a follower */
+	int			lockGroupSize;		/* # of members, if I'm a leader */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
@@ -272,4 +276,7 @@ extern void LockErrorCleanup(void);
 extern void ProcWaitForSignal(void);
 extern void ProcSendSignal(int pid);
 
+extern void BecomeLockGroupLeader(void);
+extern bool BecomeLockGroupMember(PGPROC *leader, int pid);
+
 #endif   /* PROC_H */
-- 
2.3.8 (Apple Git-58)

0005-Don-t-send-protocol-messages-to-a-shm_mq-that-no-lon.patch (application/x-patch)
From dc9bc8dfe6af880343db930ceb6b13a67451b4e4 Mon Sep 17 00:00:00 2001
From: Robert Haas <rhaas@postgresql.org>
Date: Mon, 5 Oct 2015 13:04:10 -0400
Subject: [PATCH 05/14] Don't send protocol messages to a shm_mq that no longer
 exists.

Commit 2bd9e412f92bc6a68f3e8bcb18e04955cc35001d introduced a mechanism
for relaying protocol messages from a background worker to another
backend via a shm_mq.  However, there was no provision for shutting
down the communication channel.  Therefore, a protocol message sent
late in the shutdown sequence, such as a DEBUG message resulting from
cranking up log_min_messages, could crash the server.  To fix, install
an on_dsm_detach callback that disables sending messages to the shm_mq
when the associated DSM is detached.
---
 src/backend/access/transam/parallel.c |  2 +-
 src/backend/libpq/pqmq.c              | 28 ++++++++++++++++++++++++++--
 src/include/libpq/pqmq.h              |  2 +-
 3 files changed, 28 insertions(+), 4 deletions(-)

diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c
index 90735df..3b87312 100644
--- a/src/backend/access/transam/parallel.c
+++ b/src/backend/access/transam/parallel.c
@@ -870,7 +870,7 @@ ParallelWorkerMain(Datum main_arg)
 					 ParallelWorkerNumber * PARALLEL_ERROR_QUEUE_SIZE);
 	shm_mq_set_sender(mq, MyProc);
 	mqh = shm_mq_attach(mq, seg, NULL);
-	pq_redirect_to_shm_mq(mq, mqh);
+	pq_redirect_to_shm_mq(seg, mqh);
 	pq_set_parallel_master(fps->parallel_master_pid,
 						   fps->parallel_master_backend_id);
 
diff --git a/src/backend/libpq/pqmq.c b/src/backend/libpq/pqmq.c
index 9ca6b7c..0a3c2b7 100644
--- a/src/backend/libpq/pqmq.c
+++ b/src/backend/libpq/pqmq.c
@@ -26,6 +26,7 @@ static bool pq_mq_busy = false;
 static pid_t pq_mq_parallel_master_pid = 0;
 static pid_t pq_mq_parallel_master_backend_id = InvalidBackendId;
 
+static void pq_cleanup_redirect_to_shm_mq(dsm_segment *seg, Datum arg);
 static void mq_comm_reset(void);
 static int	mq_flush(void);
 static int	mq_flush_if_writable(void);
@@ -51,13 +52,26 @@ static PQcommMethods PqCommMqMethods = {
  * message queue.
  */
 void
-pq_redirect_to_shm_mq(shm_mq *mq, shm_mq_handle *mqh)
+pq_redirect_to_shm_mq(dsm_segment *seg, shm_mq_handle *mqh)
 {
 	PqCommMethods = &PqCommMqMethods;
-	pq_mq = mq;
+	pq_mq = shm_mq_get_queue(mqh);
 	pq_mq_handle = mqh;
 	whereToSendOutput = DestRemote;
 	FrontendProtocol = PG_PROTOCOL_LATEST;
+	on_dsm_detach(seg, pq_cleanup_redirect_to_shm_mq, (Datum) 0);
+}
+
+/*
+ * When the DSM that contains our shm_mq goes away, we need to stop sending
+ * messages to it.
+ */
+static void
+pq_cleanup_redirect_to_shm_mq(dsm_segment *seg, Datum arg)
+{
+	pq_mq = NULL;
+	pq_mq_handle = NULL;
+	whereToSendOutput = DestNone;
 }
 
 /*
@@ -123,9 +137,19 @@ mq_putmessage(char msgtype, const char *s, size_t len)
 		if (pq_mq != NULL)
 			shm_mq_detach(pq_mq);
 		pq_mq = NULL;
+		pq_mq_handle = NULL;
 		return EOF;
 	}
 
+	/*
+	 * If the message queue is already gone, just ignore the message. This
+	 * doesn't necessarily indicate a problem; for example, DEBUG messages
+	 * can be generated late in the shutdown sequence, after all DSMs have
+	 * already been detached.
+	 */
+	if (pq_mq == NULL)
+		return 0;
+
 	pq_mq_busy = true;
 
 	iov[0].data = &msgtype;
diff --git a/src/include/libpq/pqmq.h b/src/include/libpq/pqmq.h
index 9017565..97f17da 100644
--- a/src/include/libpq/pqmq.h
+++ b/src/include/libpq/pqmq.h
@@ -16,7 +16,7 @@
 #include "lib/stringinfo.h"
 #include "storage/shm_mq.h"
 
-extern void pq_redirect_to_shm_mq(shm_mq *, shm_mq_handle *);
+extern void pq_redirect_to_shm_mq(dsm_segment *seg, shm_mq_handle *mqh);
 extern void pq_set_parallel_master(pid_t pid, BackendId backend_id);
 
 extern void pq_parse_errornotice(StringInfo str, ErrorData *edata);
-- 
2.3.8 (Apple Git-58)

0006-Transfer-current-command-counter-ID-to-parallel-work.patch (application/x-patch)
From 0dad69fa294e1704fc9c3e9a7c8c890c51b3fa33 Mon Sep 17 00:00:00 2001
From: Robert Haas <rhaas@postgresql.org>
Date: Mon, 5 Oct 2015 18:09:02 -0400
Subject: [PATCH 06/14] Transfer current command counter ID to parallel
 workers.

Commit 924bcf4f16d54c55310b28f77686608684734f42 correctly forbade
parallel workers to modify the command counter while in parallel mode,
but it inexplicably neglected to actually transfer the current command
counter from leader to workers.  This can result in the workers seeing
a different set of tuples from the master, which is bad.  Repair.
---
 src/backend/access/transam/xact.c | 46 +++++++++++++++++++++------------------
 1 file changed, 25 insertions(+), 21 deletions(-)

diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c
index e8aafba..3e24800 100644
--- a/src/backend/access/transam/xact.c
+++ b/src/backend/access/transam/xact.c
@@ -4786,8 +4786,8 @@ Size
 EstimateTransactionStateSpace(void)
 {
 	TransactionState s;
-	Size		nxids = 5;		/* iso level, deferrable, top & current XID,
-								 * XID count */
+	Size		nxids = 6;		/* iso level, deferrable, top & current XID,
+								 * command counter, XID count */
 
 	for (s = CurrentTransactionState; s != NULL; s = s->parent)
 	{
@@ -4807,12 +4807,13 @@ EstimateTransactionStateSpace(void)
  *
  * We need to save and restore XactDeferrable, XactIsoLevel, and the XIDs
  * associated with this transaction.  The first eight bytes of the result
- * contain XactDeferrable and XactIsoLevel; the next eight bytes contain the
- * XID of the top-level transaction and the XID of the current transaction
- * (or, in each case, InvalidTransactionId if none).  After that, the next 4
- * bytes contain a count of how many additional XIDs follow; this is followed
- * by all of those XIDs one after another.  We emit the XIDs in sorted order
- * for the convenience of the receiving process.
+ * contain XactDeferrable and XactIsoLevel; the next twelve bytes contain the
+ * XID of the top-level transaction, the XID of the current transaction
+ * (or, in each case, InvalidTransactionId if none), and the current command
+ * counter.  After that, the next 4 bytes contain a count of how many
+ * additional XIDs follow; this is followed by all of those XIDs one after
+ * another.  We emit the XIDs in sorted order for the convenience of the
+ * receiving process.
  */
 void
 SerializeTransactionState(Size maxsize, char *start_address)
@@ -4820,14 +4821,16 @@ SerializeTransactionState(Size maxsize, char *start_address)
 	TransactionState s;
 	Size		nxids = 0;
 	Size		i = 0;
+	Size		c = 0;
 	TransactionId *workspace;
 	TransactionId *result = (TransactionId *) start_address;
 
-	Assert(maxsize >= 5 * sizeof(TransactionId));
-	result[0] = (TransactionId) XactIsoLevel;
-	result[1] = (TransactionId) XactDeferrable;
-	result[2] = XactTopTransactionId;
-	result[3] = CurrentTransactionState->transactionId;
+	result[c++] = (TransactionId) XactIsoLevel;
+	result[c++] = (TransactionId) XactDeferrable;
+	result[c++] = XactTopTransactionId;
+	result[c++] = CurrentTransactionState->transactionId;
+	result[c++] = (TransactionId) currentCommandId;
+	Assert(maxsize >= c * sizeof(TransactionId));
 
 	/*
 	 * If we're running in a parallel worker and launching a parallel worker
@@ -4836,9 +4839,9 @@ SerializeTransactionState(Size maxsize, char *start_address)
 	 */
 	if (nParallelCurrentXids > 0)
 	{
-		Assert(maxsize > (nParallelCurrentXids + 4) * sizeof(TransactionId));
-		result[4] = nParallelCurrentXids;
-		memcpy(&result[5], ParallelCurrentXids,
+		result[c++] = nParallelCurrentXids;
+		Assert(maxsize >= (nParallelCurrentXids + c) * sizeof(TransactionId));
+		memcpy(&result[c], ParallelCurrentXids,
 			   nParallelCurrentXids * sizeof(TransactionId));
 		return;
 	}
@@ -4853,7 +4856,7 @@ SerializeTransactionState(Size maxsize, char *start_address)
 			nxids = add_size(nxids, 1);
 		nxids = add_size(nxids, s->nChildXids);
 	}
-	Assert(nxids * sizeof(TransactionId) < maxsize);
+	Assert((c + 1 + nxids) * sizeof(TransactionId) <= maxsize);
 
 	/* Copy them to our scratch space. */
 	workspace = palloc(nxids * sizeof(TransactionId));
@@ -4871,8 +4874,8 @@ SerializeTransactionState(Size maxsize, char *start_address)
 	qsort(workspace, nxids, sizeof(TransactionId), xidComparator);
 
 	/* Copy data into output area. */
-	result[4] = (TransactionId) nxids;
-	memcpy(&result[5], workspace, nxids * sizeof(TransactionId));
+	result[c++] = (TransactionId) nxids;
+	memcpy(&result[c], workspace, nxids * sizeof(TransactionId));
 }
 
 /*
@@ -4892,8 +4895,9 @@ StartParallelWorkerTransaction(char *tstatespace)
 	XactDeferrable = (bool) tstate[1];
 	XactTopTransactionId = tstate[2];
 	CurrentTransactionState->transactionId = tstate[3];
-	nParallelCurrentXids = (int) tstate[4];
-	ParallelCurrentXids = &tstate[5];
+	currentCommandId = tstate[4];
+	nParallelCurrentXids = (int) tstate[5];
+	ParallelCurrentXids = &tstate[6];
 
 	CurrentTransactionState->blockState = TBLOCK_PARALLEL_INPROGRESS;
 }
-- 
2.3.8 (Apple Git-58)

0007-Tighten-up-application-of-parallel-mode-checks.patch (application/x-patch)
From b7e8d5f88e5d8334ed7ef75d21f9b3599201b06f Mon Sep 17 00:00:00 2001
From: Robert Haas <rhaas@postgresql.org>
Date: Fri, 2 Oct 2015 19:12:18 -0400
Subject: [PATCH 07/14] Tighten up application of parallel mode checks.

Commit 924bcf4f16d54c55310b28f77686608684734f42 failed to enforce
parallel mode checks during the commit of a parallel worker, because
we exited parallel mode prior to ending the transaction so that we
could pop the active snapshot.  Re-establish parallel mode during
parallel worker commit.  Without this, it's far too easy for unsafe
actions during the pre-commit sequence to crash the server instead of
hitting the error checks as intended.

Just to be extra paranoid, adjust a couple of the sanity checks in
xact.c to check not only IsInParallelMode() but also
IsParallelWorker().
---
 src/backend/access/transam/xact.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c
index 3e24800..47312f6 100644
--- a/src/backend/access/transam/xact.c
+++ b/src/backend/access/transam/xact.c
@@ -497,7 +497,7 @@ AssignTransactionId(TransactionState s)
 	 * Workers synchronize transaction state at the beginning of each parallel
 	 * operation, so we can't account for new XIDs at this point.
 	 */
-	if (IsInParallelMode())
+	if (IsInParallelMode() || IsParallelWorker())
 		elog(ERROR, "cannot assign XIDs during a parallel operation");
 
 	/*
@@ -931,7 +931,7 @@ CommandCounterIncrement(void)
 		 * parallel operation, so we can't account for new commands after that
 		 * point.
 		 */
-		if (IsInParallelMode())
+		if (IsInParallelMode() || IsParallelWorker())
 			elog(ERROR, "cannot start commands during a parallel operation");
 
 		currentCommandId += 1;
@@ -1927,6 +1927,10 @@ CommitTransaction(void)
 
 	is_parallel_worker = (s->blockState == TBLOCK_PARALLEL_INPROGRESS);
 
+	/* Enforce parallel mode restrictions during parallel worker commit. */
+	if (is_parallel_worker)
+		EnterParallelMode();
+
 	ShowTransactionState("CommitTransaction");
 
 	/*
@@ -1971,10 +1975,7 @@ CommitTransaction(void)
 
 	/* If we might have parallel workers, clean them up now. */
 	if (IsInParallelMode())
-	{
 		AtEOXact_Parallel(true);
-		s->parallelModeLevel = 0;
-	}
 
 	/* Shut down the deferred-trigger manager */
 	AfterTriggerEndXact(true);
@@ -2013,6 +2014,7 @@ CommitTransaction(void)
 	 * commit processing
 	 */
 	s->state = TRANS_COMMIT;
+	s->parallelModeLevel = 0;
 
 	if (!is_parallel_worker)
 	{
-- 
2.3.8 (Apple Git-58)

0008-Invalidate-caches-after-cranking-up-a-parallel-worke.patch (application/x-patch)
From 2ef2d8d91d9cb455cf5b41b0c0f4ef273ac3fdd5 Mon Sep 17 00:00:00 2001
From: Robert Haas <rhaas@postgresql.org>
Date: Sat, 3 Oct 2015 17:45:38 -0400
Subject: [PATCH 08/14] Invalidate caches after cranking up a parallel worker
 transaction.

Starting a parallel worker transaction changes our notion of which XIDs
are in-progress or committed, and our notion of the current command
counter ID.  Therefore, our view of these caches prior to starting
this transaction may no longer be valid.  Defend against that by clearing
them.

This fixes a bug in commit 924bcf4f16d54c55310b28f77686608684734f42.
---
 src/backend/access/transam/parallel.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c
index 3b87312..a553dca 100644
--- a/src/backend/access/transam/parallel.c
+++ b/src/backend/access/transam/parallel.c
@@ -28,6 +28,7 @@
 #include "tcop/tcopprot.h"
 #include "utils/combocid.h"
 #include "utils/guc.h"
+#include "utils/inval.h"
 #include "utils/memutils.h"
 #include "utils/resowner.h"
 #include "utils/snapmgr.h"
@@ -944,6 +945,12 @@ ParallelWorkerMain(Datum main_arg)
 	Assert(asnapspace != NULL);
 	PushActiveSnapshot(RestoreSnapshot(asnapspace));
 
+	/*
+	 * We've changed which tuples we can see, and must therefore invalidate
+	 * system caches.
+	 */
+	InvalidateSystemCaches();
+
 	/* Restore user ID and security context. */
 	SetUserIdAndSecContext(fps->current_user_id, fps->sec_context);
 
-- 
2.3.8 (Apple Git-58)

0009-Fix-a-problem-with-parallel-workers-being-unable-to-.patch (application/x-patch)
From 4ac3ae2e4773da358a73a7831d9fff2cb5f4a8cd Mon Sep 17 00:00:00 2001
From: Robert Haas <rhaas@postgresql.org>
Date: Mon, 5 Oct 2015 12:19:32 -0400
Subject: [PATCH 09/14] Fix a problem with parallel workers being unable to
 restore role.

check_role() tries to verify that the user has permission to become the
requested role, but this is inappropriate in a parallel worker, which
needs to exactly recreate the master's authorization settings.  So skip
the check in that case.

This fixes a bug in commit 924bcf4f16d54c55310b28f77686608684734f42.
---
 src/backend/access/transam/parallel.c | 7 +++++++
 src/backend/commands/variable.c       | 8 ++++++--
 src/include/access/parallel.h         | 1 +
 3 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c
index a553dca..3c92a28 100644
--- a/src/backend/access/transam/parallel.c
+++ b/src/backend/access/transam/parallel.c
@@ -96,6 +96,9 @@ int			ParallelWorkerNumber = -1;
 /* Is there a parallel message pending which we need to receive? */
 bool		ParallelMessagePending = false;
 
+/* Are we initializing a parallel worker? */
+bool		InitializingParallelWorker = false;
+
 /* Pointer to our fixed parallel state. */
 static FixedParallelState *MyFixedParallelState;
 
@@ -818,6 +821,9 @@ ParallelWorkerMain(Datum main_arg)
 	char	   *tstatespace;
 	StringInfoData msgbuf;
 
+	/* Set flag to indicate that we're initializing a parallel worker. */
+	InitializingParallelWorker = true;
+
 	/* Establish signal handlers. */
 	pqsignal(SIGTERM, die);
 	BackgroundWorkerUnblockSignals();
@@ -958,6 +964,7 @@ ParallelWorkerMain(Datum main_arg)
 	 * We've initialized all of our state now; nothing should change
 	 * hereafter.
 	 */
+	InitializingParallelWorker = false;
 	EnterParallelMode();
 
 	/*
diff --git a/src/backend/commands/variable.c b/src/backend/commands/variable.c
index 2d0a44e..16c122a 100644
--- a/src/backend/commands/variable.c
+++ b/src/backend/commands/variable.c
@@ -19,6 +19,7 @@
 #include <ctype.h>
 
 #include "access/htup_details.h"
+#include "access/parallel.h"
 #include "access/xact.h"
 #include "access/xlog.h"
 #include "catalog/pg_authid.h"
@@ -877,9 +878,12 @@ check_role(char **newval, void **extra, GucSource source)
 		ReleaseSysCache(roleTup);
 
 		/*
-		 * Verify that session user is allowed to become this role
+		 * Verify that session user is allowed to become this role, but
+		 * skip this in parallel mode, where we must blindly recreate the
+		 * parallel leader's state.
 		 */
-		if (!is_member_of_role(GetSessionUserId(), roleid))
+		if (!InitializingParallelWorker &&
+			!is_member_of_role(GetSessionUserId(), roleid))
 		{
 			GUC_check_errcode(ERRCODE_INSUFFICIENT_PRIVILEGE);
 			GUC_check_errmsg("permission denied to set role \"%s\"",
diff --git a/src/include/access/parallel.h b/src/include/access/parallel.h
index b029c1e..44f0616 100644
--- a/src/include/access/parallel.h
+++ b/src/include/access/parallel.h
@@ -48,6 +48,7 @@ typedef struct ParallelContext
 
 extern bool ParallelMessagePending;
 extern int	ParallelWorkerNumber;
+extern bool InitializingParallelWorker;
 
 #define		IsParallelWorker()		(ParallelWorkerNumber >= 0)
 
-- 
2.3.8 (Apple Git-58)

0010-Prohibit-parallel-query-when-the-isolation-level-is-.patch (application/x-patch)
From 0c97636613509f289b3699e25af2c6c5b80e90ad Mon Sep 17 00:00:00 2001
From: Robert Haas <rhaas@postgresql.org>
Date: Sun, 4 Oct 2015 01:11:20 -0400
Subject: [PATCH 10/14] Prohibit parallel query when the isolation level is
 serializable.

In order for this to be safe, the code which handles true serializability
will need to be taught that the SIRead locks taken by a parallel worker
pertain to the same transaction as those taken by the parallel leader.
Some further changes may be needed as well.  Until the necessary
adaptations are made, don't generate parallel plans in serializable
mode, and if a previously-generated parallel plan is used after
serializable mode has been activated, run it serially.

This fixes a bug in commit 7aea8e4f2daa4b39ca9d1309a0c4aadb0f7ed81b.
---
 src/backend/access/transam/parallel.c |  8 ++++++++
 src/backend/optimizer/plan/planner.c  | 10 ++++++++++
 2 files changed, 18 insertions(+)

diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c
index 3c92a28..edbbf9e 100644
--- a/src/backend/access/transam/parallel.c
+++ b/src/backend/access/transam/parallel.c
@@ -135,6 +135,14 @@ CreateParallelContext(parallel_worker_main_type entrypoint, int nworkers)
 	if (dynamic_shared_memory_type == DSM_IMPL_NONE)
 		nworkers = 0;
 
+	/*
+	 * If we are running under serializable isolation, we can't use
+	 * parallel workers, at least not until somebody enhances that mechanism
+	 * to be parallel-aware.
+	 */
+	if (IsolationIsSerializable())
+		nworkers = 0;
+
 	/* We might be running in a short-lived memory context. */
 	oldcontext = MemoryContextSwitchTo(TopTransactionContext);
 
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index cec2904..4a9828a 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -20,6 +20,7 @@
 
 #include "access/htup_details.h"
 #include "access/parallel.h"
+#include "access/xact.h"
 #include "executor/executor.h"
 #include "executor/nodeAgg.h"
 #include "foreign/fdwapi.h"
@@ -245,11 +246,20 @@ standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams)
 	 * a parallel worker.  We might eventually be able to relax this
 	 * restriction, but for now it seems best not to have parallel workers
 	 * trying to create their own parallel workers.
+	 *
+	 * We can't use parallelism in serializable mode because the predicate
+	 * locking code is not parallel-aware.  It's not catastrophic if someone
+	 * tries to run a parallel plan in serializable mode; it just won't get
+	 * any workers and will run serially.  But it seems like a good heuristic
+	 * to assume that the same serialization level will be in effect at plan
+	 * time and execution time, so don't generate a parallel plan if we're
+	 * in serializable mode.
 	 */
 	glob->parallelModeOK = (cursorOptions & CURSOR_OPT_PARALLEL_OK) != 0 &&
 		IsUnderPostmaster && dynamic_shared_memory_type != DSM_IMPL_NONE &&
 		parse->commandType == CMD_SELECT && !parse->hasModifyingCTE &&
 		parse->utilityStmt == NULL && !IsParallelWorker() &&
+		!IsolationIsSerializable() &&
 		!check_parallel_safety((Node *) parse, false);
 
 	/*
-- 
2.3.8 (Apple Git-58)

0001-Test-code.patch (application/x-patch)
From 8540a95c8013a07cd175bab7a8d971663a9a6d09 Mon Sep 17 00:00:00 2001
From: Robert Haas <rhaas@postgresql.org>
Date: Wed, 30 Sep 2015 18:35:40 -0400
Subject: [PATCH 01/14] Test code.

---
 src/backend/access/transam/parallel.c |  2 +
 src/backend/commands/explain.c        | 12 ++++-
 src/backend/optimizer/plan/planner.c  | 87 +++++++++++++++++++++++++++++++++++
 3 files changed, 100 insertions(+), 1 deletion(-)

diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c
index 29d6ed5..3041dab 100644
--- a/src/backend/access/transam/parallel.c
+++ b/src/backend/access/transam/parallel.c
@@ -993,7 +993,9 @@ ParallelExtensionTrampoline(dsm_segment *seg, shm_toc *toc)
 static void
 ParallelErrorContext(void *arg)
 {
+#if 0
 	errcontext("parallel worker, pid %d", *(int32 *) arg);
+#endif
 }
 
 /*
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 7fb8a14..8612430 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -571,6 +571,7 @@ void
 ExplainPrintPlan(ExplainState *es, QueryDesc *queryDesc)
 {
 	Bitmapset  *rels_used = NULL;
+	PlanState *ps;
 
 	Assert(queryDesc->plannedstmt != NULL);
 	es->pstmt = queryDesc->plannedstmt;
@@ -579,7 +580,16 @@ ExplainPrintPlan(ExplainState *es, QueryDesc *queryDesc)
 	es->rtable_names = select_rtable_names_for_explain(es->rtable, rels_used);
 	es->deparse_cxt = deparse_context_for_plan_rtable(es->rtable,
 													  es->rtable_names);
-	ExplainNode(queryDesc->planstate, NIL, NULL, NULL, es);
+	/*
+	 * XXX.  Just for testing purposes, suppress the display of a toplevel
+	 * gather node, so that we can run the regression tests with Gather
+	 * nodes forcibly inserted without getting test failures due to different
+	 * EXPLAIN output.
+	 */
+	ps = queryDesc->planstate;
+	if (IsA(ps, GatherState))
+		ps = outerPlanState(ps);
+	ExplainNode(ps, NIL, NULL, NULL, es);
 }
 
 /*
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index e1ee67c..76ad8b3 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -47,6 +47,7 @@
 #include "storage/dsm_impl.h"
 #include "utils/rel.h"
 #include "utils/selfuncs.h"
+#include "utils/syscache.h"
 
 
 /* GUC parameter */
@@ -160,6 +161,40 @@ planner(Query *parse, int cursorOptions, ParamListInfo boundParams)
 	return result;
 }
 
+/* This code is crap, just for testing.  Don't confuse it with good code. */
+static bool
+rte_check_safety(RangeTblEntry *rte)
+{
+	HeapTuple tp;
+	Form_pg_class reltup;
+	bool retval;
+	ListCell *lc;
+
+	switch (rte->rtekind)
+	{
+		case RTE_RELATION:
+			tp = SearchSysCache1(RELOID, ObjectIdGetDatum(rte->relid));
+			if (!HeapTupleIsValid(tp))
+				elog(ERROR, "cache lookup failed for relation %u",
+					 rte->relid);
+			reltup = (Form_pg_class) GETSTRUCT(tp);
+			retval = (reltup->relpersistence != RELPERSISTENCE_TEMP);
+			ReleaseSysCache(tp);
+			return retval;
+
+		case RTE_SUBQUERY:
+			foreach (lc, rte->subquery->rtable)
+			{
+				RangeTblEntry *rte2 = lfirst(lc);
+				if (!rte_check_safety(rte2))
+					return false;
+			}
+
+		default:
+			return true;
+	}
+}
+
 PlannedStmt *
 standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams)
 {
@@ -284,6 +319,58 @@ standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams)
 			top_plan = materialize_finished_plan(top_plan);
 	}
 
+	/* XXX: Force a gather plan for testing purposes. */
+	if (glob->parallelModeOK)
+	{
+		bool		use_gather = true;
+
+		/* We don't copy subplans to workers. */
+		if (glob->subplans != NIL)
+			use_gather = false;
+
+		/* Parallel mode doesn't currently support temporary tables. */
+		if (use_gather)
+		{
+			ListCell   *lc;
+			ListCell   *l;
+
+			foreach(lc, root->parse->rtable)
+			{
+				RangeTblEntry *rte = (RangeTblEntry *) lfirst(lc);
+				if (!rte_check_safety(rte))
+					use_gather = false;
+			}
+
+			foreach(l, glob->subroots)
+			{
+				PlannerInfo *subroot = (PlannerInfo *) lfirst(l);
+
+				foreach(lc, subroot->parse->rtable)
+				{
+					RangeTblEntry *rte = (RangeTblEntry *) lfirst(lc);
+
+					if (!rte_check_safety(rte))
+						use_gather = false;
+				}
+			}
+		}
+
+		/* No disqualifying conditions?  Then do it! */
+		if (use_gather)
+		{
+			Gather	   *gather = makeNode(Gather);
+
+			gather->plan.targetlist = top_plan->targetlist;
+			gather->plan.qual = NIL;
+			gather->plan.lefttree = top_plan;
+			gather->plan.righttree = NULL;
+			gather->num_workers = 1;
+			gather->single_copy = true;
+			root->glob->parallelModeNeeded = true;
+			top_plan = &gather->plan;
+		}
+	}
+
 	/*
 	 * If any Params were generated, run through the plan tree and compute
 	 * each plan node's extParam/allParam sets.  Ideally we'd merge this into
-- 
2.3.8 (Apple Git-58)

0002-contain_parallel_unsafe-check_parallel_safety.patch (application/x-patch)
From 601eef8550656be860699915b80dd01921650ad4 Mon Sep 17 00:00:00 2001
From: Robert Haas <rhaas@postgresql.org>
Date: Fri, 2 Oct 2015 23:57:46 -0400
Subject: [PATCH 02/14] contain_parallel_unsafe -> check_parallel_safety.

Enhance check_parallel_safety to detect use of temporary type IDs.
---
 src/backend/optimizer/plan/planner.c |  2 +-
 src/backend/optimizer/util/clauses.c | 75 ++++++++++++++++++++++++++++--------
 src/backend/utils/cache/lsyscache.c  | 22 +++++++++++
 src/include/optimizer/clauses.h      |  2 +-
 src/include/utils/lsyscache.h        |  1 +
 5 files changed, 85 insertions(+), 17 deletions(-)

diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 76ad8b3..c502377 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -250,7 +250,7 @@ standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams)
 		IsUnderPostmaster && dynamic_shared_memory_type != DSM_IMPL_NONE &&
 		parse->commandType == CMD_SELECT && !parse->hasModifyingCTE &&
 		parse->utilityStmt == NULL && !IsParallelWorker() &&
-		!contain_parallel_unsafe((Node *) parse);
+		!check_parallel_safety((Node *) parse, true);
 
 	/*
 	 * glob->parallelModeOK should tell us whether it's necessary to impose
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index f2c8551..f4d8f98 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -21,6 +21,7 @@
 
 #include "access/htup_details.h"
 #include "catalog/pg_aggregate.h"
+#include "catalog/pg_class.h"
 #include "catalog/pg_language.h"
 #include "catalog/pg_operator.h"
 #include "catalog/pg_proc.h"
@@ -87,6 +88,11 @@ typedef struct
 	char	   *prosrc;
 } inline_error_callback_arg;
 
+typedef struct
+{
+	bool		allow_restricted;
+} check_parallel_safety_arg;
+
 static bool contain_agg_clause_walker(Node *node, void *context);
 static bool count_agg_clauses_walker(Node *node,
 						 count_agg_clauses_context *context);
@@ -96,7 +102,11 @@ static bool contain_subplans_walker(Node *node, void *context);
 static bool contain_mutable_functions_walker(Node *node, void *context);
 static bool contain_volatile_functions_walker(Node *node, void *context);
 static bool contain_volatile_functions_not_nextval_walker(Node *node, void *context);
-static bool contain_parallel_unsafe_walker(Node *node, void *context);
+static bool check_parallel_safety_walker(Node *node,
+				check_parallel_safety_arg *context);
+static bool parallel_too_dangerous(char proparallel,
+				check_parallel_safety_arg *context);
+static bool typeid_is_temp(Oid typeid);
 static bool contain_nonstrict_functions_walker(Node *node, void *context);
 static bool contain_leaked_vars_walker(Node *node, void *context);
 static Relids find_nonnullable_rels_walker(Node *node, bool top_level);
@@ -1204,13 +1214,16 @@ contain_volatile_functions_not_nextval_walker(Node *node, void *context)
  *****************************************************************************/
 
 bool
-contain_parallel_unsafe(Node *node)
+check_parallel_safety(Node *node, bool allow_restricted)
 {
-	return contain_parallel_unsafe_walker(node, NULL);
+	check_parallel_safety_arg	context;
+
+	context.allow_restricted = allow_restricted;
+	return check_parallel_safety_walker(node, &context);
 }
 
 static bool
-contain_parallel_unsafe_walker(Node *node, void *context)
+check_parallel_safety_walker(Node *node, check_parallel_safety_arg *context)
 {
 	if (node == NULL)
 		return false;
@@ -1218,7 +1231,7 @@ contain_parallel_unsafe_walker(Node *node, void *context)
 	{
 		FuncExpr   *expr = (FuncExpr *) node;
 
-		if (func_parallel(expr->funcid) == PROPARALLEL_UNSAFE)
+		if (parallel_too_dangerous(func_parallel(expr->funcid), context))
 			return true;
 		/* else fall through to check args */
 	}
@@ -1227,7 +1240,7 @@ contain_parallel_unsafe_walker(Node *node, void *context)
 		OpExpr	   *expr = (OpExpr *) node;
 
 		set_opfuncid(expr);
-		if (func_parallel(expr->opfuncid) == PROPARALLEL_UNSAFE)
+		if (parallel_too_dangerous(func_parallel(expr->opfuncid), context))
 			return true;
 		/* else fall through to check args */
 	}
@@ -1236,7 +1249,7 @@ contain_parallel_unsafe_walker(Node *node, void *context)
 		DistinctExpr *expr = (DistinctExpr *) node;
 
 		set_opfuncid((OpExpr *) expr);	/* rely on struct equivalence */
-		if (func_parallel(expr->opfuncid) == PROPARALLEL_UNSAFE)
+		if (parallel_too_dangerous(func_parallel(expr->opfuncid), context))
 			return true;
 		/* else fall through to check args */
 	}
@@ -1245,7 +1258,7 @@ contain_parallel_unsafe_walker(Node *node, void *context)
 		NullIfExpr *expr = (NullIfExpr *) node;
 
 		set_opfuncid((OpExpr *) expr);	/* rely on struct equivalence */
-		if (func_parallel(expr->opfuncid) == PROPARALLEL_UNSAFE)
+		if (parallel_too_dangerous(func_parallel(expr->opfuncid), context))
 			return true;
 		/* else fall through to check args */
 	}
@@ -1254,7 +1267,7 @@ contain_parallel_unsafe_walker(Node *node, void *context)
 		ScalarArrayOpExpr *expr = (ScalarArrayOpExpr *) node;
 
 		set_sa_opfuncid(expr);
-		if (func_parallel(expr->opfuncid) == PROPARALLEL_UNSAFE)
+		if (parallel_too_dangerous(func_parallel(expr->opfuncid), context))
 			return true;
 		/* else fall through to check args */
 	}
@@ -1268,12 +1281,12 @@ contain_parallel_unsafe_walker(Node *node, void *context)
 		/* check the result type's input function */
 		getTypeInputInfo(expr->resulttype,
 						 &iofunc, &typioparam);
-		if (func_parallel(iofunc) == PROPARALLEL_UNSAFE)
+		if (parallel_too_dangerous(func_parallel(iofunc), context))
 			return true;
 		/* check the input type's output function */
 		getTypeOutputInfo(exprType((Node *) expr->arg),
 						  &iofunc, &typisvarlena);
-		if (func_parallel(iofunc) == PROPARALLEL_UNSAFE)
+		if (parallel_too_dangerous(func_parallel(iofunc), context))
 			return true;
 		/* else fall through to check args */
 	}
@@ -1282,7 +1295,7 @@ contain_parallel_unsafe_walker(Node *node, void *context)
 		ArrayCoerceExpr *expr = (ArrayCoerceExpr *) node;
 
 		if (OidIsValid(expr->elemfuncid) &&
-			func_parallel(expr->elemfuncid) == PROPARALLEL_UNSAFE)
+			parallel_too_dangerous(func_parallel(expr->elemfuncid), context))
 			return true;
 		/* else fall through to check args */
 	}
@@ -1294,11 +1307,23 @@ contain_parallel_unsafe_walker(Node *node, void *context)
 
 		foreach(opid, rcexpr->opnos)
 		{
-			if (op_volatile(lfirst_oid(opid)) == PROPARALLEL_UNSAFE)
+			if (parallel_too_dangerous(func_parallel(get_opcode(lfirst_oid(opid))), context))
 				return true;
 		}
 		/* else fall through to check args */
 	}
+	else if (IsA(node, RowExpr))
+	{
+		RowExpr *rexpr = (RowExpr *) node;
+		if (!context->allow_restricted && typeid_is_temp(rexpr->row_typeid))
+			return true;
+	}
+	else if (IsA(node, ArrayExpr))
+	{
+		ArrayExpr *aexpr = (ArrayExpr *) node;
+		if (!context->allow_restricted && typeid_is_temp(aexpr->array_typeid))
+			return true;
+	}
 	else if (IsA(node, Query))
 	{
 		Query *query = (Query *) node;
@@ -1308,14 +1333,34 @@ contain_parallel_unsafe_walker(Node *node, void *context)
 
 		/* Recurse into subselects */
 		return query_tree_walker(query,
-								 contain_parallel_unsafe_walker,
+								 check_parallel_safety_walker,
 								 context, 0);
 	}
 	return expression_tree_walker(node,
-								  contain_parallel_unsafe_walker,
+								  check_parallel_safety_walker,
 								  context);
 }
 
+static bool
+parallel_too_dangerous(char proparallel, check_parallel_safety_arg *context)
+{
+	if (context->allow_restricted)
+		return proparallel == PROPARALLEL_UNSAFE;
+	else
+		return proparallel != PROPARALLEL_SAFE;
+}
+
+static bool
+typeid_is_temp(Oid typeid)
+{
+	Oid				relid = get_typ_typrelid(typeid);
+
+	if (!OidIsValid(relid))
+		return false;
+
+	return (get_rel_persistence(relid) == RELPERSISTENCE_TEMP);
+}
+
 /*****************************************************************************
  *		Check clauses for nonstrict functions
  *****************************************************************************/
diff --git a/src/backend/utils/cache/lsyscache.c b/src/backend/utils/cache/lsyscache.c
index 8d1cdf1..093da76 100644
--- a/src/backend/utils/cache/lsyscache.c
+++ b/src/backend/utils/cache/lsyscache.c
@@ -1787,6 +1787,28 @@ get_rel_tablespace(Oid relid)
 		return InvalidOid;
 }
 
+/*
+ * get_rel_persistence
+ *
+ *		Returns the relpersistence associated with a given relation.
+ */
+char
+get_rel_persistence(Oid relid)
+{
+	HeapTuple		tp;
+	Form_pg_class	reltup;
+	char 			result;
+
+	tp = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tp))
+		elog(ERROR, "cache lookup failed for relation %u", relid);
+	reltup = (Form_pg_class) GETSTRUCT(tp);
+	result = reltup->relpersistence;
+	ReleaseSysCache(tp);
+
+	return result;
+}
+
 
 /*				---------- TRANSFORM CACHE ----------						 */
 
diff --git a/src/include/optimizer/clauses.h b/src/include/optimizer/clauses.h
index 5ac79b1..81a4b8f 100644
--- a/src/include/optimizer/clauses.h
+++ b/src/include/optimizer/clauses.h
@@ -62,7 +62,7 @@ extern bool contain_subplans(Node *clause);
 extern bool contain_mutable_functions(Node *clause);
 extern bool contain_volatile_functions(Node *clause);
 extern bool contain_volatile_functions_not_nextval(Node *clause);
-extern bool contain_parallel_unsafe(Node *node);
+extern bool check_parallel_safety(Node *node, bool allow_restricted);
 extern bool contain_nonstrict_functions(Node *clause);
 extern bool contain_leaked_vars(Node *clause);
 
diff --git a/src/include/utils/lsyscache.h b/src/include/utils/lsyscache.h
index 450d9fe..dcc421f 100644
--- a/src/include/utils/lsyscache.h
+++ b/src/include/utils/lsyscache.h
@@ -103,6 +103,7 @@ extern Oid	get_rel_namespace(Oid relid);
 extern Oid	get_rel_type_id(Oid relid);
 extern char get_rel_relkind(Oid relid);
 extern Oid	get_rel_tablespace(Oid relid);
+extern char get_rel_persistence(Oid relid);
 extern Oid	get_transform_fromsql(Oid typid, Oid langid, List *trftypes);
 extern Oid	get_transform_tosql(Oid typid, Oid langid, List *trftypes);
 extern bool get_typisdefined(Oid typid);
-- 
2.3.8 (Apple Git-58)

Attachment: 0003-Temporary-hack-to-reduce-testing-failures.patch (application/x-patch)
From 7f47db8fd1f82e7000893cc227998f7f99a41b41 Mon Sep 17 00:00:00 2001
From: Robert Haas <rhaas@postgresql.org>
Date: Sat, 3 Oct 2015 00:11:45 -0400
Subject: [PATCH 03/14] Temporary hack to reduce testing failures.

---
 src/backend/optimizer/plan/planner.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index c502377..cec2904 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -250,7 +250,7 @@ standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams)
 		IsUnderPostmaster && dynamic_shared_memory_type != DSM_IMPL_NONE &&
 		parse->commandType == CMD_SELECT && !parse->hasModifyingCTE &&
 		parse->utilityStmt == NULL && !IsParallelWorker() &&
-		!check_parallel_safety((Node *) parse, true);
+		!check_parallel_safety((Node *) parse, false);
 
 	/*
 	 * glob->parallelModeOK should tell us whether it's necessary to impose
-- 
2.3.8 (Apple Git-58)
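
To make the allow_restricted flag concrete: 0002's walker rejects a node when
parallel_too_dangerous() says so, and 0003 merely flips the planner's call from
allow_restricted = true to false so that even parallel-restricted constructs keep
the test Gather node off the plan.  Here is a minimal standalone model of that
decision rule (stand-in code, not the backend's; only the PROPARALLEL_* values
are taken from pg_proc.h):

#include <stdio.h>
#include <stdbool.h>

#define PROPARALLEL_SAFE        's'
#define PROPARALLEL_RESTRICTED  'r'
#define PROPARALLEL_UNSAFE      'u'

/* With allow_restricted, only unsafe functions are rejected; without it,
 * anything not known to be parallel-safe is rejected. */
static bool
parallel_too_dangerous(char proparallel, bool allow_restricted)
{
    if (allow_restricted)
        return proparallel == PROPARALLEL_UNSAFE;
    else
        return proparallel != PROPARALLEL_SAFE;
}

int
main(void)
{
    char levels[] = {PROPARALLEL_SAFE, PROPARALLEL_RESTRICTED, PROPARALLEL_UNSAFE};
    int  i;

    for (i = 0; i < 3; ++i)
        printf("proparallel=%c  rejected with allow_restricted=true:%d  false:%d\n",
               levels[i],
               parallel_too_dangerous(levels[i], true),
               parallel_too_dangerous(levels[i], false));
    return 0;
}

So a parallel-restricted function ('r') is tolerated when allow_restricted is
true, but suppresses the Gather node once 0003 passes false.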

Attachment: 0011-Mark-more-functions-parallel-restricted-or-parallel-.patch (application/x-patch)
From ff483195182e1b6f0bebf04c2c897154941296ab Mon Sep 17 00:00:00 2001
From: Robert Haas <rhaas@postgresql.org>
Date: Fri, 2 Oct 2015 20:04:31 -0400
Subject: [PATCH 11/14] Mark more functions parallel-restricted or
 parallel-unsafe.

Commit 7aea8e4f2daa4b39ca9d1309a0c4aadb0f7ed81b was overoptimistic
about the degree of safety associated with running various functions
in parallel mode.  Functions that take a table name or OID as an
argument are at least parallel-restricted, because the table might be
temporary, and we currently don't allow parallel workers to touch
temporary tables.  Functions that take a query as an argument are
outright unsafe, because the query could be anything, including a
parallel-unsafe query.

Also, the queue of pending notifications is backend-private, so adding
to it from a worker doesn't behave correctly.  We could fix this by
transferring the worker's queue of pending notifications to the master
during worker cleanup, but that seems like more trouble than it's
worth for now.
---
 src/backend/commands/async.c  |  3 +++
 src/include/catalog/pg_proc.h | 40 ++++++++++++++++++++--------------------
 2 files changed, 23 insertions(+), 20 deletions(-)

diff --git a/src/backend/commands/async.c b/src/backend/commands/async.c
index f2b9a74..3657d69 100644
--- a/src/backend/commands/async.c
+++ b/src/backend/commands/async.c
@@ -544,6 +544,9 @@ Async_Notify(const char *channel, const char *payload)
 	Notification *n;
 	MemoryContext oldcontext;
 
+	if (IsInParallelMode())
+		elog(ERROR, "cannot send notifications during a parallel operation");
+
 	if (Trace_notify)
 		elog(DEBUG1, "Async_Notify(%s)", channel);
 
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index eb55b3a..f688454 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -2032,9 +2032,9 @@ DATA(insert OID = 1639 (  oidge				   PGNSP PGUID 12 1 0 0 0 f f f t t f i s 2 0
 /* System-view support functions */
 DATA(insert OID = 1573 (  pg_get_ruledef	   PGNSP PGUID 12 1 0 0 0 f f f f t f s s 1 0 25 "26" _null_ _null_ _null_ _null_ _null_ pg_get_ruledef _null_ _null_ _null_ ));
 DESCR("source text of a rule");
-DATA(insert OID = 1640 (  pg_get_viewdef	   PGNSP PGUID 12 1 0 0 0 f f f f t f s s 1 0 25 "25" _null_ _null_ _null_ _null_ _null_ pg_get_viewdef_name _null_ _null_ _null_ ));
+DATA(insert OID = 1640 (  pg_get_viewdef	   PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 25 "25" _null_ _null_ _null_ _null_ _null_ pg_get_viewdef_name _null_ _null_ _null_ ));
 DESCR("select statement of a view");
-DATA(insert OID = 1641 (  pg_get_viewdef	   PGNSP PGUID 12 1 0 0 0 f f f f t f s s 1 0 25 "26" _null_ _null_ _null_ _null_ _null_ pg_get_viewdef _null_ _null_ _null_ ));
+DATA(insert OID = 1641 (  pg_get_viewdef	   PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 25 "26" _null_ _null_ _null_ _null_ _null_ pg_get_viewdef _null_ _null_ _null_ ));
 DESCR("select statement of a view");
 DATA(insert OID = 1642 (  pg_get_userbyid	   PGNSP PGUID 12 1 0 0 0 f f f f t f s s 1 0 19 "26" _null_ _null_ _null_ _null_ _null_ pg_get_userbyid _null_ _null_ _null_ ));
 DESCR("role name by OID (with fallback)");
@@ -4036,11 +4036,11 @@ DESCR("I/O");
 /* System-view support functions with pretty-print option */
 DATA(insert OID = 2504 (  pg_get_ruledef	   PGNSP PGUID 12 1 0 0 0 f f f f t f s s 2 0 25 "26 16" _null_ _null_ _null_ _null_ _null_	pg_get_ruledef_ext _null_ _null_ _null_ ));
 DESCR("source text of a rule with pretty-print option");
-DATA(insert OID = 2505 (  pg_get_viewdef	   PGNSP PGUID 12 1 0 0 0 f f f f t f s s 2 0 25 "25 16" _null_ _null_ _null_ _null_ _null_	pg_get_viewdef_name_ext _null_ _null_ _null_ ));
+DATA(insert OID = 2505 (  pg_get_viewdef	   PGNSP PGUID 12 1 0 0 0 f f f f t f s r 2 0 25 "25 16" _null_ _null_ _null_ _null_ _null_	pg_get_viewdef_name_ext _null_ _null_ _null_ ));
 DESCR("select statement of a view with pretty-print option");
-DATA(insert OID = 2506 (  pg_get_viewdef	   PGNSP PGUID 12 1 0 0 0 f f f f t f s s 2 0 25 "26 16" _null_ _null_ _null_ _null_ _null_	pg_get_viewdef_ext _null_ _null_ _null_ ));
+DATA(insert OID = 2506 (  pg_get_viewdef	   PGNSP PGUID 12 1 0 0 0 f f f f t f s r 2 0 25 "26 16" _null_ _null_ _null_ _null_ _null_	pg_get_viewdef_ext _null_ _null_ _null_ ));
 DESCR("select statement of a view with pretty-print option");
-DATA(insert OID = 3159 (  pg_get_viewdef	   PGNSP PGUID 12 1 0 0 0 f f f f t f s s 2 0 25 "26 23" _null_ _null_ _null_ _null_ _null_	pg_get_viewdef_wrap _null_ _null_ _null_ ));
+DATA(insert OID = 3159 (  pg_get_viewdef	   PGNSP PGUID 12 1 0 0 0 f f f f t f s r 2 0 25 "26 23" _null_ _null_ _null_ _null_ _null_	pg_get_viewdef_wrap _null_ _null_ _null_ ));
 DESCR("select statement of a view with pretty-printing and specified line wrapping");
 DATA(insert OID = 2507 (  pg_get_indexdef	   PGNSP PGUID 12 1 0 0 0 f f f f t f s s 3 0 25 "26 23 16" _null_ _null_ _null_ _null_ _null_	pg_get_indexdef_ext _null_ _null_ _null_ ));
 DESCR("index description (full create statement or single expression) with pretty-print option");
@@ -4062,7 +4062,7 @@ DESCR("trigger description with pretty-print option");
 /* asynchronous notifications */
 DATA(insert OID = 3035 (  pg_listening_channels PGNSP PGUID 12 1 10 0 0 f f f f t t s r 0 0 25 "" _null_ _null_ _null_ _null_ _null_ pg_listening_channels _null_ _null_ _null_ ));
 DESCR("get the channels that the current backend listens to");
-DATA(insert OID = 3036 (  pg_notify				PGNSP PGUID 12 1 0 0 0 f f f f f f v s 2 0 2278 "25 25" _null_ _null_ _null_ _null_ _null_ pg_notify _null_ _null_ _null_ ));
+DATA(insert OID = 3036 (  pg_notify				PGNSP PGUID 12 1 0 0 0 f f f f f f v r 2 0 2278 "25 25" _null_ _null_ _null_ _null_ _null_ pg_notify _null_ _null_ _null_ ));
 DESCR("send a notification event");
 DATA(insert OID = 3296 (  pg_notification_queue_usage	PGNSP PGUID 12 1 0 0 0 f f f f t f v s 0 0 701 "" _null_ _null_ _null_ _null_ _null_ pg_notification_queue_usage _null_ _null_ _null_ ));
 DESCR("get the fraction of the asynchronous notification queue currently in use");
@@ -4327,35 +4327,35 @@ DESCR("concatenate XML values");
 DATA(insert OID = 2922 (  text			   PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 25 "142" _null_ _null_ _null_ _null_ _null_ xmltotext _null_ _null_ _null_ ));
 DESCR("serialize an XML value to a character string");
 
-DATA(insert OID = 2923 (  table_to_xml				  PGNSP PGUID 12 100 0 0 0 f f f f t f s s 4 0 142 "2205 16 16 25" _null_ _null_ "{tbl,nulls,tableforest,targetns}" _null_ _null_ table_to_xml _null_ _null_ _null_ ));
+DATA(insert OID = 2923 (  table_to_xml				  PGNSP PGUID 12 100 0 0 0 f f f f t f s r 4 0 142 "2205 16 16 25" _null_ _null_ "{tbl,nulls,tableforest,targetns}" _null_ _null_ table_to_xml _null_ _null_ _null_ ));
 DESCR("map table contents to XML");
-DATA(insert OID = 2924 (  query_to_xml				  PGNSP PGUID 12 100 0 0 0 f f f f t f s s 4 0 142 "25 16 16 25" _null_ _null_ "{query,nulls,tableforest,targetns}" _null_ _null_ query_to_xml _null_ _null_ _null_ ));
+DATA(insert OID = 2924 (  query_to_xml				  PGNSP PGUID 12 100 0 0 0 f f f f t f s u 4 0 142 "25 16 16 25" _null_ _null_ "{query,nulls,tableforest,targetns}" _null_ _null_ query_to_xml _null_ _null_ _null_ ));
 DESCR("map query result to XML");
-DATA(insert OID = 2925 (  cursor_to_xml				  PGNSP PGUID 12 100 0 0 0 f f f f t f s s 5 0 142 "1790 23 16 16 25" _null_ _null_ "{cursor,count,nulls,tableforest,targetns}" _null_ _null_ cursor_to_xml _null_ _null_ _null_ ));
+DATA(insert OID = 2925 (  cursor_to_xml				  PGNSP PGUID 12 100 0 0 0 f f f f t f s r 5 0 142 "1790 23 16 16 25" _null_ _null_ "{cursor,count,nulls,tableforest,targetns}" _null_ _null_ cursor_to_xml _null_ _null_ _null_ ));
 DESCR("map rows from cursor to XML");
-DATA(insert OID = 2926 (  table_to_xmlschema		  PGNSP PGUID 12 100 0 0 0 f f f f t f s s 4 0 142 "2205 16 16 25" _null_ _null_ "{tbl,nulls,tableforest,targetns}" _null_ _null_ table_to_xmlschema _null_ _null_ _null_ ));
+DATA(insert OID = 2926 (  table_to_xmlschema		  PGNSP PGUID 12 100 0 0 0 f f f f t f s r 4 0 142 "2205 16 16 25" _null_ _null_ "{tbl,nulls,tableforest,targetns}" _null_ _null_ table_to_xmlschema _null_ _null_ _null_ ));
 DESCR("map table structure to XML Schema");
-DATA(insert OID = 2927 (  query_to_xmlschema		  PGNSP PGUID 12 100 0 0 0 f f f f t f s s 4 0 142 "25 16 16 25" _null_ _null_ "{query,nulls,tableforest,targetns}" _null_ _null_ query_to_xmlschema _null_ _null_ _null_ ));
+DATA(insert OID = 2927 (  query_to_xmlschema		  PGNSP PGUID 12 100 0 0 0 f f f f t f s u 4 0 142 "25 16 16 25" _null_ _null_ "{query,nulls,tableforest,targetns}" _null_ _null_ query_to_xmlschema _null_ _null_ _null_ ));
 DESCR("map query result structure to XML Schema");
-DATA(insert OID = 2928 (  cursor_to_xmlschema		  PGNSP PGUID 12 100 0 0 0 f f f f t f s s 4 0 142 "1790 16 16 25" _null_ _null_ "{cursor,nulls,tableforest,targetns}" _null_ _null_ cursor_to_xmlschema _null_ _null_ _null_ ));
+DATA(insert OID = 2928 (  cursor_to_xmlschema		  PGNSP PGUID 12 100 0 0 0 f f f f t f s r 4 0 142 "1790 16 16 25" _null_ _null_ "{cursor,nulls,tableforest,targetns}" _null_ _null_ cursor_to_xmlschema _null_ _null_ _null_ ));
 DESCR("map cursor structure to XML Schema");
-DATA(insert OID = 2929 (  table_to_xml_and_xmlschema  PGNSP PGUID 12 100 0 0 0 f f f f t f s s 4 0 142 "2205 16 16 25" _null_ _null_ "{tbl,nulls,tableforest,targetns}" _null_ _null_ table_to_xml_and_xmlschema _null_ _null_ _null_ ));
+DATA(insert OID = 2929 (  table_to_xml_and_xmlschema  PGNSP PGUID 12 100 0 0 0 f f f f t f s r 4 0 142 "2205 16 16 25" _null_ _null_ "{tbl,nulls,tableforest,targetns}" _null_ _null_ table_to_xml_and_xmlschema _null_ _null_ _null_ ));
 DESCR("map table contents and structure to XML and XML Schema");
-DATA(insert OID = 2930 (  query_to_xml_and_xmlschema  PGNSP PGUID 12 100 0 0 0 f f f f t f s s 4 0 142 "25 16 16 25" _null_ _null_ "{query,nulls,tableforest,targetns}" _null_ _null_ query_to_xml_and_xmlschema _null_ _null_ _null_ ));
+DATA(insert OID = 2930 (  query_to_xml_and_xmlschema  PGNSP PGUID 12 100 0 0 0 f f f f t f s u 4 0 142 "25 16 16 25" _null_ _null_ "{query,nulls,tableforest,targetns}" _null_ _null_ query_to_xml_and_xmlschema _null_ _null_ _null_ ));
 DESCR("map query result and structure to XML and XML Schema");
 
-DATA(insert OID = 2933 (  schema_to_xml				  PGNSP PGUID 12 100 0 0 0 f f f f t f s s 4 0 142 "19 16 16 25" _null_ _null_ "{schema,nulls,tableforest,targetns}" _null_ _null_ schema_to_xml _null_ _null_ _null_ ));
+DATA(insert OID = 2933 (  schema_to_xml				  PGNSP PGUID 12 100 0 0 0 f f f f t f s r 4 0 142 "19 16 16 25" _null_ _null_ "{schema,nulls,tableforest,targetns}" _null_ _null_ schema_to_xml _null_ _null_ _null_ ));
 DESCR("map schema contents to XML");
-DATA(insert OID = 2934 (  schema_to_xmlschema		  PGNSP PGUID 12 100 0 0 0 f f f f t f s s 4 0 142 "19 16 16 25" _null_ _null_ "{schema,nulls,tableforest,targetns}" _null_ _null_ schema_to_xmlschema _null_ _null_ _null_ ));
+DATA(insert OID = 2934 (  schema_to_xmlschema		  PGNSP PGUID 12 100 0 0 0 f f f f t f s r 4 0 142 "19 16 16 25" _null_ _null_ "{schema,nulls,tableforest,targetns}" _null_ _null_ schema_to_xmlschema _null_ _null_ _null_ ));
 DESCR("map schema structure to XML Schema");
-DATA(insert OID = 2935 (  schema_to_xml_and_xmlschema PGNSP PGUID 12 100 0 0 0 f f f f t f s s 4 0 142 "19 16 16 25" _null_ _null_ "{schema,nulls,tableforest,targetns}" _null_ _null_ schema_to_xml_and_xmlschema _null_ _null_ _null_ ));
+DATA(insert OID = 2935 (  schema_to_xml_and_xmlschema PGNSP PGUID 12 100 0 0 0 f f f f t f s r 4 0 142 "19 16 16 25" _null_ _null_ "{schema,nulls,tableforest,targetns}" _null_ _null_ schema_to_xml_and_xmlschema _null_ _null_ _null_ ));
 DESCR("map schema contents and structure to XML and XML Schema");
 
-DATA(insert OID = 2936 (  database_to_xml			  PGNSP PGUID 12 100 0 0 0 f f f f t f s s 3 0 142 "16 16 25" _null_ _null_ "{nulls,tableforest,targetns}" _null_ _null_ database_to_xml _null_ _null_ _null_ ));
+DATA(insert OID = 2936 (  database_to_xml			  PGNSP PGUID 12 100 0 0 0 f f f f t f s r 3 0 142 "16 16 25" _null_ _null_ "{nulls,tableforest,targetns}" _null_ _null_ database_to_xml _null_ _null_ _null_ ));
 DESCR("map database contents to XML");
-DATA(insert OID = 2937 (  database_to_xmlschema		  PGNSP PGUID 12 100 0 0 0 f f f f t f s s 3 0 142 "16 16 25" _null_ _null_ "{nulls,tableforest,targetns}" _null_ _null_ database_to_xmlschema _null_ _null_ _null_ ));
+DATA(insert OID = 2937 (  database_to_xmlschema		  PGNSP PGUID 12 100 0 0 0 f f f f t f s r 3 0 142 "16 16 25" _null_ _null_ "{nulls,tableforest,targetns}" _null_ _null_ database_to_xmlschema _null_ _null_ _null_ ));
 DESCR("map database structure to XML Schema");
-DATA(insert OID = 2938 (  database_to_xml_and_xmlschema PGNSP PGUID 12 100 0 0 0 f f f f t f s s 3 0 142 "16 16 25" _null_ _null_ "{nulls,tableforest,targetns}" _null_ _null_ database_to_xml_and_xmlschema _null_ _null_ _null_ ));
+DATA(insert OID = 2938 (  database_to_xml_and_xmlschema PGNSP PGUID 12 100 0 0 0 f f f f t f s r 3 0 142 "16 16 25" _null_ _null_ "{nulls,tableforest,targetns}" _null_ _null_ database_to_xml_and_xmlschema _null_ _null_ _null_ ));
 DESCR("map database contents and structure to XML and XML Schema");
 
 DATA(insert OID = 2931 (  xpath		 PGNSP PGUID 12 1 0 0 0 f f f f t f i s 3 0 143 "25 142 1009" _null_ _null_ _null_ _null_ _null_ xpath _null_ _null_ _null_ ));
-- 
2.3.8 (Apple Git-58)
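
The marking policy behind the pg_proc.h changes above boils down to a rule of
thumb about what a function consumes.  A standalone illustration of that rule
(stand-in code and stand-in argument categories; only the PROPARALLEL_* values
come from pg_proc.h):

#include <stdio.h>

#define PROPARALLEL_SAFE        's'
#define PROPARALLEL_RESTRICTED  'r'
#define PROPARALLEL_UNSAFE      'u'

typedef enum
{
    TAKES_PLAIN_VALUE,          /* e.g. pg_get_userbyid(oid), left alone */
    TAKES_TABLE_NAME_OR_OID,    /* e.g. pg_get_viewdef(oid), table_to_xml() */
    TAKES_QUERY_TEXT            /* e.g. query_to_xml() */
} ArgKind;

static char
proparallel_for(ArgKind kind)
{
    switch (kind)
    {
        case TAKES_TABLE_NAME_OR_OID:
            /* the table might be temporary, which workers cannot touch */
            return PROPARALLEL_RESTRICTED;
        case TAKES_QUERY_TEXT:
            /* the query could itself be parallel-unsafe */
            return PROPARALLEL_UNSAFE;
        default:
            return PROPARALLEL_SAFE;
    }
}

int
main(void)
{
    printf("table arg -> %c, query arg -> %c, plain arg -> %c\n",
           proparallel_for(TAKES_TABLE_NAME_OR_OID),
           proparallel_for(TAKES_QUERY_TEXT),
           proparallel_for(TAKES_PLAIN_VALUE));
    return 0;
}

The default arm is a simplification: the patch only downgrades existing
markings, it never promotes anything to parallel-safe.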

Attachment: 0012-Rewrite-interaction-of-parallel-mode-with-parallel-e.patch (application/x-patch)
From ed37d06a5223e018d7b0f5b35231c5c17dd6126e Mon Sep 17 00:00:00 2001
From: Robert Haas <rhaas@postgresql.org>
Date: Wed, 7 Oct 2015 18:16:26 -0400
Subject: [PATCH 12/14] Rewrite interaction of parallel mode with parallel
 executor support.

In the previous coding, before returning from ExecutorRun, we'd shut
down all parallel workers.  This was dead wrong if ExecutorRun was
called with a non-zero tuple count; it had the effect of truncating
the query output.  To fix, give ExecutePlan control over whether to
enter parallel mode, and have it refuse to do so if the tuple count
is non-zero.  Rewrite the Gather logic so that it can cope with being
called outside parallel mode.

Commit 7aea8e4f2daa4b39ca9d1309a0c4aadb0f7ed81b is largely to blame
for this problem, though this patch modifies some subsequently-committed
code which relied on the guarantees it purported to make.
---
 src/backend/executor/execMain.c     |  37 +++++++-----
 src/backend/executor/execParallel.c |  17 ++++++
 src/backend/executor/nodeGather.c   | 108 +++++++++++++++++-------------------
 src/include/executor/execParallel.h |   1 +
 src/include/nodes/execnodes.h       |   2 +-
 5 files changed, 95 insertions(+), 70 deletions(-)

diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 37b7bbd..a55022e 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -76,6 +76,7 @@ static void CheckValidRowMarkRel(Relation rel, RowMarkType markType);
 static void ExecPostprocessPlan(EState *estate);
 static void ExecEndPlan(PlanState *planstate, EState *estate);
 static void ExecutePlan(EState *estate, PlanState *planstate,
+			bool use_parallel_mode,
 			CmdType operation,
 			bool sendTuples,
 			long numberTuples,
@@ -243,11 +244,6 @@ standard_ExecutorStart(QueryDesc *queryDesc, int eflags)
 	if (!(eflags & (EXEC_FLAG_SKIP_TRIGGERS | EXEC_FLAG_EXPLAIN_ONLY)))
 		AfterTriggerBeginQuery();
 
-	/* Enter parallel mode, if required by the query. */
-	if (queryDesc->plannedstmt->parallelModeNeeded &&
-		!(eflags & EXEC_FLAG_EXPLAIN_ONLY))
-		EnterParallelMode();
-
 	MemoryContextSwitchTo(oldcontext);
 }
 
@@ -341,15 +337,13 @@ standard_ExecutorRun(QueryDesc *queryDesc,
 	if (!ScanDirectionIsNoMovement(direction))
 		ExecutePlan(estate,
 					queryDesc->planstate,
+					queryDesc->plannedstmt->parallelModeNeeded,
 					operation,
 					sendTuples,
 					count,
 					direction,
 					dest);
 
-	/* Allow nodes to release or shut down resources. */
-	(void) ExecShutdownNode(queryDesc->planstate);
-
 	/*
 	 * shutdown tuple receiver, if we started it
 	 */
@@ -482,11 +476,6 @@ standard_ExecutorEnd(QueryDesc *queryDesc)
 	 */
 	MemoryContextSwitchTo(oldcontext);
 
-	/* Exit parallel mode, if it was required by the query. */
-	if (queryDesc->plannedstmt->parallelModeNeeded &&
-		!(estate->es_top_eflags & EXEC_FLAG_EXPLAIN_ONLY))
-		ExitParallelMode();
-
 	/*
 	 * Release EState and per-query memory context.  This should release
 	 * everything the executor has allocated.
@@ -1529,6 +1518,7 @@ ExecEndPlan(PlanState *planstate, EState *estate)
 static void
 ExecutePlan(EState *estate,
 			PlanState *planstate,
+			bool use_parallel_mode,
 			CmdType operation,
 			bool sendTuples,
 			long numberTuples,
@@ -1549,6 +1539,20 @@ ExecutePlan(EState *estate,
 	estate->es_direction = direction;
 
 	/*
+	 * If a tuple count was supplied, we must force the plan to run without
+	 * parallelism, because we might exit early.
+	 */
+	if (numberTuples != 0)
+		use_parallel_mode = false;
+
+	/*
+	 * If parallelism is still allowed at this point, enter parallel mode so
+	 * that any Gather nodes in the plan are able to launch workers.
+	 */
+	if (use_parallel_mode)
+		EnterParallelMode();
+
+	/*
 	 * Loop until we've processed the proper number of tuples from the plan.
 	 */
 	for (;;)
@@ -1566,7 +1570,11 @@ ExecutePlan(EState *estate,
 		 * process so we just end the loop...
 		 */
 		if (TupIsNull(slot))
+		{
+			/* Allow nodes to release or shut down resources. */
+			(void) ExecShutdownNode(planstate);
 			break;
+		}
 
 		/*
 		 * If we have a junk filter, then project a new tuple with the junk
@@ -1603,6 +1611,9 @@ ExecutePlan(EState *estate,
 		if (numberTuples && numberTuples == current_tuple_count)
 			break;
 	}
+
+	if (use_parallel_mode)
+		ExitParallelMode();
 }
 
 
diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c
index e6930c1..3bb8206 100644
--- a/src/backend/executor/execParallel.c
+++ b/src/backend/executor/execParallel.c
@@ -443,6 +443,23 @@ ExecParallelFinish(ParallelExecutorInfo *pei)
 }
 
 /*
+ * Clean up whatever ParallelExecutorInfo resources still exist after
+ * ExecParallelFinish.  We separate these routines because someone might
+ * want to examine the contents of the DSM after ExecParallelFinish and
+ * before calling this routine.
+ */
+void
+ExecParallelCleanup(ParallelExecutorInfo *pei)
+{
+	if (pei->pcxt != NULL)
+	{
+		DestroyParallelContext(pei->pcxt);
+		pei->pcxt = NULL;
+	}
+	pfree(pei);
+}
+
+/*
  * Create a DestReceiver to write tuples we produce to the shm_mq designated
  * for that purpose.
  */
diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c
index c689a4d..7e2272f 100644
--- a/src/backend/executor/nodeGather.c
+++ b/src/backend/executor/nodeGather.c
@@ -16,6 +16,7 @@
 #include "postgres.h"
 
 #include "access/relscan.h"
+#include "access/xact.h"
 #include "executor/execdebug.h"
 #include "executor/execParallel.h"
 #include "executor/nodeGather.h"
@@ -45,7 +46,6 @@ ExecInitGather(Gather *node, EState *estate, int eflags)
 	gatherstate = makeNode(GatherState);
 	gatherstate->ps.plan = (Plan *) node;
 	gatherstate->ps.state = estate;
-	gatherstate->need_to_scan_workers = false;
 	gatherstate->need_to_scan_locally = !node->single_copy;
 
 	/*
@@ -106,52 +106,57 @@ ExecGather(GatherState *node)
 	 * needs to allocate large dynamic segement, so it is better to do if it
 	 * is really needed.
 	 */
-	if (!node->pei)
+	if (!node->initialized)
 	{
 		EState	   *estate = node->ps.state;
-
-		/* Initialize the workers required to execute Gather node. */
-		node->pei = ExecInitParallelPlan(node->ps.lefttree,
-										 estate,
-								  ((Gather *) (node->ps.plan))->num_workers);
+		Gather	   *gather = (Gather *) node->ps.plan;
 
 		/*
-		 * Register backend workers. If the required number of workers are not
-		 * available then we perform the scan with available workers and if
-		 * there are no more workers available, then the Gather node will just
-		 * scan locally.
+		 * Sometimes we might have to run without parallelism; but if
+		 * parallel mode is active then we can try to fire up some workers.
 		 */
-		LaunchParallelWorkers(node->pei->pcxt);
-
-		node->funnel = CreateTupleQueueFunnel();
-
-		for (i = 0; i < node->pei->pcxt->nworkers; ++i)
+		if (gather->num_workers > 0 && IsInParallelMode())
 		{
-			if (node->pei->pcxt->worker[i].bgwhandle)
+			bool	got_any_worker = false;
+
+			/* Initialize the workers required to execute Gather node. */
+			node->pei = ExecInitParallelPlan(node->ps.lefttree,
+											 estate,
+											 gather->num_workers);
+
+			/*
+			 * Register backend workers. We might not get as many as we
+			 * requested, or indeed any at all.
+			 */
+			LaunchParallelWorkers(node->pei->pcxt);
+
+			/* Set up a tuple queue to collect the results. */
+			node->funnel = CreateTupleQueueFunnel();
+			for (i = 0; i < node->pei->pcxt->nworkers; ++i)
 			{
-				shm_mq_set_handle(node->pei->tqueue[i],
-								  node->pei->pcxt->worker[i].bgwhandle);
-				RegisterTupleQueueOnFunnel(node->funnel, node->pei->tqueue[i]);
-				node->need_to_scan_workers = true;
+				if (node->pei->pcxt->worker[i].bgwhandle)
+				{
+					shm_mq_set_handle(node->pei->tqueue[i],
+									  node->pei->pcxt->worker[i].bgwhandle);
+					RegisterTupleQueueOnFunnel(node->funnel,
+											   node->pei->tqueue[i]);
+					got_any_worker = true;
+				}
 			}
+
+			/* No workers?  Then never mind. */
+			if (!got_any_worker)
+				ExecShutdownGather(node);
 		}
 
-		/* If no workers are available, we must always scan locally. */
-		if (!node->need_to_scan_workers)
-			node->need_to_scan_locally = true;
+		/* Run plan locally if no workers or not single-copy. */
+		node->need_to_scan_locally = (node->funnel == NULL)
+			|| !gather->single_copy;
+		node->initialized = true;
 	}
 
 	slot = gather_getnext(node);
 
-	if (TupIsNull(slot))
-	{
-		/*
-		 * Destroy the parallel context once we complete fetching all the
-		 * tuples.  Otherwise, the DSM and workers will stick around for the
-		 * lifetime of the entire statement.
-		 */
-		ExecShutdownGather(node);
-	}
 	return slot;
 }
 
@@ -194,10 +199,9 @@ gather_getnext(GatherState *gatherstate)
 	 */
 	slot = gatherstate->ps.ps_ProjInfo->pi_slot;
 
-	while (gatherstate->need_to_scan_workers ||
-		   gatherstate->need_to_scan_locally)
+	while (gatherstate->funnel != NULL || gatherstate->need_to_scan_locally)
 	{
-		if (gatherstate->need_to_scan_workers)
+		if (gatherstate->funnel != NULL)
 		{
 			bool		done = false;
 
@@ -206,7 +210,7 @@ gather_getnext(GatherState *gatherstate)
 									   gatherstate->need_to_scan_locally,
 									   &done);
 			if (done)
-				gatherstate->need_to_scan_workers = false;
+				ExecShutdownGather(gatherstate);
 
 			if (HeapTupleIsValid(tup))
 			{
@@ -247,30 +251,20 @@ gather_getnext(GatherState *gatherstate)
 void
 ExecShutdownGather(GatherState *node)
 {
-	Gather *gather;
-
-	if (node->pei == NULL || node->pei->pcxt == NULL)
-		return;
-
-	/*
-	 * Ensure all workers have finished before destroying the parallel context
-	 * to ensure a clean exit.
-	 */
-	if (node->funnel)
+	/* Shut down tuple queue funnel before shutting down workers. */
+	if (node->funnel != NULL)
 	{
 		DestroyTupleQueueFunnel(node->funnel);
 		node->funnel = NULL;
 	}
 
-	ExecParallelFinish(node->pei);
-
-	/* destroy parallel context. */
-	DestroyParallelContext(node->pei->pcxt);
-	node->pei->pcxt = NULL;
-
-	gather = (Gather *) node->ps.plan;
-	node->need_to_scan_locally = !gather->single_copy;
-	node->need_to_scan_workers = false;
+	/* Now shut down the workers. */
+	if (node->pei != NULL)
+	{
+		ExecParallelFinish(node->pei);
+		ExecParallelCleanup(node->pei);
+		node->pei = NULL;
+	}
 }
 
 /* ----------------------------------------------------------------
@@ -295,5 +289,7 @@ ExecReScanGather(GatherState *node)
 	 */
 	ExecShutdownGather(node);
 
+	node->initialized = false;
+
 	ExecReScan(node->ps.lefttree);
 }
diff --git a/src/include/executor/execParallel.h b/src/include/executor/execParallel.h
index 4fc797a..505500e 100644
--- a/src/include/executor/execParallel.h
+++ b/src/include/executor/execParallel.h
@@ -32,5 +32,6 @@ typedef struct ParallelExecutorInfo
 extern ParallelExecutorInfo *ExecInitParallelPlan(PlanState *planstate,
 					 EState *estate, int nworkers);
 extern void ExecParallelFinish(ParallelExecutorInfo *pei);
+extern void ExecParallelCleanup(ParallelExecutorInfo *pei);
 
 #endif   /* EXECPARALLEL_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index b6895f9..d705445 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -1960,9 +1960,9 @@ typedef struct UniqueState
 typedef struct GatherState
 {
 	PlanState	ps;				/* its first field is NodeTag */
+	bool		initialized;
 	struct ParallelExecutorInfo *pei;
 	struct TupleQueueFunnel *funnel;
-	bool		need_to_scan_workers;
 	bool		need_to_scan_locally;
 } GatherState;
 
-- 
2.3.8 (Apple Git-58)
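
The ExecutePlan change above can be summarized by a single rule: a non-zero
tuple count means the caller may stop fetching early and come back for more
later, which the parallel machinery cannot yet survive, so parallel mode must
not be entered at all.  A minimal standalone model of that rule (stand-in code,
not execMain.c):

#include <stdio.h>
#include <stdbool.h>

static void
execute_plan(long number_tuples, bool parallel_mode_needed)
{
    bool use_parallel_mode = parallel_mode_needed;

    /* If a tuple count was supplied, force a non-parallel run. */
    if (number_tuples != 0)
        use_parallel_mode = false;

    printf("count=%ld planner-wanted-parallel=%d -> run parallel=%d\n",
           number_tuples, parallel_mode_needed, use_parallel_mode);
}

int
main(void)
{
    execute_plan(0, true);      /* run the whole query: parallelism allowed */
    execute_plan(50, true);     /* fetch 50 and maybe resume later: forced serial */
    return 0;
}

Everything else in the patch follows from pushing EnterParallelMode() and
ExitParallelMode() down to the place where this decision can be made.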

Attachment: 0013-Modify-tqueue-infrastructure-to-support-transient-re.patch (application/x-patch)
From 2ee78b44a088b8f9e7b4fa0f1d05a7c89e9f169e Mon Sep 17 00:00:00 2001
From: Robert Haas <rhaas@postgresql.org>
Date: Wed, 7 Oct 2015 12:43:22 -0400
Subject: [PATCH 13/14] Modify tqueue infrastructure to support transient
 record types.

Commit 4a4e6893aa080b9094dadbe0e65f8a75fee41ac6, which introduced this
mechanism, failed to account for the fact that the RECORD pseudo-type
uses transient typmods that are only meaningful within a single
backend.  Transferring such tuples without modification between two
cooperating backends does not work.  This commit installs a system
for passing the tuple descriptors over the same shm_mq being used to
send the tuples themselves.  The two sides might not assign the same
transient typmod to any given tuple descriptor, so we must also
substitute the appropriate receiver-side typmod for the one used by
the sender.  That adds some CPU overhead, but still seems better than
being unable to pass records between cooperating parallel processes.
---
 src/backend/executor/nodeGather.c |   1 +
 src/backend/executor/tqueue.c     | 492 +++++++++++++++++++++++++++++++++++---
 src/include/executor/tqueue.h     |   4 +-
 3 files changed, 467 insertions(+), 30 deletions(-)

diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c
index 7e2272f..bf62eee 100644
--- a/src/backend/executor/nodeGather.c
+++ b/src/backend/executor/nodeGather.c
@@ -207,6 +207,7 @@ gather_getnext(GatherState *gatherstate)
 
 			/* wait only if local scan is done */
 			tup = TupleQueueFunnelNext(gatherstate->funnel,
+									   slot->tts_tupleDescriptor,
 									   gatherstate->need_to_scan_locally,
 									   &done);
 			if (done)
diff --git a/src/backend/executor/tqueue.c b/src/backend/executor/tqueue.c
index 67143d3..53b69e0 100644
--- a/src/backend/executor/tqueue.c
+++ b/src/backend/executor/tqueue.c
@@ -21,23 +21,55 @@
 #include "postgres.h"
 
 #include "access/htup_details.h"
+#include "catalog/pg_type.h"
 #include "executor/tqueue.h"
+#include "funcapi.h"
+#include "lib/stringinfo.h"
 #include "miscadmin.h"
+#include "utils/array.h"
+#include "utils/memutils.h"
+#include "utils/typcache.h"
 
 typedef struct
 {
 	DestReceiver pub;
 	shm_mq_handle *handle;
+	MemoryContext	tmpcontext;
+	HTAB	   *recordhtab;
+	char		mode;
 }	TQueueDestReceiver;
 
+typedef struct RecordTypemodMap
+{
+	int			remotetypmod;
+	int			localtypmod;
+} RecordTypemodMap;
+
 struct TupleQueueFunnel
 {
 	int			nqueues;
 	int			maxqueues;
 	int			nextqueue;
 	shm_mq_handle **queue;
+	char	   *mode;
+	HTAB	   *typmodmap;
 };
 
+#define		TUPLE_QUEUE_MODE_CONTROL			'c'
+#define		TUPLE_QUEUE_MODE_DATA				'd'
+
+static void tqueueWalkRecord(TQueueDestReceiver *tqueue, Datum value);
+static void tqueueWalkRecordArray(TQueueDestReceiver *tqueue, Datum value);
+static void TupleQueueHandleControlMessage(TupleQueueFunnel *funnel,
+			Size nbytes, char *data);
+static HeapTuple TupleQueueHandleDataMessage(TupleQueueFunnel *funnel,
+							TupleDesc tupledesc, Size nbytes,
+							HeapTupleHeader data);
+static HeapTuple TupleQueueRemapTuple(TupleQueueFunnel *funnel,
+					 TupleDesc tupledesc, HeapTuple tuple);
+static Datum TupleQueueRemapRecord(TupleQueueFunnel *funnel, Datum value);
+static Datum TupleQueueRemapRecordArray(TupleQueueFunnel *funnel, Datum value);
+
 /*
  * Receive a tuple.
  */
@@ -46,12 +78,178 @@ tqueueReceiveSlot(TupleTableSlot *slot, DestReceiver *self)
 {
 	TQueueDestReceiver *tqueue = (TQueueDestReceiver *) self;
 	HeapTuple	tuple;
+	HeapTupleHeader tup;
+	AttrNumber	i;
 
 	tuple = ExecMaterializeSlot(slot);
+	tup = tuple->t_data;
+
+	/*
+	 * If any of the columns that we're sending back are records, special
+	 * handling is required, because the tuple descriptors are stored in a
+	 * backend-local cache, and the backend receiving data from us need not
+	 * have the same cache contents we do.  We grovel through the tuple,
+	 * find all the transient record types contained therein, and send
+	 * special control messages through the queue so that the receiving
+	 * process can interpret them correctly.
+	 */
+	for (i = 0; i < slot->tts_tupleDescriptor->natts; ++i)
+	{
+		Form_pg_attribute attr = slot->tts_tupleDescriptor->attrs[i];
+		MemoryContext	oldcontext;
+
+		/* Ignore nulls and non-records. */
+		if (slot->tts_isnull[i] || (attr->atttypid != RECORDOID
+			&& attr->atttypid != RECORDARRAYOID))
+			continue;
+
+		/*
+		 * OK, we're going to need to examine this attribute.  We could
+		 * use heap_deform_tuple here, but there's a possibility that the
+		 * slot already contains the deconstructed tuple, in which case
+		 * deforming it again would be needlessly inefficient.
+		 */
+		slot_getallattrs(slot);
+
+		/* Switch to temporary memory context to avoid leaking. */
+		if (tqueue->tmpcontext == NULL)
+			tqueue->tmpcontext =
+				AllocSetContextCreate(TopTransactionContext,
+									  "tqueue temporary context",
+									  ALLOCSET_DEFAULT_MINSIZE,
+									  ALLOCSET_DEFAULT_INITSIZE,
+									  ALLOCSET_DEFAULT_MAXSIZE);
+		oldcontext = MemoryContextSwitchTo(tqueue->tmpcontext);
+		if (attr->atttypid == RECORDOID)
+			tqueueWalkRecord(tqueue, slot->tts_values[i]);
+		else
+			tqueueWalkRecordArray(tqueue, slot->tts_values[i]);
+		MemoryContextSwitchTo(oldcontext);
+
+		/* Clean up any memory we allocated. */
+		MemoryContextReset(tqueue->tmpcontext);
+	}
+
+	/* If we entered control mode, switch back to data mode. */
+	if (tqueue->mode != TUPLE_QUEUE_MODE_DATA)
+	{
+		tqueue->mode = TUPLE_QUEUE_MODE_DATA;
+		shm_mq_send(tqueue->handle, sizeof(char), &tqueue->mode, false);
+	}
+
+	/* Send the tuple itself. */
 	shm_mq_send(tqueue->handle, tuple->t_len, tuple->t_data, false);
 }
 
 /*
+ * Walk a record and send control messages for transient record types
+ * contained therein.
+ */
+static void
+tqueueWalkRecord(TQueueDestReceiver *tqueue, Datum value)
+{
+	HeapTupleHeader	tup;
+	Oid			typmod;
+	bool		found;
+	TupleDesc	tupledesc;
+	Datum	   *values;
+	bool	   *isnull;
+	HeapTupleData	tdata;
+	AttrNumber	i;
+
+	/* Extract typmod from tuple. */
+	tup = DatumGetHeapTupleHeader(value);
+	typmod = HeapTupleHeaderGetTypMod(tup);
+
+	/* Look up tuple descriptor in typecache. */
+	tupledesc = lookup_rowtype_tupdesc(RECORDOID, typmod);
+
+	/* Initialize hash table if not done yet. */
+	if (tqueue->recordhtab == NULL)
+	{
+		HASHCTL		ctl;
+
+		ctl.keysize = sizeof(int);
+		ctl.entrysize = sizeof(int);
+		ctl.hcxt = TopMemoryContext;
+		tqueue->recordhtab = hash_create("tqueue record hashtable",
+										 100, &ctl, HASH_ELEM | HASH_CONTEXT);
+	}
+
+	/* Have we already seen this record type?  If not, must report it. */
+	hash_search(tqueue->recordhtab, &typmod, HASH_ENTER, &found);
+	if (!found)
+	{
+		StringInfoData	buf;
+
+		/* If message queue is in data mode, switch to control mode. */
+		if (tqueue->mode != TUPLE_QUEUE_MODE_CONTROL)
+		{
+			tqueue->mode = TUPLE_QUEUE_MODE_CONTROL;
+			shm_mq_send(tqueue->handle, sizeof(char), &tqueue->mode, false);
+		}
+
+		/* Assemble a control message. */
+		initStringInfo(&buf);
+		appendBinaryStringInfo(&buf, (char *) &typmod, sizeof(int));
+		appendBinaryStringInfo(&buf, (char *) &tupledesc->natts, sizeof(int));
+		appendBinaryStringInfo(&buf, (char *) &tupledesc->tdhasoid,
+							   sizeof(bool));
+		for (i = 0; i < tupledesc->natts; ++i)
+			appendBinaryStringInfo(&buf, (char *) tupledesc->attrs[i],
+								   sizeof(FormData_pg_attribute));
+
+		/* Send control message. */
+		shm_mq_send(tqueue->handle, buf.len, buf.data, false);
+	}
+
+	/* Deform the tuple so we can check each column within. */
+	values = palloc(tupledesc->natts * sizeof(Datum));
+	isnull = palloc(tupledesc->natts * sizeof(bool));
+	tdata.t_len = HeapTupleHeaderGetDatumLength(tup);
+	ItemPointerSetInvalid(&(tdata.t_self));
+	tdata.t_tableOid = InvalidOid;
+	tdata.t_data = tup;
+	heap_deform_tuple(&tdata, tupledesc, values, isnull);
+
+	/* Recursively check each non-NULL attribute. */
+	for (i = 0; i < tupledesc->natts; ++i)
+	{
+		Form_pg_attribute attr = tupledesc->attrs[i];
+		if (isnull[i])
+			continue;
+		if (attr->atttypid == RECORDOID)
+			tqueueWalkRecord(tqueue, values[i]);
+		if (attr->atttypid == RECORDARRAYOID)
+			tqueueWalkRecordArray(tqueue, values[i]);
+	}
+
+	/* Release reference count acquired by lookup_rowtype_tupdesc. */
+	DecrTupleDescRefCount(tupledesc);
+}
+
+/*
+ * Walk an array of records and send control messages for any transient
+ * record types contained therein.
+ */
+static void
+tqueueWalkRecordArray(TQueueDestReceiver *tqueue, Datum value)
+{
+	ArrayType  *arr = DatumGetArrayTypeP(value);
+	Datum	   *elem_values;
+	bool	   *elem_nulls;
+	int			num_elems;
+	int			i;
+
+	Assert(ARR_ELEMTYPE(arr) == RECORDOID);
+	deconstruct_array(arr, RECORDOID, -1, false, 'd',
+					  &elem_values, &elem_nulls, &num_elems);
+	for (i = 0; i < num_elems; ++i)
+		if (!elem_nulls[i])
+			tqueueWalkRecord(tqueue, elem_values[i]);
+}
+
+/*
  * Prepare to receive tuples from executor.
  */
 static void
@@ -77,6 +275,12 @@ tqueueShutdownReceiver(DestReceiver *self)
 static void
 tqueueDestroyReceiver(DestReceiver *self)
 {
+	TQueueDestReceiver *tqueue = (TQueueDestReceiver *) self;
+
+	if (tqueue->tmpcontext != NULL)
+		MemoryContextDelete(tqueue->tmpcontext);
+	if (tqueue->recordhtab != NULL)
+		hash_destroy(tqueue->recordhtab);
 	pfree(self);
 }
 
@@ -96,6 +300,9 @@ CreateTupleQueueDestReceiver(shm_mq_handle *handle)
 	self->pub.rDestroy = tqueueDestroyReceiver;
 	self->pub.mydest = DestTupleQueue;
 	self->handle = handle;
+	self->tmpcontext = NULL;
+	self->recordhtab = NULL;
+	self->mode = TUPLE_QUEUE_MODE_DATA;
 
 	return (DestReceiver *) self;
 }
@@ -110,6 +317,7 @@ CreateTupleQueueFunnel(void)
 
 	funnel->maxqueues = 8;
 	funnel->queue = palloc(funnel->maxqueues * sizeof(shm_mq_handle *));
+	funnel->mode = palloc(funnel->maxqueues * sizeof(char));
 
 	return funnel;
 }
@@ -125,6 +333,9 @@ DestroyTupleQueueFunnel(TupleQueueFunnel *funnel)
 	for (i = 0; i < funnel->nqueues; i++)
 		shm_mq_detach(shm_mq_get_queue(funnel->queue[i]));
 	pfree(funnel->queue);
+	pfree(funnel->mode);
+	if (funnel->typmodmap != NULL)
+		hash_destroy(funnel->typmodmap);
 	pfree(funnel);
 }
 
@@ -134,12 +345,6 @@ DestroyTupleQueueFunnel(TupleQueueFunnel *funnel)
 void
 RegisterTupleQueueOnFunnel(TupleQueueFunnel *funnel, shm_mq_handle *handle)
 {
-	if (funnel->nqueues < funnel->maxqueues)
-	{
-		funnel->queue[funnel->nqueues++] = handle;
-		return;
-	}
-
 	if (funnel->nqueues >= funnel->maxqueues)
 	{
 		int			newsize = funnel->nqueues * 2;
@@ -148,10 +353,12 @@ RegisterTupleQueueOnFunnel(TupleQueueFunnel *funnel, shm_mq_handle *handle)
 
 		funnel->queue = repalloc(funnel->queue,
 								 newsize * sizeof(shm_mq_handle *));
+		funnel->mode = repalloc(funnel->mode, newsize * sizeof(char));
 		funnel->maxqueues = newsize;
 	}
 
-	funnel->queue[funnel->nqueues++] = handle;
+	funnel->queue[funnel->nqueues] = handle;
+	funnel->mode[funnel->nqueues++] = TUPLE_QUEUE_MODE_DATA;
 }
 
 /*
@@ -172,7 +379,8 @@ RegisterTupleQueueOnFunnel(TupleQueueFunnel *funnel, shm_mq_handle *handle)
  * any other case.
  */
 HeapTuple
-TupleQueueFunnelNext(TupleQueueFunnel *funnel, bool nowait, bool *done)
+TupleQueueFunnelNext(TupleQueueFunnel *funnel, TupleDesc tupledesc,
+					 bool nowait, bool *done)
 {
 	int			waitpos = funnel->nextqueue;
 
@@ -190,6 +398,7 @@ TupleQueueFunnelNext(TupleQueueFunnel *funnel, bool nowait, bool *done)
 	for (;;)
 	{
 		shm_mq_handle *mqh = funnel->queue[funnel->nextqueue];
+		char	   *modep = &funnel->mode[funnel->nextqueue];
 		shm_mq_result result;
 		Size		nbytes;
 		void	   *data;
@@ -198,15 +407,10 @@ TupleQueueFunnelNext(TupleQueueFunnel *funnel, bool nowait, bool *done)
 		result = shm_mq_receive(mqh, &nbytes, &data, true);
 
 		/*
-		 * Normally, we advance funnel->nextqueue to the next queue at this
-		 * point, but if we're pointing to a queue that we've just discovered
-		 * is detached, then forget that queue and leave the pointer where it
-		 * is until the number of remaining queues fall below that pointer and
-		 * at that point make the pointer point to the first queue.
+		 * If this queue has been detached, forget about it and shift the
+		 * remaining queues downward in the array.
 		 */
-		if (result != SHM_MQ_DETACHED)
-			funnel->nextqueue = (funnel->nextqueue + 1) % funnel->nqueues;
-		else
+		if (result == SHM_MQ_DETACHED)
 		{
 			--funnel->nqueues;
 			if (funnel->nqueues == 0)
@@ -230,21 +434,32 @@ TupleQueueFunnelNext(TupleQueueFunnel *funnel, bool nowait, bool *done)
 			continue;
 		}
 
+		/* Advance nextqueue pointer to next queue in round-robin fashion. */
+		funnel->nextqueue = (funnel->nextqueue + 1) % funnel->nqueues;
+
 		/* If we got a message, return it. */
 		if (result == SHM_MQ_SUCCESS)
 		{
-			HeapTupleData htup;
-
-			/*
-			 * The tuple data we just read from the queue is only valid until
-			 * we again attempt to read from it.  Copy the tuple into a single
-			 * palloc'd chunk as callers will expect.
-			 */
-			ItemPointerSetInvalid(&htup.t_self);
-			htup.t_tableOid = InvalidOid;
-			htup.t_len = nbytes;
-			htup.t_data = data;
-			return heap_copytuple(&htup);
+			if (nbytes == 1)
+			{
+				/* Mode switch message. */
+				*modep = ((char *) data)[0];
+				continue;
+			}
+			else if (*modep == TUPLE_QUEUE_MODE_DATA)
+			{
+				/* Tuple data. */
+				return TupleQueueHandleDataMessage(funnel, tupledesc,
+												   nbytes, data);
+			}
+			else if (*modep == TUPLE_QUEUE_MODE_CONTROL)
+			{
+				/* Control message, describing a transient record type. */
+				TupleQueueHandleControlMessage(funnel, nbytes, data);
+				continue;
+			}
+			else
+				elog(ERROR, "invalid mode: %d", (int) *modep);
 		}
 
 		/*
@@ -262,3 +477,224 @@ TupleQueueFunnelNext(TupleQueueFunnel *funnel, bool nowait, bool *done)
 		}
 	}
 }
+
+/*
+ * Handle a data message - that is, a tuple - from the tuple queue funnel.
+ */
+static HeapTuple
+TupleQueueHandleDataMessage(TupleQueueFunnel *funnel, TupleDesc tupledesc,
+							Size nbytes, HeapTupleHeader data)
+{
+	HeapTupleData htup;
+
+	ItemPointerSetInvalid(&htup.t_self);
+	htup.t_tableOid = InvalidOid;
+	htup.t_len = nbytes;
+	htup.t_data = data;
+
+	/* If necessary, remap record typmods. */
+	if (funnel->typmodmap != NULL)
+	{
+		HeapTuple	newtuple;
+
+		newtuple = TupleQueueRemapTuple(funnel, tupledesc, &htup);
+		if (newtuple != NULL)
+			return newtuple;
+	}
+
+	/*
+	 * Otherwise, just copy the tuple into a single palloc'd chunk, as
+	 * callers will expect.
+	 */
+	return heap_copytuple(&htup);
+}
+
+/*
+ * Remap tuple typmods per control information received from remote side.
+ */
+static HeapTuple
+TupleQueueRemapTuple(TupleQueueFunnel *funnel, TupleDesc tupledesc,
+					 HeapTuple tuple)
+{
+	Datum	   *values;
+	bool	   *isnull;
+	bool		dirty = false;
+	int			i;
+
+	/* Deform tuple so we can remap record typmods for individual attrs. */
+	values = palloc(tupledesc->natts * sizeof(Datum));
+	isnull = palloc(tupledesc->natts * sizeof(bool));
+	heap_deform_tuple(tuple, tupledesc, values, isnull);
+
+	/* Recursively check each non-NULL attribute. */
+	for (i = 0; i < tupledesc->natts; ++i)
+	{
+		Form_pg_attribute attr = tupledesc->attrs[i];
+
+		if (isnull[i])
+			continue;
+
+		if (attr->atttypid == RECORDOID)
+		{
+			values[i] = TupleQueueRemapRecord(funnel, values[i]);
+			dirty = true;
+		}
+
+
+		if (attr->atttypid == RECORDARRAYOID)
+		{
+			values[i] = TupleQueueRemapRecordArray(funnel, values[i]);
+			dirty = true;
+		}
+	}
+
+	/* If we didn't need to change anything, just return NULL. */
+	if (!dirty)
+		return NULL;
+
+	/* Reform the modified tuple. */
+	return heap_form_tuple(tupledesc, values, isnull);
+}
+
+static Datum
+TupleQueueRemapRecord(TupleQueueFunnel *funnel, Datum value)
+{
+	HeapTupleHeader	tup;
+	int				remotetypmod;
+	RecordTypemodMap *mapent;
+	TupleDesc		atupledesc;
+	HeapTupleData	htup;
+	HeapTuple		atup;
+
+	tup = DatumGetHeapTupleHeader(value);
+
+	/* Map remote typmod to local typmod and get tupledesc. */
+	remotetypmod = HeapTupleHeaderGetTypMod(tup);
+	Assert(funnel->typmodmap != NULL);
+	mapent = hash_search(funnel->typmodmap, &remotetypmod,
+						 HASH_FIND, NULL);
+	if (mapent == NULL)
+		elog(ERROR, "found unrecognized remote typmod %d",
+			 remotetypmod);
+	atupledesc = lookup_rowtype_tupdesc(RECORDOID, mapent->localtypmod);
+
+	/* Recursively process contents of record. */
+	ItemPointerSetInvalid(&htup.t_self);
+	htup.t_tableOid = InvalidOid;
+	htup.t_len = HeapTupleHeaderGetDatumLength(tup);
+	htup.t_data = tup;
+	atup = TupleQueueRemapTuple(funnel, atupledesc, &htup);
+
+	/* Release reference count acquired by lookup_rowtype_tupdesc. */
+	DecrTupleDescRefCount(atupledesc);
+
+	/*
+	 * Even if none of the attributes inside this tuple are records that
+	 * require typmod remapping, we still need to change the typmod on
+	 * the record itself.  However, we can do that by copying the tuple
+	 * rather than reforming it.
+	 */
+	if (atup == NULL)
+	{
+		atup = heap_copytuple(&htup);
+		HeapTupleHeaderSetTypMod(atup->t_data, mapent->localtypmod);
+	}
+
+	return HeapTupleHeaderGetDatum(atup->t_data);
+}
+
+static Datum
+TupleQueueRemapRecordArray(TupleQueueFunnel *funnel, Datum value)
+{
+	ArrayType  *arr = DatumGetArrayTypeP(value);
+	Datum	   *elem_values;
+	bool	   *elem_nulls;
+	int			num_elems;
+	int			i;
+
+	Assert(ARR_ELEMTYPE(arr) == RECORDOID);
+	deconstruct_array(arr, RECORDOID, -1, false, 'd',
+					  &elem_values, &elem_nulls, &num_elems);
+	for (i = 0; i < num_elems; ++i)
+		if (!elem_nulls[i])
+			elem_values[i] = TupleQueueRemapRecord(funnel, elem_values[i]);
+	arr = construct_md_array(elem_values, elem_nulls,
+							 ARR_NDIM(arr), ARR_DIMS(arr), ARR_LBOUND(arr),
+							 RECORDOID,
+							 -1, false, 'd');
+	return PointerGetDatum(arr);
+}
+
+/*
+ * Handle a control message from the tuple queue funnel.
+ *
+ * Control messages are sent when the remote side is sending tuples that
+ * contain transient record types.  We need to arrange to bless those
+ * record types locally and translate between remote and local typmods.
+ */
+static void
+TupleQueueHandleControlMessage(TupleQueueFunnel *funnel, Size nbytes,
+							   char *data)
+{
+	int		natts;
+	int		remotetypmod;
+	bool	hasoid;
+	char   *buf = data;
+	int		rc = 0;
+	int		i;
+	Form_pg_attribute *attrs;
+	MemoryContext	oldcontext;
+	TupleDesc	tupledesc;
+	RecordTypemodMap *mapent;
+	bool	found;
+
+	/* Extract remote typmod. */
+	memcpy(&remotetypmod, &buf[rc], sizeof(int));
+	rc += sizeof(int);
+
+	/* Extract attribute count. */
+	memcpy(&natts, &buf[rc], sizeof(int));
+	rc += sizeof(int);
+
+	/* Extract hasoid flag. */
+	memcpy(&hasoid, &buf[rc], sizeof(bool));
+	rc += sizeof(bool);
+
+	/* Extract attribute details. */
+	oldcontext = MemoryContextSwitchTo(CurTransactionContext);
+	attrs = palloc(natts * sizeof(Form_pg_attribute));
+	for (i = 0; i < natts; ++i)
+	{
+		attrs[i] = palloc(sizeof(FormData_pg_attribute));
+		memcpy(attrs[i], &buf[rc], sizeof(FormData_pg_attribute));
+		rc += sizeof(FormData_pg_attribute);
+	}
+	MemoryContextSwitchTo(oldcontext);
+
+	/* We should have read the whole message. */
+	Assert(rc == nbytes);
+
+	/* Construct TupleDesc. */
+	tupledesc = CreateTupleDesc(natts, hasoid, attrs);
+	tupledesc = BlessTupleDesc(tupledesc);
+
+	/* Create map if it doesn't exist already. */
+	if (funnel->typmodmap == NULL)
+	{
+		HASHCTL		ctl;
+
+		ctl.keysize = sizeof(int);
+		ctl.entrysize = sizeof(RecordTypemodMap);
+		ctl.hcxt = CurTransactionContext;
+		funnel->typmodmap = hash_create("typmodmap hashtable",
+							 100, &ctl, HASH_ELEM | HASH_CONTEXT);
+	}
+
+	/* Create map entry. */
+	mapent = hash_search(funnel->typmodmap, &remotetypmod, HASH_ENTER,
+						 &found);
+	if (found)
+		elog(ERROR, "duplicate message for typmod %d",
+			 remotetypmod);
+	mapent->localtypmod = tupledesc->tdtypmod;
+}
diff --git a/src/include/executor/tqueue.h b/src/include/executor/tqueue.h
index 6f8eb73..59f35c7 100644
--- a/src/include/executor/tqueue.h
+++ b/src/include/executor/tqueue.h
@@ -25,7 +25,7 @@ typedef struct TupleQueueFunnel TupleQueueFunnel;
 extern TupleQueueFunnel *CreateTupleQueueFunnel(void);
 extern void DestroyTupleQueueFunnel(TupleQueueFunnel *funnel);
 extern void RegisterTupleQueueOnFunnel(TupleQueueFunnel *, shm_mq_handle *);
-extern HeapTuple TupleQueueFunnelNext(TupleQueueFunnel *, bool nowait,
-					 bool *done);
+extern HeapTuple TupleQueueFunnelNext(TupleQueueFunnel *, TupleDesc tupledesc,
+					 bool nowait, bool *done);
 
 #endif   /* TQUEUE_H */
-- 
2.3.8 (Apple Git-58)
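
The shm_mq protocol this patch adds is easy to lose in the diff, so here is a
standalone sketch of it (stand-in structures, not tqueue.c): a one-byte message
flips the stream between control mode and data mode, each control message
describes one transient record type, and the reader keeps a remote-typmod to
local-typmod map that it applies to the data messages that follow.

#include <stdio.h>

#define MODE_CONTROL 'c'
#define MODE_DATA    'd'

struct message
{
    int  len;          /* 1 = mode-switch message */
    char new_mode;     /* meaningful only when len == 1 */
    int  typmod;       /* control: remote typmod; data: typmod inside tuple */
};

int
main(void)
{
    /* worker: switch to control, describe remote typmod 7, switch back to
     * data, then send a tuple whose record column uses remote typmod 7 */
    struct message msgs[] = {
        {1, MODE_CONTROL, 0},
        {16, 0, 7},
        {1, MODE_DATA, 0},
        {64, 0, 7},
    };
    char mode = MODE_DATA;
    int  map_remote = -1;
    int  map_local = -1;
    int  next_local_typmod = 0;  /* stands in for what blessing would hand out */
    int  i;

    for (i = 0; i < 4; ++i)
    {
        if (msgs[i].len == 1)
            mode = msgs[i].new_mode;    /* mode switch */
        else if (mode == MODE_CONTROL)
        {
            /* register the descriptor locally and remember the mapping
             * (a single-entry "map" is enough for this sketch) */
            map_remote = msgs[i].typmod;
            map_local = next_local_typmod++;
            printf("remote typmod %d -> local typmod %d\n",
                   map_remote, map_local);
        }
        else
            printf("tuple received: rewrite typmod %d to %d before returning\n",
                   msgs[i].typmod,
                   msgs[i].typmod == map_remote ? map_local : msgs[i].typmod);
    }
    return 0;
}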

Attachment: 0014-Fix-problems-with-ParamListInfo-serialization-mechan.patch (application/x-patch)
From ad2fbc6fbf143db4f8b2231f03100b60029a1275 Mon Sep 17 00:00:00 2001
From: Robert Haas <rhaas@postgresql.org>
Date: Mon, 12 Oct 2015 11:46:40 -0400
Subject: [PATCH 14/14] Fix problems with ParamListInfo serialization
 mechanism.

Commit d1b7c1ffe72e86932b5395f29e006c3f503bc53d introduced a mechanism
for serializing a ParamListInfo structure to be passed to a parallel
worker.  However, this mechanism failed to handle external expanded
values, as pointed out by Noah Misch.  Moreover, plpgsql_param_fetch
requires adjustment because the serialization mechanism needs it to skip
evaluating unused parameters just as we would do when it is called from
copyParamList, but params == estate->paramLI in that case.  To fix, do
the relevant bms_is_member test unconditionally.
---
 src/backend/utils/adt/datum.c | 16 ++++++++++++++++
 src/pl/plpgsql/src/pl_exec.c  | 26 +++++++++++---------------
 2 files changed, 27 insertions(+), 15 deletions(-)

diff --git a/src/backend/utils/adt/datum.c b/src/backend/utils/adt/datum.c
index 3d9e354..0d61950 100644
--- a/src/backend/utils/adt/datum.c
+++ b/src/backend/utils/adt/datum.c
@@ -264,6 +264,11 @@ datumEstimateSpace(Datum value, bool isnull, bool typByVal, int typLen)
 		/* no need to use add_size, can't overflow */
 		if (typByVal)
 			sz += sizeof(Datum);
+		else if (VARATT_IS_EXTERNAL_EXPANDED(value))
+		{
+			ExpandedObjectHeader *eoh = DatumGetEOHP(value);
+			sz += EOH_get_flat_size(eoh);
+		}
 		else
 			sz += datumGetSize(value, typByVal, typLen);
 	}
@@ -292,6 +297,7 @@ void
 datumSerialize(Datum value, bool isnull, bool typByVal, int typLen,
 			   char **start_address)
 {
+	ExpandedObjectHeader *eoh = NULL;
 	int		header;
 
 	/* Write header word. */
@@ -299,6 +305,11 @@ datumSerialize(Datum value, bool isnull, bool typByVal, int typLen,
 		header = -2;
 	else if (typByVal)
 		header = -1;
+	else if (VARATT_IS_EXTERNAL_EXPANDED(value))
+	{
+		eoh = DatumGetEOHP(value);
+		header = EOH_get_flat_size(eoh);
+	}
 	else
 		header = datumGetSize(value, typByVal, typLen);
 	memcpy(*start_address, &header, sizeof(int));
@@ -312,6 +323,11 @@ datumSerialize(Datum value, bool isnull, bool typByVal, int typLen,
 			memcpy(*start_address, &value, sizeof(Datum));
 			*start_address += sizeof(Datum);
 		}
+		else if (eoh)
+		{
+			EOH_flatten_into(eoh, (void *) *start_address, header);
+			*start_address += header;
+		}
 		else
 		{
 			memcpy(*start_address, DatumGetPointer(value), header);
diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c
index c73f20b..346e8f8 100644
--- a/src/pl/plpgsql/src/pl_exec.c
+++ b/src/pl/plpgsql/src/pl_exec.c
@@ -5696,21 +5696,17 @@ plpgsql_param_fetch(ParamListInfo params, int paramid)
 	/* now we can access the target datum */
 	datum = estate->datums[dno];
 
-	/* need to behave slightly differently for shared and unshared arrays */
-	if (params != estate->paramLI)
-	{
-		/*
-		 * We're being called, presumably from copyParamList(), for cursor
-		 * parameters.  Since copyParamList() will try to materialize every
-		 * single parameter slot, it's important to do nothing when asked for
-		 * a datum that's not supposed to be used by this SQL expression.
-		 * Otherwise we risk failures in exec_eval_datum(), not to mention
-		 * possibly copying a lot more data than the cursor actually uses.
-		 */
-		if (!bms_is_member(dno, expr->paramnos))
-			return;
-	}
-	else
+	/*
+	 * Since copyParamList() and SerializeParamList() will try to materialize
+	 * every single parameter slot, it's important to do nothing when asked for
+	 * a datum that's not supposed to be used by this SQL expression.
+	 * Otherwise we risk failures in exec_eval_datum(), not to mention
+	 * possibly copying a lot more data than the cursor actually uses.
+	 */
+	if (!bms_is_member(dno, expr->paramnos))
+		return;
+
+	if (params == estate->paramLI)
 	{
 		/*
 		 * Normal evaluation cases.  We don't need to sanity-check dno, but we
-- 
2.3.8 (Apple Git-58)
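
For reference, the on-the-wire convention that datumSerialize() is extending
here is just a header word: -2 means NULL, -1 means a by-value Datum follows,
and anything else is the byte length of the flattened datum that follows, which
is where expanded objects now go after EOH_get_flat_size() and
EOH_flatten_into().  A standalone model of reading that header (stand-in code,
not datum.c):

#include <stdio.h>

static void
describe_header(int header)
{
    if (header == -2)
        printf("%d: NULL, no payload\n", header);
    else if (header == -1)
        printf("%d: by-value datum, sizeof(Datum) bytes follow\n", header);
    else
        printf("%d: %d bytes of flattened datum follow\n", header, header);
}

int
main(void)
{
    describe_header(-2);        /* isnull */
    describe_header(-1);        /* typByVal */
    describe_header(128);       /* varlena or flattened expanded object */
    return 0;
}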

#2Robert Haas
robertmhaas@gmail.com
In reply to: Robert Haas (#1)
Re: a raft of parallelism-related bug fixes

On Mon, Oct 12, 2015 at 1:04 PM, Robert Haas <robertmhaas@gmail.com> wrote:

Attached are 14 patches. Patches #1-#4 are
essential for testing purposes but are not proposed for commit,
although some of the code they contain may eventually become part of
other patches which are proposed for commit. Patches #5-#12 are
largely boring patches fixing fairly uninteresting mistakes; I propose
to commit these on an expedited basis. Patches #13-14 are also
proposed for commit but seem to me to be more in need of review.

Hearing no objections, I've now gone and committed #5-#12.

0013-Modify-tqueue-infrastructure-to-support-transient-re.patch
attempts to address a deficiency in the tqueue.c/tqueue.h machinery I
recently introduced: backends can have ephemeral record types for
which they use backend-local typmods that may not be the same between
the leader and the worker. This patch has the worker send metadata
about the tuple descriptor for each such type, and the leader
registers the same tuple descriptor and then remaps the typmods from
the worker's typmod space to its own. This seems to work, but I'm a
little concerned that there may be cases it doesn't cover. Also,
there's room to question the overall approach. The only other
alternative that springs readily to mind is to try to arrange things
during the planning phase so that we never try to pass records between
parallel backends in this way, but that seems like it would be hard to
code (and thus likely to have bugs) and also pretty limiting.
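
As a rough sketch of the leader-side idea (this is just an illustration of
the concept, not the patch's actual code, and the function and parameter
names are made up): given the attribute types the worker reports for one of
its transient typmods, the leader can build an equivalent tuple descriptor,
register it in its own typcache, and remember the remote-to-local typmod
mapping:

#include "postgres.h"
#include "access/tupdesc.h"
#include "utils/typcache.h"

/*
 * Sketch only: natts and atttypids stand in for whatever the control
 * message actually carries.  The caller would store the worker's typmod
 * alongside the returned local typmod in a hash table, and rewrite the
 * typmod in each incoming tuple accordingly.
 */
static int32
register_remote_record_type(int natts, Oid *atttypids)
{
	TupleDesc	tupdesc = CreateTemplateTupleDesc(natts, false);
	int			i;

	for (i = 0; i < natts; i++)
		TupleDescInitEntry(tupdesc, i + 1, NULL, atttypids[i], -1, 0);

	/* Assign (or reuse) a RECORD typmod that is valid in this backend. */
	assign_record_type_typmod(tupdesc);

	return tupdesc->tdtypmod;
}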

I am still hoping someone will step up to review this.

0014-Fix-problems-with-ParamListInfo-serialization-mechan.patch, which
I just posted on the Parallel Seq Scan thread as a standalone patch,
fixes pretty much what the name of the file suggests. This actually
fixes two problems, one of which Noah spotted and commented on over on
that thread. By pure coincidence, the last 'make check' regression
failure I was still troubleshooting needed a fix for that issue plus a
fix to plpgsql_param_fetch. However, as I mentioned on the other
thread, I'm not quite sure which way to go with the change to
plpgsql_param_fetch so scrutiny of that point, in particular, would be
appreciated. See also
/messages/by-id/CA+TgmobN=wADVaUTwsH-xqvCdovkeRasuXw2k3R6vmpWig7raw@mail.gmail.com

Noah's been helping with this issue on the other thread. I'll revise
this patch along the lines discussed there and resubmit.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#3Simon Riggs
simon@2ndQuadrant.com
In reply to: Robert Haas (#1)
Re: a raft of parallelism-related bug fixes

On 12 October 2015 at 18:04, Robert Haas <robertmhaas@gmail.com> wrote:

My recent commit of the Gather executor node has made it relatively
simple to write code that does an end-to-end test of all of the
parallelism-relate commits which have thus far gone into the tree.

I've been wanting to help here for a while, but my time remains limited for
the next month or so.

From reading this my understanding is that there isn't a test suite
included with this commit?

I've tried to review the Gather node commit and I note that the commit
message contains a longer description of the functionality in that patch
than any comments in the patch as a whole. No design comments, no README,
no file header comments. For such a major feature that isn't acceptable - I
would reject a patch from others on that basis alone (and have done so). We
must keep the level of comments high if we are to encourage wider
participation in the project.

So reviewing patch 13 isn't possible without prior knowledge.

Hoping we'll be able to find some time on this at PGConf.eu; thanks for
coming over.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#4Robert Haas
robertmhaas@gmail.com
In reply to: Simon Riggs (#3)
Re: a raft of parallelism-related bug fixes

On Sat, Oct 17, 2015 at 9:16 AM, Simon Riggs <simon@2ndquadrant.com> wrote:

From reading this my understanding is that there isn't a test suite included
with this commit?

Right. The patches on the thread contain code that can be used for
testing, but the committed code does not itself include test coverage.
I welcome thoughts on how we could perform automated testing of this
code. I think at least part of the answer is that I need to press on
toward getting the rest of Amit's parallel sequential scan patch
committed, because then this becomes a user-visible feature and I
expect that to make it much easier to find whatever bugs remain. A
big part of the difficulty in testing this up until now is that I've
been building towards, hey, we have parallel query. Until we actually
do, you need to write C code to test this, which raises the bar
considerably.

Now, that does not mean we shouldn't test this in other ways, and of
course I have, and so have Amit and other people from the community -
of late, Noah Misch and Haribabu Kommi have found several bugs through
code inspection and testing, which included some of the same ones that
I was busy finding and fixing using the test code attached to this
thread. That's one of the reasons why I wanted to press forward with
getting the fixes for those bugs committed. It's just a waste of
everybody's time if we keep re-finding known bugs for which fixes already
exist.

But the question of how to test this in the buildfarm is a good one,
and I don't have a complete answer. Once the rest of this goes in,
which I hope will be soon, we can EXPLAIN or EXPLAIN ANALYZE or just
straight up run parallel queries in the regression test suite and see
that they behave as expected. But I don't expect that to provide
terribly good test coverage. One idea that I think would provide
*excellent* test coverage is to take the test code included on this
thread and run it on the buildfarm. The idea of the code is to
basically run the regression test suite with every parallel-eligible
query forced to unnecessarily use parallelism. Turning that on and
running 'make check' found, directly or indirectly, all of these bugs.
Doing that on the whole buildfarm would probably find more.

However, I'm pretty sure that we don't want to switch the *entire*
buildfarm to using lots of unnecessary parallelism. What we might be
able to do is have some critters that people spin up for this precise
purpose. Just like we currently have CLOBBER_CACHE_ALWAYS buildfarm
members, we could have GRATUITOUSLY_PARALLEL buildfarm members. If
Andrew is willing to add buildfarm support for that option and a few
people are willing to run critters in that mode, I will be happy -
more than happy, really - to put the test code into committable form,
guarded by a #define, and away we go.

Of course, other ideas for testing are also welcome.

I've tried to review the Gather node commit and I note that the commit
message contains a longer description of the functionality in that patch
than any comments in the patch as a whole. No design comments, no README, no
file header comments. For such a major feature that isn't acceptable - I
would reject a patch from others on that basis alone (and have done so). We
must keep the level of comments high if we are to encourage wider
participation in the project.

It's good to have your perspective on how this can be improved, and
I'm definitely willing to write more documentation. Any lack in that
area is probably due to being too close to the subject area, having
spent several years on parallelism in general, and 200+ emails on
parallel sequential scan in particular. Your point about the lack of
a good header file comment for execParallel.c is a good one, and I'll
rectify that next week.

It's worth noting, though, that the executor files in general don't
contain great gobs of comments, and the executor README even has this
vintage 2001 comment:

XXX a great deal more documentation needs to be written here...

Well, yeah. It's taken me a long time to understand how the executor
actually works, and there are parts of it - particularly related to
EvalPlanQual - that I still don't fully understand. So some of the
lack of comments in, for example, nodeGather.c is because it copies
the style of other executor nodes, like nodeSeqscan.c. It's not
exactly clear to me what more to document there. You either
understand what a rescan node is, in which case the code for each
node's rescan method tends to be fairly self-evident, or you don't -
but that clearly shouldn't be re-explained in every file. So I guess
what I'm saying is I could use some advice on what kinds of things would
be most useful to document, and where to put that documentation.

Right now, the best explanation of how parallelism works is in
src/backend/access/transam/README.parallel -- but, as you rightly
point out, that doesn't cover the executor bits. Should we have SGML
documentation under "VII. Internals" that explains what's under the
hood in the same way that we have sections for "Database Physical
Storage" and "PostgreSQL Coding Conventions"? Should the stuff in the
existing README.parallel be moved there? Or I could just add some
words to src/backend/executor/README to cover the parallel executor
stuff, if that is preferred. Advice?

Also, regardless of how we document what's going on at the code level,
I think we probably should have a section *somewhere* in the main SGML
documentation that kind of explains the general concepts behind
parallel query from a user/DBA perspective. But I don't know where to
put it. Under "Server Administration"? Exactly what to explain there
needs some thought, too. I'm sort of wondering if we need two
chapters in the documentation on this, one that covers it from a
user/DBA perspective and the other of which covers it from a hacker
perspective. But then maybe the hacker stuff should just go in README
files. I'm not sure. I may have to try writing some of this and see
how it goes, but advice is definitely appreciated.

I am happy to definitively commit to writing whatever documentation
the community feels is necessary here, and I will certainly do that
before the end of development for 9.6, and hopefully much sooner than that.
I will do that even if I don't get any specific feedback on what to
write and where to put it, but the more feedback I get, the better the
result will probably be. Some of the reason this hasn't been done
already is because we're still getting the infrastructure into place,
and we're fixing and adjusting things as we go along, so while the
overall picture isn't changing much, there are bits of the design that
are still in flux as we realize, oh, crap, that was a dumb idea. As
we get a clearer idea what will be in 9.6, it will get easier to
present the overall picture in a coherent way.

So reviewing patch 13 isn't possible without prior knowledge.

The basic question for patch 13 is whether ephemeral record types can
occur in executor tuples in any contexts that I haven't identified. I
know that a tuple table slot can contain a column that is of type
record or record[], and those records can themselves contain
attributes of type record or record[], and so on as far down as you
like. I *think* that's the only case. For example, I don't believe
that a TupleTableSlot can contain a *named* record type that has an
anonymous record buried down inside of it somehow. But I'm not
positive I'm right about that.

Hoping we'll be able to find some time on this at PGConf.eu; thanks for
coming over.

Sure thing.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#5Noah Misch
noah@leadboat.com
In reply to: Robert Haas (#4)
Re: a raft of parallelism-related bug fixes

On Sat, Oct 17, 2015 at 06:17:37PM -0400, Robert Haas wrote:

One idea that I think would provide
*excellent* test coverage is to take the test code included on this
thread and run it on the buildfarm. The idea of the code is to
basically run the regression test suite with every parallel-eligible
query forced to unnecessarily use parallelism. Turning that on and
running 'make check' found, directly or indirectly, all of these bugs.
Doing that on the whole buildfarm would probably find more.

However, I'm pretty sure that we don't want to switch the *entire*
buildfarm to using lots of unnecessary parallelism. What we might be
able to do is have some critters that people spin up for this precise
purpose. Just like we currently have CLOBBER_CACHE_ALWAYS buildfarm
members, we could have GRATUITOUSLY_PARALLEL buildfarm members. If
Andrew is willing to add buildfarm support for that option and a few

What, if anything, would this mode require beyond adding a #define? If
nothing, it won't require specific support in the buildfarm script.
CLOBBER_CACHE_ALWAYS has no specific support.

people are willing to run critters in that mode, I will be happy -
more than happy, really - to put the test code into committable form,
guarded by a #define, and away we go.

I would make one such animal.


#6Stephen Frost
sfrost@snowman.net
In reply to: Noah Misch (#5)
Re: a raft of parallelism-related bug fixes

* Noah Misch (noah@leadboat.com) wrote:

On Sat, Oct 17, 2015 at 06:17:37PM -0400, Robert Haas wrote:

people are willing to run critters in that mode, I will be happy -
more than happy, really - to put the test code into committable form,
guarded by a #define, and away we go.

I would make one such animal.

We're also looking at what animals it makes sense to run as part of
pginfra and I expect we'd be able to include an animal for these tests
also (though Stefan is the one really driving that effort).

Thanks!

Stephen

#7Andrew Dunstan
andrew@dunslane.net
In reply to: Robert Haas (#4)
Re: a raft of parallelism-related bug fixes

On 10/17/2015 06:17 PM, Robert Haas wrote:

However, I'm pretty sure that we don't want to switch the *entire*
buildfarm to using lots of unnecessary parallelism. What we might be
able to do is have some critters that people spin up for this precise
purpose. Just like we currently have CLOBBER_CACHE_ALWAYS buildfarm
members, we could have GRATUITOUSLY_PARALLEL buildfarm members. If
Andrew is willing to add buildfarm support for that option and a few
people are willing to run critters in that mode, I will be happy -
more than happy, really - to put the test code into committable form,
guarded by a #define, and away we go.

If all that is required is a #define, like CLOBBER_CACHE_ALWAYS, then no
special buildfarm support is required - you would just add that to the
animal's config file, more or less like this:

config_env =>
    {
        CPPFLAGS => '-DGRATUITOUSLY_PARALLEL',
    },

I try to make things easy :-)

cheers

andrew


#8Robert Haas
robertmhaas@gmail.com
In reply to: Andrew Dunstan (#7)
Re: a raft of parallelism-related bug fixes

On Sat, Oct 17, 2015 at 9:16 PM, Andrew Dunstan <andrew@dunslane.net> wrote:

If all that is required is a #define, like CLOBBER_CACHE_ALWAYS, then no
special buildfarm support is required - you would just add that to the
animal's config file, more or less like this:

config_env =>
    {
        CPPFLAGS => '-DGRATUITOUSLY_PARALLEL',
    },

I try to make things easy :-)

Wow, that's great. So, I'll try to rework the test code I posted
previously into something less hacky, and eventually add a #define
like this so we can run it on the buildfarm. There's a few other
things that need to get done before that really makes sense - like
getting the rest of the bug fix patches committed - otherwise any
buildfarm critters we add will just be permanently red.

Thanks to Noah and Stephen for your replies also - it is good to hear
that if I spend the time to make this committable, somebody will use
it.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#9Robert Haas
robertmhaas@gmail.com
In reply to: Robert Haas (#4)
1 attachment(s)
Re: a raft of parallelism-related bug fixes

On Sat, Oct 17, 2015 at 6:17 PM, Robert Haas <robertmhaas@gmail.com> wrote:

It's good to have your perspective on how this can be improved, and
I'm definitely willing to write more documentation. Any lack in that
area is probably due to being too close to the subject area, having
spent several years on parallelism in general, and 200+ emails on
parallel sequential scan in particular. Your point about the lack of
a good header file comment for execParallel.c is a good one, and I'll
rectify that next week.

Here is a patch to add a hopefully-useful file header comment to
execParallel.c. I included one for nodeGather.c as well, which seems
to be contrary to previous practice, but actually it seems like
previous practice is not the greatest: surely it's not self-evident
what all of the executor nodes do.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachments:

parallel-exec-header-comments.patch (application/x-patch)
diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c
index 3bb8206..d99e170 100644
--- a/src/backend/executor/execParallel.c
+++ b/src/backend/executor/execParallel.c
@@ -6,6 +6,14 @@
  * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
+ * This file contains routines that are intended to support setting up,
+ * using, and tearing down a ParallelContext from within the PostgreSQL
+ * executor.  The ParallelContext machinery will handle starting the
+ * workers and ensuring that their state generally matches that of the
+ * leader; see src/backend/access/transam/README.parallel for details.
+ * However, we must save and restore relevant executor state, such as
+ * any ParamListInfo associated witih the query, buffer usage info, and
+ * the actual plan to be passed down to the worker.
  *
  * IDENTIFICATION
  *	  src/backend/executor/execParallel.c
diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c
index 7e2272f..017adf2 100644
--- a/src/backend/executor/nodeGather.c
+++ b/src/backend/executor/nodeGather.c
@@ -6,6 +6,20 @@
  * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
+ * A Gather executor launches parallel workers to run multiple copies of a
+ * plan.  It can also run the plan itself, if the workers are not available
+ * or have not started up yet.  It then merges all of the results it produces
+ * and the results from the workers into a single output stream.  Therefore,
+ * it will normally be used with a plan where running multiple copies of the
+ * same plan does not produce duplicate output, such as PartialSeqScan.
+ *
+ * Alternatively, a Gather node can be configured to use just one worker
+ * and the single-copy flag can be set.  In this case, the Gather node will
+ * run the plan in one worker and will not execute the plan itself.  In
+ * this case, it simply returns whatever tuples were returned by the worker.
+ * If a worker cannot be obtained, then it will run the plan itself and
+ * return the results.  Therefore, a plan used with a single-copy Gather
+ * node not be parallel-aware.
  *
  * IDENTIFICATION
  *	  src/backend/executor/nodeGather.c
#10Simon Riggs
simon@2ndQuadrant.com
In reply to: Robert Haas (#4)
Re: a raft of parallelism-related bug fixes

On 17 October 2015 at 18:17, Robert Haas <robertmhaas@gmail.com> wrote:

It's good to have your perspective on how this can be improved, and
I'm definitely willing to write more documentation. Any lack in that
area is probably due to being too close to the subject area, having
spent several years on parallelism in general, and 200+ emails on
parallel sequential scan in particular. Your point about the lack of
a good header file comment for execParallel.c is a good one, and I'll
rectify that next week.

Not on your case in a big way, just noting the need for change there.

I'll help as well, but if you could start with enough basics to allow me to
ask questions that will help. Thanks.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#11Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#9)
Re: a raft of parallelism-related bug fixes

On Tue, Oct 20, 2015 at 8:16 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Sat, Oct 17, 2015 at 6:17 PM, Robert Haas <robertmhaas@gmail.com> wrote:

It's good to have your perspective on how this can be improved, and
I'm definitely willing to write more documentation. Any lack in that
area is probably due to being too close to the subject area, having
spent several years on parallelism in general, and 200+ emails on
parallel sequential scan in particular. Your point about the lack of
a good header file comment for execParallel.c is a good one, and I'll
rectify that next week.

Here is a patch to add a hopefully-useful file header comment to
execParallel.c. I included one for nodeGather.c as well, which seems
to be contrary to previous practice, but actually it seems like
previous practice is not the greatest: surely it's not self-evident
what all of the executor nodes do.

+ * any ParamListInfo associated witih the query, buffer usage info, and
+ * the actual plan to be passed down to the worker.

typo 'witih'.

+ * return the results.  Therefore, a plan used with a single-copy Gather
+ * node not be parallel-aware.

"node not" seems to be incomplete.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

#12Amit Langote
amitlangote09@gmail.com
In reply to: Amit Kapila (#11)
Re: a raft of parallelism-related bug fixes

On Wednesday, 21 October 2015, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Oct 20, 2015 at 8:16 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Sat, Oct 17, 2015 at 6:17 PM, Robert Haas <robertmhaas@gmail.com> wrote:

It's good to have your perspective on how this can be improved, and
I'm definitely willing to write more documentation. Any lack in that
area is probably due to being too close to the subject area, having
spent several years on parallelism in general, and 200+ emails on
parallel sequential scan in particular. Your point about the lack of
a good header file comment for execParallel.c is a good one, and I'll
rectify that next week.

Here is a patch to add a hopefully-useful file header comment to
execParallel.c. I included one for nodeGather.c as well, which seems
to be contrary to previous practice, but actually it seems like
previous practice is not the greatest: surely it's not self-evident
what all of the executor nodes do.

+ * any ParamListInfo associated witih the query, buffer usage info, and
+ * the actual plan to be passed down to the worker.

typo 'witih'.

+ * return the results.  Therefore, a plan used with a single-copy Gather
+ * node not be parallel-aware.

"node not" seems to be incomplete.

... node *need* not be parallel aware?

Thanks,
Amit

#13Robert Haas
robertmhaas@gmail.com
In reply to: Amit Langote (#12)
Re: a raft of parallelism-related bug fixes

On Wed, Oct 21, 2015 at 9:04 AM, Amit Langote <amitlangote09@gmail.com> wrote:

... node *need* not be parallel aware?

Yes, thanks. Committed that way.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#14Robert Haas
robertmhaas@gmail.com
In reply to: Simon Riggs (#10)
Re: a raft of parallelism-related bug fixes

On Tue, Oct 20, 2015 at 6:12 PM, Simon Riggs <simon@2ndquadrant.com> wrote:

Not on your case in a big way, just noting the need for change there.

Yes, I appreciate your attitude. I think we are on the same wavelength.

I'll help as well, but if you could start with enough basics to allow me to
ask questions that will help. Thanks.

Will try to keep pushing in that direction. May be easier once some
of the dust has settled.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#15Robert Haas
robertmhaas@gmail.com
In reply to: Robert Haas (#4)
1 attachment(s)
Re: a raft of parallelism-related bug fixes

On Sun, Oct 18, 2015 at 12:17 AM, Robert Haas <robertmhaas@gmail.com> wrote:

So reviewing patch 13 isn't possible without prior knowledge.

The basic question for patch 13 is whether ephemeral record types can
occur in executor tuples in any contexts that I haven't identified. I
know that a tuple table slot can contain a column that is of type
record or record[], and those records can themselves contain
attributes of type record or record[], and so on as far down as you
like. I *think* that's the only case. For example, I don't believe
that a TupleTableSlot can contain a *named* record type that has an
anonymous record buried down inside of it somehow. But I'm not
positive I'm right about that.

I have done some more testing and investigation and determined that
this optimism was unwarranted. It turns out that the type information
for composite and record types gets stored in two different places.
First, the TupleTableSlot has a type OID, indicating the sort of
value it expects to be stored for that slot attribute. Second, the
value itself contains a type OID and typmod. And these don't have to
match. For example, consider this query:

select row_to_json(i) from int8_tbl i(x,y);

Without i(x,y), the HeapTuple passed to row_to_json is labelled with
the pg_type OID of int8_tbl. But with the query as written, it's
labeled as an anonymous record type. If I jigger things by hacking
the code so that this is planned as Gather (single-copy) -> SeqScan,
with row_to_json evaluated at the Gather node, then the sequential
scan kicks out a tuple with a transient record type and stores it into
a slot whose type OID is still that of int8_tbl. My previous patch
failed to deal with that; the attached one does.
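
To make the mismatch concrete, here is a hypothetical check (not code from
any of these patches; check_record_label and its arguments are made-up
names) comparing the two places the type information lives.  It assumes
attno is a composite-valued column and that the slot has already been
deformed with slot_getallattrs():

#include "postgres.h"
#include "access/htup_details.h"
#include "catalog/pg_type.h"
#include "executor/tuptable.h"
#include "fmgr.h"

/*
 * Hypothetical: compare the slot's declared column type with the type
 * actually stamped on the composite datum stored in that column.
 */
static void
check_record_label(TupleTableSlot *slot, int attno)
{
	Oid			slot_typeid = slot->tts_tupleDescriptor->attrs[attno]->atttypid;
	HeapTupleHeader th;

	if (slot->tts_isnull[attno])
		return;

	th = DatumGetHeapTupleHeader(slot->tts_values[attno]);
	if (HeapTupleHeaderGetTypeId(th) == RECORDOID && slot_typeid != RECORDOID)
		elog(LOG, "slot expects type %u, but the datum is an anonymous record"
			 " with backend-local typmod %d",
			 slot_typeid, HeapTupleHeaderGetTypMod(th));
}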

The previous patch was also defective in a few other respects. The
most significant of those, maybe, is that it somehow thought it was OK
to assume that transient typmods from all workers could be treated
interchangeably rather than individually. To fix this, I've changed
the TupleQueueFunnel implemented by tqueue.c to be merely a
TupleQueueReader which handles reading from a single worker only.
nodeGather.c therefore creates one TupleQueueReader per worker instead
of a single TupleQueueFunnel for all workers; accordingly, the logic
for multiplexing multiple queues now lives in nodeGather.c. This is
probably how I should have done it originally - someone (I think Jeff
Davis) complained previously that tqueue.c had no business embedding
the round-robin policy decision, and he was right. So this addresses
that complaint as well.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachments:

tqueue-record-types-v2.patch (application/x-patch)
From db5b2a90ec35adf3f5fac72483679ebcefdb29af Mon Sep 17 00:00:00 2001
From: Robert Haas <rhaas@postgresql.org>
Date: Wed, 7 Oct 2015 12:43:22 -0400
Subject: [PATCH 7/8] Modify tqueue infrastructure to support transient record
 types.

Commit 4a4e6893aa080b9094dadbe0e65f8a75fee41ac6, which introduced this
mechanism, failed to account for the fact that the RECORD pseudo-type
uses transient typmods that are only meaningful within a single
backend.  Transferring such tuples without modification between two
cooperating backends does not work.  This commit installs a system
for passing the tuple descriptors over the same shm_mq being used to
send the tuples themselves.  The two sides might not assign the same
transient typmod to any given tuple descriptor, so we must also
substitute the appropriate receiver-side typmod for the one used by
the sender.  That adds some CPU overhead, but still seems better than
being unable to pass records between cooperating parallel processes.

Along the way, move the logic for handling multiple tuple queues from
tqueue.c to nodeGather.c; tqueue.c now provides a TupleQueueReader,
which reads from a single queue, rather than a TupleQueueFunnel, which
potentially reads from multiple queues.  This change was suggested
previously as a way to make sure that nodeGather.c rather than tqueue.c
had policy control over the order in which to read from queues, but
it wasn't clear to me until now how good an idea it was.  typmod
mapping needs to be performed separately for each queue, and it is
much simpler if the tqueue.c code handles that and leaves multiplexing
multiple queues to higher layers of the stack.
---
 src/backend/executor/nodeGather.c | 139 ++++--
 src/backend/executor/tqueue.c     | 977 +++++++++++++++++++++++++++++++++-----
 src/include/executor/tqueue.h     |  12 +-
 src/include/nodes/execnodes.h     |   4 +-
 src/tools/pgindent/typedefs.list  |   2 +-
 5 files changed, 980 insertions(+), 154 deletions(-)

diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c
index 9c1533e..312302a 100644
--- a/src/backend/executor/nodeGather.c
+++ b/src/backend/executor/nodeGather.c
@@ -36,11 +36,13 @@
 #include "executor/nodeGather.h"
 #include "executor/nodeSubplan.h"
 #include "executor/tqueue.h"
+#include "miscadmin.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
 
 
 static TupleTableSlot *gather_getnext(GatherState *gatherstate);
+static HeapTuple gather_readnext(GatherState *gatherstate);
 
 
 /* ----------------------------------------------------------------
@@ -124,6 +126,7 @@ ExecInitGather(Gather *node, EState *estate, int eflags)
 TupleTableSlot *
 ExecGather(GatherState *node)
 {
+	TupleTableSlot *fslot = node->funnel_slot;
 	int			i;
 	TupleTableSlot *slot;
 	TupleTableSlot *resultSlot;
@@ -147,6 +150,7 @@ ExecGather(GatherState *node)
 		 */
 		if (gather->num_workers > 0 && IsInParallelMode())
 		{
+			ParallelContext *pcxt;
 			bool	got_any_worker = false;
 
 			/* Initialize the workers required to execute Gather node. */
@@ -158,18 +162,26 @@ ExecGather(GatherState *node)
 			 * Register backend workers. We might not get as many as we
 			 * requested, or indeed any at all.
 			 */
-			LaunchParallelWorkers(node->pei->pcxt);
+			pcxt = node->pei->pcxt;
+			LaunchParallelWorkers(pcxt);
 
-			/* Set up a tuple queue to collect the results. */
-			node->funnel = CreateTupleQueueFunnel();
-			for (i = 0; i < node->pei->pcxt->nworkers; ++i)
+			/* Set up tuple queue readers to read the results. */
+			if (pcxt->nworkers > 0)
 			{
-				if (node->pei->pcxt->worker[i].bgwhandle)
+				node->nreaders = 0;
+				node->reader =
+					palloc(pcxt->nworkers * sizeof(TupleQueueReader *));
+
+				for (i = 0; i < pcxt->nworkers; ++i)
 				{
+					if (pcxt->worker[i].bgwhandle == NULL)
+						continue;
+
 					shm_mq_set_handle(node->pei->tqueue[i],
-									  node->pei->pcxt->worker[i].bgwhandle);
-					RegisterTupleQueueOnFunnel(node->funnel,
-											   node->pei->tqueue[i]);
+									  pcxt->worker[i].bgwhandle);
+					node->reader[node->nreaders++] =
+						CreateTupleQueueReader(node->pei->tqueue[i],
+											   fslot->tts_tupleDescriptor);
 					got_any_worker = true;
 				}
 			}
@@ -180,7 +192,7 @@ ExecGather(GatherState *node)
 		}
 
 		/* Run plan locally if no workers or not single-copy. */
-		node->need_to_scan_locally = (node->funnel == NULL)
+		node->need_to_scan_locally = (node->reader == NULL)
 			|| !gather->single_copy;
 		node->initialized = true;
 	}
@@ -252,13 +264,9 @@ ExecEndGather(GatherState *node)
 }
 
 /*
- * gather_getnext
- *
- * Get the next tuple from shared memory queue.  This function
- * is responsible for fetching tuples from all the queues associated
- * with worker backends used in Gather node execution and if there is
- * no data available from queues or no worker is available, it does
- * fetch the data from local node.
+ * Read the next tuple.  We might fetch a tuple from one of the tuple queues
+ * using gather_readnext, or if no tuple queue contains a tuple and the
+ * single_copy flag is not set, we might generate one locally instead.
  */
 static TupleTableSlot *
 gather_getnext(GatherState *gatherstate)
@@ -268,19 +276,11 @@ gather_getnext(GatherState *gatherstate)
 	TupleTableSlot *fslot = gatherstate->funnel_slot;
 	HeapTuple	tup;
 
-	while (gatherstate->funnel != NULL || gatherstate->need_to_scan_locally)
+	while (gatherstate->reader != NULL || gatherstate->need_to_scan_locally)
 	{
-		if (gatherstate->funnel != NULL)
+		if (gatherstate->reader != NULL)
 		{
-			bool		done = false;
-
-			/* wait only if local scan is done */
-			tup = TupleQueueFunnelNext(gatherstate->funnel,
-									   gatherstate->need_to_scan_locally,
-									   &done);
-			if (done)
-				ExecShutdownGather(gatherstate);
-
+			tup = gather_readnext(gatherstate);
 			if (HeapTupleIsValid(tup))
 			{
 				ExecStoreTuple(tup,		/* tuple to store */
@@ -307,6 +307,80 @@ gather_getnext(GatherState *gatherstate)
 	return ExecClearTuple(fslot);
 }
 
+/*
+ * Attempt to read a tuple from one of our parallel workers.
+ */
+static HeapTuple
+gather_readnext(GatherState *gatherstate)
+{
+	int		waitpos = gatherstate->nextreader;
+
+	for (;;)
+	{
+		TupleQueueReader *reader;
+		HeapTuple	tup;
+		bool		readerdone;
+
+		/* Make sure we've read all messages from workers. */
+		HandleParallelMessages();
+
+		/* Attempt to read a tuple, but don't block if none is available. */
+		reader = gatherstate->reader[gatherstate->nextreader];
+		tup = TupleQueueReaderNext(reader, true, &readerdone);
+
+		/*
+		 * If this reader is done, remove it.  If all readers are done,
+		 * clean up remaining worker state.
+		 */
+		if (readerdone)
+		{
+			DestroyTupleQueueReader(reader);
+			--gatherstate->nreaders;
+			if (gatherstate->nreaders == 0)
+			{
+				ExecShutdownGather(gatherstate);
+				return NULL;
+			}
+			else
+			{
+				memmove(&gatherstate->reader[gatherstate->nextreader],
+						&gatherstate->reader[gatherstate->nextreader + 1],
+						sizeof(TupleQueueReader *)
+						* (gatherstate->nreaders - gatherstate->nextreader));
+				if (gatherstate->nextreader >= gatherstate->nreaders)
+					gatherstate->nextreader = 0;
+				if (gatherstate->nextreader < waitpos)
+					--waitpos;
+			}
+			continue;
+		}
+
+		/* Advance nextreader pointer in round-robin fashion. */
+		gatherstate->nextreader =
+			(gatherstate->nextreader + 1) % gatherstate->nreaders;
+
+		/* If we got a tuple, return it. */
+		if (tup)
+			return tup;
+
+		/* Have we visited every TupleQueueReader? */
+		if (gatherstate->nextreader == waitpos)
+		{
+			/*
+			 * If (still) running plan locally, return NULL so caller can
+			 * generate another tuple from the local copy of the plan.
+			 */
+			if (gatherstate->need_to_scan_locally)
+				return NULL;
+
+			/* Nothing to do except wait for developments. */
+			WaitLatch(MyLatch, WL_LATCH_SET, 0);
+			CHECK_FOR_INTERRUPTS();
+			ResetLatch(MyLatch);
+		}
+	}
+}
+
 /* ----------------------------------------------------------------
  *		ExecShutdownGather
  *
@@ -318,11 +392,14 @@ gather_getnext(GatherState *gatherstate)
 void
 ExecShutdownGather(GatherState *node)
 {
-	/* Shut down tuple queue funnel before shutting down workers. */
-	if (node->funnel != NULL)
+	/* Shut down tuple queue readers before shutting down workers. */
+	if (node->reader != NULL)
 	{
-		DestroyTupleQueueFunnel(node->funnel);
-		node->funnel = NULL;
+		int		i;
+
+		for (i = 0; i < node->nreaders; ++i)
+			DestroyTupleQueueReader(node->reader[i]);
+		node->reader = NULL;
 	}
 
 	/* Now shut down the workers. */
diff --git a/src/backend/executor/tqueue.c b/src/backend/executor/tqueue.c
index 67143d3..1b326e8 100644
--- a/src/backend/executor/tqueue.c
+++ b/src/backend/executor/tqueue.c
@@ -4,10 +4,15 @@
  *	  Use shm_mq to send & receive tuples between parallel backends
  *
  * A DestReceiver of type DestTupleQueue, which is a TQueueDestReceiver
- * under the hood, writes tuples from the executor to a shm_mq.
+ * under the hood, writes tuples from the executor to a shm_mq.  If
+ * necessary, it also writes control messages describing transient
+ * record types used within the tuple.
  *
- * A TupleQueueFunnel helps manage the process of reading tuples from
- * one or more shm_mq objects being used as tuple queues.
+ * A TupleQueueReader reads tuples, and if any are sent control messages,
+ * from a shm_mq and returns the tuples.  If transient record types are
+ * in use, it registers those types based on the received control messages
+ * and rewrites the typemods sent by the remote side to the corresponding
+ * local record typemods.
  *
  * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
@@ -21,37 +26,404 @@
 #include "postgres.h"
 
 #include "access/htup_details.h"
+#include "catalog/pg_type.h"
 #include "executor/tqueue.h"
+#include "funcapi.h"
+#include "lib/stringinfo.h"
 #include "miscadmin.h"
+#include "utils/array.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+#include "utils/rangetypes.h"
+#include "utils/syscache.h"
+#include "utils/typcache.h"
+
+typedef enum
+{
+	TQUEUE_REMAP_NONE,			/* no special processing required */
+	TQUEUE_REMAP_ARRAY,			/* array */
+	TQUEUE_REMAP_RANGE,			/* range */
+	TQUEUE_REMAP_RECORD			/* composite type, named or anonymous */
+}	RemapClass;
+
+typedef struct
+{
+	int			natts;
+	RemapClass	mapping[FLEXIBLE_ARRAY_MEMBER];
+}	RemapInfo;
 
 typedef struct
 {
 	DestReceiver pub;
 	shm_mq_handle *handle;
+	MemoryContext tmpcontext;
+	HTAB	   *recordhtab;
+	char		mode;
+	TupleDesc	tupledesc;
+	RemapInfo  *remapinfo;
 }	TQueueDestReceiver;
 
-struct TupleQueueFunnel
+typedef struct RecordTypemodMap
 {
-	int			nqueues;
-	int			maxqueues;
-	int			nextqueue;
-	shm_mq_handle **queue;
+	int			remotetypmod;
+	int			localtypmod;
+}	RecordTypemodMap;
+
+struct TupleQueueReader
+{
+	shm_mq_handle *queue;
+	char		mode;
+	TupleDesc	tupledesc;
+	RemapInfo  *remapinfo;
+	HTAB	   *typmodmap;
 };
 
+#define		TUPLE_QUEUE_MODE_CONTROL			'c'
+#define		TUPLE_QUEUE_MODE_DATA				'd'
+
+static void tqueueWalk(TQueueDestReceiver * tqueue, RemapClass walktype,
+		   Datum value);
+static void tqueueWalkRecord(TQueueDestReceiver * tqueue, Datum value);
+static void tqueueWalkArray(TQueueDestReceiver * tqueue, Datum value);
+static void tqueueWalkRange(TQueueDestReceiver * tqueue, Datum value);
+static void tqueueSendTypmodInfo(TQueueDestReceiver * tqueue, int typmod,
+					 TupleDesc tupledesc);
+static void TupleQueueHandleControlMessage(TupleQueueReader *reader,
+							   Size nbytes, char *data);
+static HeapTuple TupleQueueHandleDataMessage(TupleQueueReader *reader,
+							Size nbytes, HeapTupleHeader data);
+static HeapTuple TupleQueueRemapTuple(TupleQueueReader *reader,
+					 TupleDesc tupledesc, RemapInfo * remapinfo,
+					 HeapTuple tuple);
+static Datum TupleQueueRemap(TupleQueueReader *reader, RemapClass remapclass,
+				Datum value);
+static Datum TupleQueueRemapArray(TupleQueueReader *reader, Datum value);
+static Datum TupleQueueRemapRange(TupleQueueReader *reader, Datum value);
+static Datum TupleQueueRemapRecord(TupleQueueReader *reader, Datum value);
+static RemapClass GetRemapClass(Oid typeid);
+static RemapInfo *BuildRemapInfo(TupleDesc tupledesc);
+
 /*
  * Receive a tuple.
+ *
+ * This is, at core, pretty simple: just send the tuple to the designated
+ * shm_mq.  The complicated part is that if the tuple contains transient
+ * record types (see lookup_rowtype_tupdesc), we need to send control
+ * information to the shm_mq receiver so that those typemods can be correctly
+ * interpreted, as they are merely held in a backend-local cache.  Worse, the
+ * record type may not be at the top level: we could have a range over an array
+ * type over a range type over a range type over an array type over a record,
+ * or something like that.
  */
 static void
 tqueueReceiveSlot(TupleTableSlot *slot, DestReceiver *self)
 {
 	TQueueDestReceiver *tqueue = (TQueueDestReceiver *) self;
+	TupleDesc	tupledesc = slot->tts_tupleDescriptor;
 	HeapTuple	tuple;
+	HeapTupleHeader tup;
+
+	/*
+	 * Test to see whether the tupledesc has changed; if so, set up for the
+	 * new tupledesc.  This is a strange test both because the executor really
+	 * shouldn't change the tupledesc, and also because it would be unsafe if
+	 * the old tupledesc could be freed and a new one allocated at the same
+	 * address.  But since some very old code in printtup.c uses this test, we
+	 * adopt it here as well.
+	 */
+	if (tqueue->tupledesc != tupledesc ||
+		tqueue->remapinfo->natts != tupledesc->natts)
+	{
+		if (tqueue->remapinfo != NULL)
+			pfree(tqueue->remapinfo);
+		tqueue->remapinfo = BuildRemapInfo(tupledesc);
+	}
 
 	tuple = ExecMaterializeSlot(slot);
+	tup = tuple->t_data;
+
+	/*
+	 * When, because of the types being transmitted, no record typemod mapping
+	 * can be needed, we can skip a good deal of work.
+	 */
+	if (tqueue->remapinfo != NULL)
+	{
+		RemapInfo  *remapinfo = tqueue->remapinfo;
+		AttrNumber	i;
+		MemoryContext oldcontext = NULL;
+
+		/* Deform the tuple so we can examine it, if not done already. */
+		slot_getallattrs(slot);
+
+		/* Iterate over each attribute and search it for transient typemods. */
+		Assert(slot->tts_tupleDescriptor->natts == remapinfo->natts);
+		for (i = 0; i < remapinfo->natts; ++i)
+		{
+			/* Ignore nulls and types that don't need special handling. */
+			if (slot->tts_isnull[i] ||
+				remapinfo->mapping[i] == TQUEUE_REMAP_NONE)
+				continue;
+
+			/* Switch to temporary memory context to avoid leaking. */
+			if (oldcontext == NULL)
+			{
+				if (tqueue->tmpcontext == NULL)
+					tqueue->tmpcontext =
+						AllocSetContextCreate(TopMemoryContext,
+											  "tqueue temporary context",
+											  ALLOCSET_DEFAULT_MINSIZE,
+											  ALLOCSET_DEFAULT_INITSIZE,
+											  ALLOCSET_DEFAULT_MAXSIZE);
+				oldcontext = MemoryContextSwitchTo(tqueue->tmpcontext);
+			}
+
+			/* Invoke the appropriate walker function. */
+			tqueueWalk(tqueue, remapinfo->mapping[i], slot->tts_values[i]);
+		}
+
+		/* If we used the temp context, reset it and restore prior context. */
+		if (oldcontext != NULL)
+		{
+			MemoryContextSwitchTo(oldcontext);
+			MemoryContextReset(tqueue->tmpcontext);
+		}
+
+		/* If we entered control mode, switch back to data mode. */
+		if (tqueue->mode != TUPLE_QUEUE_MODE_DATA)
+		{
+			tqueue->mode = TUPLE_QUEUE_MODE_DATA;
+			shm_mq_send(tqueue->handle, sizeof(char), &tqueue->mode, false);
+		}
+	}
+
+	/* Send the tuple itself. */
 	shm_mq_send(tqueue->handle, tuple->t_len, tuple->t_data, false);
 }
 
 /*
+ * Invoke the appropriate walker function based on the given RemapClass.
+ */
+static void
+tqueueWalk(TQueueDestReceiver * tqueue, RemapClass walktype, Datum value)
+{
+	check_stack_depth();
+
+	switch (walktype)
+	{
+		case TQUEUE_REMAP_NONE:
+			break;
+		case TQUEUE_REMAP_ARRAY:
+			tqueueWalkArray(tqueue, value);
+			break;
+		case TQUEUE_REMAP_RANGE:
+			tqueueWalkRange(tqueue, value);
+			break;
+		case TQUEUE_REMAP_RECORD:
+			tqueueWalkRecord(tqueue, value);
+			break;
+	}
+}
+
+/*
+ * Walk a record and send control messages for transient record types
+ * contained therein.
+ */
+static void
+tqueueWalkRecord(TQueueDestReceiver * tqueue, Datum value)
+{
+	HeapTupleHeader tup;
+	Oid			typeid;
+	Oid			typmod;
+	TupleDesc	tupledesc;
+	RemapInfo  *remapinfo;
+
+	/* Extract typmod from tuple. */
+	tup = DatumGetHeapTupleHeader(value);
+	typeid = HeapTupleHeaderGetTypeId(tup);
+	typmod = HeapTupleHeaderGetTypMod(tup);
+
+	/* Look up tuple descriptor in typecache. */
+	tupledesc = lookup_rowtype_tupdesc(typeid, typmod);
+
+	/*
+	 * If this is a transient record time, send its TupleDesc as a control
+	 * message.  (tqueueSendTypemodInfo is smart enough to do this only once
+	 * per typmod.)
+	 */
+	if (typeid == RECORDOID)
+		tqueueSendTypmodInfo(tqueue, typmod, tupledesc);
+
+	/*
+	 * Build the remap information for this tupledesc.  We might want to think
+	 * about keeping a cache of this information keyed by typeid and typemod,
+	 * but let's keep it simple for now.
+	 */
+	remapinfo = BuildRemapInfo(tupledesc);
+
+	/*
+	 * If remapping is required, deform the tuple and process each field. When
+	 * BuildRemapInfo is null, the data types are such that there can be no
+	 * transient record types here, so we can skip all this work.
+	 */
+	if (remapinfo != NULL)
+	{
+		Datum	   *values;
+		bool	   *isnull;
+		HeapTupleData tdata;
+		AttrNumber	i;
+
+		/* Deform the tuple so we can check each column within. */
+		values = palloc(tupledesc->natts * sizeof(Datum));
+		isnull = palloc(tupledesc->natts * sizeof(bool));
+		tdata.t_len = HeapTupleHeaderGetDatumLength(tup);
+		ItemPointerSetInvalid(&(tdata.t_self));
+		tdata.t_tableOid = InvalidOid;
+		tdata.t_data = tup;
+		heap_deform_tuple(&tdata, tupledesc, values, isnull);
+
+		/* Recursively check each non-NULL attribute. */
+		for (i = 0; i < tupledesc->natts; ++i)
+			if (!isnull[i])
+				tqueueWalk(tqueue, remapinfo->mapping[i], values[i]);
+	}
+
+	/* Release reference count acquired by lookup_rowtype_tupdesc. */
+	DecrTupleDescRefCount(tupledesc);
+}
+
+/*
+ * Walk a record and send control messages for transient record types
+ * contained therein.
+ */
+static void
+tqueueWalkArray(TQueueDestReceiver * tqueue, Datum value)
+{
+	ArrayType  *arr = DatumGetArrayTypeP(value);
+	Oid			typeid = ARR_ELEMTYPE(arr);
+	RemapClass	remapclass;
+	int16		typlen;
+	bool		typbyval;
+	char		typalign;
+	Datum	   *elem_values;
+	bool	   *elem_nulls;
+	int			num_elems;
+	int			i;
+
+	remapclass = GetRemapClass(typeid);
+
+	/*
+	 * If the elements of the array don't need to be walked, we shouldn't have
+	 * been called in the first place: GetRemapClass should have returned NULL
+	 * when asked about this array type.
+	 */
+	Assert(remapclass != TQUEUE_REMAP_NONE);
+
+	/* Deconstruct the array. */
+	get_typlenbyvalalign(typeid, &typlen, &typbyval, &typalign);
+	deconstruct_array(arr, typeid, typlen, typbyval, typalign,
+					  &elem_values, &elem_nulls, &num_elems);
+
+	/* Walk each element. */
+	for (i = 0; i < num_elems; ++i)
+		if (!elem_nulls[i])
+			tqueueWalk(tqueue, remapclass, elem_values[i]);
+}
+
+/*
+ * Walk a range type and send control messages for transient record types
+ * contained therein.
+ */
+static void
+tqueueWalkRange(TQueueDestReceiver * tqueue, Datum value)
+{
+	RangeType  *range = DatumGetRangeType(value);
+	Oid			typeid = RangeTypeGetOid(range);
+	RemapClass	remapclass;
+	TypeCacheEntry *typcache;
+	RangeBound	lower;
+	RangeBound	upper;
+	bool		empty;
+
+	/*
+	 * Extract the lower and upper bounds.  It might be worth implementing
+	 * some caching scheme here so that we don't look up the same typeids in
+	 * the type cache repeatedly, but for now let's keep it simple.
+	 */
+	typcache = lookup_type_cache(typeid, TYPECACHE_RANGE_INFO);
+	if (typcache->rngelemtype == NULL)
+		elog(ERROR, "type %u is not a range type", typeid);
+	range_deserialize(typcache, range, &lower, &upper, &empty);
+
+	/* Nothing to do for an empty range. */
+	if (empty)
+		return;
+
+	/*
+	 * If the range bounds don't need to be walked, we shouldn't have been
+	 * called in the first place: GetRemapClass should have returned NULL when
+	 * asked about this range type.
+	 */
+	remapclass = GetRemapClass(typeid);
+	Assert(remapclass != TQUEUE_REMAP_NONE);
+
+	/* Walk each bound, if present. */
+	if (!upper.infinite)
+		tqueueWalk(tqueue, remapclass, upper.val);
+	if (!lower.infinite)
+		tqueueWalk(tqueue, remapclass, lower.val);
+}
+
+/*
+ * Send tuple descriptor information for a transient typemod, unless we've
+ * already done so previously.
+ */
+static void
+tqueueSendTypmodInfo(TQueueDestReceiver * tqueue, int typmod,
+					 TupleDesc tupledesc)
+{
+	StringInfoData buf;
+	bool		found;
+	AttrNumber	i;
+
+	/* Initialize hash table if not done yet. */
+	if (tqueue->recordhtab == NULL)
+	{
+		HASHCTL		ctl;
+
+		ctl.keysize = sizeof(int);
+		ctl.entrysize = sizeof(int);
+		ctl.hcxt = TopMemoryContext;
+		tqueue->recordhtab = hash_create("tqueue record hashtable",
+										 100, &ctl, HASH_ELEM | HASH_CONTEXT);
+	}
+
+	/* Have we already seen this record type?  If not, must report it. */
+	hash_search(tqueue->recordhtab, &typmod, HASH_ENTER, &found);
+	if (found)
+		return;
+
+	/* If message queue is in data mode, switch to control mode. */
+	if (tqueue->mode != TUPLE_QUEUE_MODE_CONTROL)
+	{
+		tqueue->mode = TUPLE_QUEUE_MODE_CONTROL;
+		shm_mq_send(tqueue->handle, sizeof(char), &tqueue->mode, false);
+	}
+
+	/* Assemble a control message. */
+	initStringInfo(&buf);
+	appendBinaryStringInfo(&buf, (char *) &typmod, sizeof(int));
+	appendBinaryStringInfo(&buf, (char *) &tupledesc->natts, sizeof(int));
+	appendBinaryStringInfo(&buf, (char *) &tupledesc->tdhasoid,
+						   sizeof(bool));
+	for (i = 0; i < tupledesc->natts; ++i)
+		appendBinaryStringInfo(&buf, (char *) tupledesc->attrs[i],
+							   sizeof(FormData_pg_attribute));
+
+	/* Send control message. */
+	shm_mq_send(tqueue->handle, buf.len, buf.data, false);
+}
+
+/*
  * Prepare to receive tuples from executor.
  */
 static void
@@ -77,6 +449,14 @@ tqueueShutdownReceiver(DestReceiver *self)
 static void
 tqueueDestroyReceiver(DestReceiver *self)
 {
+	TQueueDestReceiver *tqueue = (TQueueDestReceiver *) self;
+
+	if (tqueue->tmpcontext != NULL)
+		MemoryContextDelete(tqueue->tmpcontext);
+	if (tqueue->recordhtab != NULL)
+		hash_destroy(tqueue->recordhtab);
+	if (tqueue->remapinfo != NULL)
+		pfree(tqueue->remapinfo);
 	pfree(self);
 }
 
@@ -96,169 +476,536 @@ CreateTupleQueueDestReceiver(shm_mq_handle *handle)
 	self->pub.rDestroy = tqueueDestroyReceiver;
 	self->pub.mydest = DestTupleQueue;
 	self->handle = handle;
+	self->tmpcontext = NULL;
+	self->recordhtab = NULL;
+	self->mode = TUPLE_QUEUE_MODE_DATA;
+	self->remapinfo = NULL;
 
 	return (DestReceiver *) self;
 }
 
 /*
- * Create a tuple queue funnel.
+ * Create a tuple queue reader.
  */
-TupleQueueFunnel *
-CreateTupleQueueFunnel(void)
+TupleQueueReader *
+CreateTupleQueueReader(shm_mq_handle *handle, TupleDesc tupledesc)
 {
-	TupleQueueFunnel *funnel = palloc0(sizeof(TupleQueueFunnel));
+	TupleQueueReader *reader = palloc0(sizeof(TupleQueueReader));
 
-	funnel->maxqueues = 8;
-	funnel->queue = palloc(funnel->maxqueues * sizeof(shm_mq_handle *));
+	reader->queue = handle;
+	reader->mode = TUPLE_QUEUE_MODE_DATA;
+	reader->tupledesc = tupledesc;
+	reader->remapinfo = BuildRemapInfo(tupledesc);
 
-	return funnel;
+	return reader;
 }
 
 /*
- * Destroy a tuple queue funnel.
+ * Destroy a tuple queue reader.
  */
 void
-DestroyTupleQueueFunnel(TupleQueueFunnel *funnel)
+DestroyTupleQueueReader(TupleQueueReader *reader)
 {
-	int			i;
+	shm_mq_detach(shm_mq_get_queue(reader->queue));
+	if (reader->remapinfo != NULL)
+		pfree(reader->remapinfo);
+	pfree(reader);
+}
+
+/*
+ * Fetch a tuple from a tuple queue reader.
+ *
+ * Even when shm_mq_receive() returns SHM_MQ_WOULD_BLOCK, this can still
+ * accumulate bytes from a partially-read message, so it's useful to call
+ * this with nowait = true even if nothing is returned.
+ *
+ * The return value is NULL if there are no remaining queues or if
+ * nowait = true and no tuple is ready to return.  *done, if not NULL,
+ * is set to true when queue is detached and otherwise to false.
+ */
+HeapTuple
+TupleQueueReaderNext(TupleQueueReader *reader, bool nowait, bool *done)
+{
+	shm_mq_result result;
+
+	if (done != NULL)
+		*done = false;
+
+	for (;;)
+	{
+		Size		nbytes;
+		void	   *data;
+
+		/* Attempt to read a message. */
+		result = shm_mq_receive(reader->queue, &nbytes, &data, true);
 
-	for (i = 0; i < funnel->nqueues; i++)
-		shm_mq_detach(shm_mq_get_queue(funnel->queue[i]));
-	pfree(funnel->queue);
-	pfree(funnel);
+		/* If queue is detached, set *done and return NULL. */
+		if (result == SHM_MQ_DETACHED)
+		{
+			if (done != NULL)
+				*done = true;
+			return NULL;
+		}
+
+		/* In non-blocking mode, bail out if no message ready yet. */
+		if (result == SHM_MQ_WOULD_BLOCK)
+			return NULL;
+		Assert(result == SHM_MQ_SUCCESS);
+
+		/*
+		 * OK, we got a message.  Process it.
+		 *
+		 * One-byte messages are mode switch messages, so that we can switch
+		 * between "control" and "data" mode.  When in "data" mode, each
+		 * message (unless exactly one byte) is a tuple.  When in "control"
+		 * mode, each message provides a transient-typmod-to-tupledesc mapping
+		 * so we can interpret future tuples.
+		 */
+		if (nbytes == 1)
+		{
+			/* Mode switch message. */
+			reader->mode = ((char *) data)[0];
+		}
+		else if (reader->mode == TUPLE_QUEUE_MODE_DATA)
+		{
+			/* Tuple data. */
+			return TupleQueueHandleDataMessage(reader, nbytes, data);
+		}
+		else if (reader->mode == TUPLE_QUEUE_MODE_CONTROL)
+		{
+			/* Control message, describing a transient record type. */
+			TupleQueueHandleControlMessage(reader, nbytes, data);
+		}
+		else
+			elog(ERROR, "invalid mode: %d", (int) reader->mode);
+	}
 }
 
 /*
- * Remember the shared memory queue handle in funnel.
+ * Handle a data message - that is, a tuple - from the remote side.
  */
-void
-RegisterTupleQueueOnFunnel(TupleQueueFunnel *funnel, shm_mq_handle *handle)
+static HeapTuple
+TupleQueueHandleDataMessage(TupleQueueReader *reader,
+							Size nbytes,
+							HeapTupleHeader data)
 {
-	if (funnel->nqueues < funnel->maxqueues)
+	HeapTupleData htup;
+
+	ItemPointerSetInvalid(&htup.t_self);
+	htup.t_tableOid = InvalidOid;
+	htup.t_len = nbytes;
+	htup.t_data = data;
+
+	return TupleQueueRemapTuple(reader, reader->tupledesc, reader->remapinfo,
+								&htup);
+}
+
+/*
+ * Remap tuple typmods per control information received from remote side.
+ */
+static HeapTuple
+TupleQueueRemapTuple(TupleQueueReader *reader, TupleDesc tupledesc,
+					 RemapInfo * remapinfo, HeapTuple tuple)
+{
+	Datum	   *values;
+	bool	   *isnull;
+	bool		dirty = false;
+	int			i;
+
+	/*
+	 * If no remapping is necessary, just copy the tuple into a single
+	 * palloc'd chunk, as caller will expect.
+	 */
+	if (remapinfo == NULL)
+		return heap_copytuple(tuple);
+
+	/* Deform tuple so we can remap record typmods for individual attrs. */
+	values = palloc(tupledesc->natts * sizeof(Datum));
+	isnull = palloc(tupledesc->natts * sizeof(bool));
+	heap_deform_tuple(tuple, tupledesc, values, isnull);
+	Assert(tupledesc->natts == remapinfo->natts);
+
+	/* Recursively check each non-NULL attribute. */
+	for (i = 0; i < tupledesc->natts; ++i)
 	{
-		funnel->queue[funnel->nqueues++] = handle;
-		return;
+		if (isnull[i] || remapinfo->mapping[i] == TQUEUE_REMAP_NONE)
+			continue;
+		values[i] = TupleQueueRemap(reader, remapinfo->mapping[i], values[i]);
+		dirty = true;
 	}
 
-	if (funnel->nqueues >= funnel->maxqueues)
+	/* Reform the modified tuple. */
+	return heap_form_tuple(tupledesc, values, isnull);
+}
+
+/*
+ * Remap a value based on the specified remap class.
+ */
+static Datum
+TupleQueueRemap(TupleQueueReader *reader, RemapClass remapclass, Datum value)
+{
+	check_stack_depth();
+
+	switch (remapclass)
 	{
-		int			newsize = funnel->nqueues * 2;
+		case TQUEUE_REMAP_NONE:
+			/* caller probably shouldn't have called us at all, but... */
+			return value;
+
+		case TQUEUE_REMAP_ARRAY:
+			return TupleQueueRemapArray(reader, value);
 
-		Assert(funnel->nqueues == funnel->maxqueues);
+		case TQUEUE_REMAP_RANGE:
+			return TupleQueueRemapRange(reader, value);
 
-		funnel->queue = repalloc(funnel->queue,
-								 newsize * sizeof(shm_mq_handle *));
-		funnel->maxqueues = newsize;
+		case TQUEUE_REMAP_RECORD:
+			return TupleQueueRemapRecord(reader, value);
 	}
+}
+
+/*
+ * Remap an array.
+ */
+static Datum
+TupleQueueRemapArray(TupleQueueReader *reader, Datum value)
+{
+	ArrayType  *arr = DatumGetArrayTypeP(value);
+	Oid			typeid = ARR_ELEMTYPE(arr);
+	RemapClass	remapclass;
+	int16		typlen;
+	bool		typbyval;
+	char		typalign;
+	Datum	   *elem_values;
+	bool	   *elem_nulls;
+	int			num_elems;
+	int			i;
 
-	funnel->queue[funnel->nqueues++] = handle;
+	remapclass = GetRemapClass(typeid);
+
+	/*
+	 * If the elements of the array don't need to be walked, we shouldn't have
+	 * been called in the first place: GetRemapClass should have returned
+	 * TQUEUE_REMAP_NONE when asked about this array type.
+	 */
+	Assert(remapclass != TQUEUE_REMAP_NONE);
+
+	/* Deconstruct the array. */
+	get_typlenbyvalalign(typeid, &typlen, &typbyval, &typalign);
+	deconstruct_array(arr, typeid, typlen, typbyval, typalign,
+					  &elem_values, &elem_nulls, &num_elems);
+
+	/* Remap each element. */
+	for (i = 0; i < num_elems; ++i)
+		if (!elem_nulls[i])
+			elem_values[i] = TupleQueueRemap(reader, remapclass,
+											 elem_values[i]);
+
+	/* Reconstruct and return the array.  */
+	arr = construct_md_array(elem_values, elem_nulls,
+							 ARR_NDIM(arr), ARR_DIMS(arr), ARR_LBOUND(arr),
+							 typeid, typlen, typbyval, typalign);
+	return PointerGetDatum(arr);
 }
 
 /*
- * Fetch a tuple from a tuple queue funnel.
- *
- * We try to read from the queues in round-robin fashion so as to avoid
- * the situation where some workers get their tuples read expediently while
- * others are barely ever serviced.
- *
- * Even when nowait = false, we read from the individual queues in
- * non-blocking mode.  Even when shm_mq_receive() returns SHM_MQ_WOULD_BLOCK,
- * it can still accumulate bytes from a partially-read message, so doing it
- * this way should outperform doing a blocking read on each queue in turn.
+ * Remap a range type.
+ */
+static Datum
+TupleQueueRemapRange(TupleQueueReader *reader, Datum value)
+{
+	RangeType  *range = DatumGetRangeType(value);
+	Oid			typeid = RangeTypeGetOid(range);
+	RemapClass	remapclass;
+	TypeCacheEntry *typcache;
+	RangeBound	lower;
+	RangeBound	upper;
+	bool		empty;
+
+	/*
+	 * Extract the lower and upper bounds.  As in tqueueWalkRange, some
+	 * caching might be a good idea here.
+	 */
+	typcache = lookup_type_cache(typeid, TYPECACHE_RANGE_INFO);
+	if (typcache->rngelemtype == NULL)
+		elog(ERROR, "type %u is not a range type", typeid);
+	range_deserialize(typcache, range, &lower, &upper, &empty);
+
+	/* Nothing to do for an empty range. */
+	if (empty)
+		return value;
+
+	/*
+	 * If the range bounds don't need to be walked, we shouldn't have been
+	 * called in the first place: GetRemapClass should have returned
+	 * TQUEUE_REMAP_NONE when asked about this range type.
+	 */
+	remapclass = GetRemapClass(typeid);
+	Assert(remapclass != TQUEUE_REMAP_NONE);
+
+	/* Remap each bound, if present. */
+	if (!upper.infinite)
+		upper.val = TupleQueueRemap(reader, remapclass, upper.val);
+	if (!lower.infinite)
+		lower.val = TupleQueueRemap(reader, remapclass, lower.val);
+
+	/* And reserialize. */
+	range = range_serialize(typcache, &lower, &upper, empty);
+	return RangeTypeGetDatum(range);
+}
+
+/*
+ * Remap a record.
+ */
+static Datum
+TupleQueueRemapRecord(TupleQueueReader *reader, Datum value)
+{
+	HeapTupleHeader tup;
+	Oid			typeid;
+	int			typmod;
+	RecordTypemodMap *mapent;
+	TupleDesc	tupledesc;
+	RemapInfo  *remapinfo;
+	HeapTupleData htup;
+	HeapTuple	atup;
+
+	/* Fetch type OID and typemod. */
+	tup = DatumGetHeapTupleHeader(value);
+	typeid = HeapTupleHeaderGetTypeId(tup);
+	typmod = HeapTupleHeaderGetTypMod(tup);
+
+	/* If transient record, replace remote typmod with local typmod. */
+	if (typeid == RECORDOID)
+	{
+		Assert(reader->typmodmap != NULL);
+		mapent = hash_search(reader->typmodmap, &typmod,
+							 HASH_FIND, NULL);
+		if (mapent == NULL)
+			elog(ERROR, "found unrecognized remote typmod %d", typmod);
+		typmod = mapent->localtypmod;
+	}
+
+	/*
+	 * Fetch tupledesc and compute remap info.  We should probably cache this
+	 * so that we don't have to keep recomputing it.
+	 */
+	tupledesc = lookup_rowtype_tupdesc(typeid, typmod);
+	remapinfo = BuildRemapInfo(tupledesc);
+	DecrTupleDescRefCount(tupledesc);
+
+	/* Remap tuple. */
+	ItemPointerSetInvalid(&htup.t_self);
+	htup.t_tableOid = InvalidOid;
+	htup.t_len = HeapTupleHeaderGetDatumLength(tup);
+	htup.t_data = tup;
+	atup = TupleQueueRemapTuple(reader, tupledesc, remapinfo, &htup);
+	HeapTupleHeaderSetTypeId(atup->t_data, typeid);
+	HeapTupleHeaderSetTypMod(atup->t_data, typmod);
+	HeapTupleHeaderSetDatumLength(atup->t_data, htup.t_len);
+
+	/* And return the results. */
+	return HeapTupleHeaderGetDatum(atup->t_data);
+}
+
+/*
+ * Handle a control message from the tuple queue reader.
  *
- * The return value is NULL if there are no remaining queues or if
- * nowait = true and no queue returned a tuple without blocking.  *done, if
- * not NULL, is set to true when there are no remaining queues and false in
- * any other case.
+ * Control messages are sent when the remote side is sending tuples that
+ * contain transient record types.  We need to arrange to bless those
+ * record types locally and translate between remote and local typmods.
  */
-HeapTuple
-TupleQueueFunnelNext(TupleQueueFunnel *funnel, bool nowait, bool *done)
+static void
+TupleQueueHandleControlMessage(TupleQueueReader *reader, Size nbytes,
+							   char *data)
 {
-	int			waitpos = funnel->nextqueue;
+	int			natts;
+	int			remotetypmod;
+	bool		hasoid;
+	char	   *buf = data;
+	int			rc = 0;
+	int			i;
+	Form_pg_attribute *attrs;
+	MemoryContext oldcontext;
+	TupleDesc	tupledesc;
+	RecordTypemodMap *mapent;
+	bool		found;
+
+	/* Extract remote typmod. */
+	memcpy(&remotetypmod, &buf[rc], sizeof(int));
+	rc += sizeof(int);
+
+	/* Extract attribute count. */
+	memcpy(&natts, &buf[rc], sizeof(int));
+	rc += sizeof(int);
+
+	/* Extract hasoid flag. */
+	memcpy(&hasoid, &buf[rc], sizeof(bool));
+	rc += sizeof(bool);
+
+	/* Extract attribute details. */
+	oldcontext = MemoryContextSwitchTo(CurTransactionContext);
+	attrs = palloc(natts * sizeof(Form_pg_attribute));
+	for (i = 0; i < natts; ++i)
+	{
+		attrs[i] = palloc(sizeof(FormData_pg_attribute));
+		memcpy(attrs[i], &buf[rc], sizeof(FormData_pg_attribute));
+		rc += sizeof(FormData_pg_attribute);
+	}
+	MemoryContextSwitchTo(oldcontext);
+
+	/* We should have read the whole message. */
+	Assert(rc == nbytes);
+
+	/* Construct TupleDesc. */
+	tupledesc = CreateTupleDesc(natts, hasoid, attrs);
+	tupledesc = BlessTupleDesc(tupledesc);
 
-	/* Corner case: called before adding any queues, or after all are gone. */
-	if (funnel->nqueues == 0)
+	/* Create map if it doesn't exist already. */
+	if (reader->typmodmap == NULL)
 	{
-		if (done != NULL)
-			*done = true;
-		return NULL;
+		HASHCTL		ctl;
+
+		ctl.keysize = sizeof(int);
+		ctl.entrysize = sizeof(RecordTypemodMap);
+		ctl.hcxt = CurTransactionContext;
+		reader->typmodmap = hash_create("typmodmap hashtable",
+										100, &ctl, HASH_ELEM | HASH_CONTEXT);
 	}
 
-	if (done != NULL)
-		*done = false;
+	/* Create map entry. */
+	mapent = hash_search(reader->typmodmap, &remotetypmod, HASH_ENTER,
+						 &found);
+	if (found)
+		elog(ERROR, "duplicate message for typmod %d",
+			 remotetypmod);
+	mapent->localtypmod = tupledesc->tdtypmod;
+	elog(DEBUG3, "mapping remote typmod %d to local typmod %d",
+		 remotetypmod, tupledesc->tdtypmod);
+}
 
-	for (;;)
+/*
+ * Build a mapping indicating what remapping class applies to each attribute
+ * described by a tupledesc.
+ */
+static RemapInfo *
+BuildRemapInfo(TupleDesc tupledesc)
+{
+	RemapInfo  *remapinfo;
+	Size		size;
+	AttrNumber	i;
+	bool		noop = true;
+	StringInfoData buf;
+
+	initStringInfo(&buf);
+
+	size = offsetof(RemapInfo, mapping) +
+		sizeof(RemapClass) * tupledesc->natts;
+	remapinfo = MemoryContextAllocZero(TopMemoryContext, size);
+	remapinfo->natts = tupledesc->natts;
+	for (i = 0; i < tupledesc->natts; ++i)
 	{
-		shm_mq_handle *mqh = funnel->queue[funnel->nextqueue];
-		shm_mq_result result;
-		Size		nbytes;
-		void	   *data;
+		Form_pg_attribute attr = tupledesc->attrs[i];
 
-		/* Attempt to read a message. */
-		result = shm_mq_receive(mqh, &nbytes, &data, true);
+		remapinfo->mapping[i] = GetRemapClass(attr->atttypid);
+		if (remapinfo->mapping[i] != TQUEUE_REMAP_NONE)
+			noop = false;
+	}
 
-		/*
-		 * Normally, we advance funnel->nextqueue to the next queue at this
-		 * point, but if we're pointing to a queue that we've just discovered
-		 * is detached, then forget that queue and leave the pointer where it
-		 * is until the number of remaining queues fall below that pointer and
-		 * at that point make the pointer point to the first queue.
-		 */
-		if (result != SHM_MQ_DETACHED)
-			funnel->nextqueue = (funnel->nextqueue + 1) % funnel->nqueues;
-		else
-		{
-			--funnel->nqueues;
-			if (funnel->nqueues == 0)
-			{
-				if (done != NULL)
-					*done = true;
-				return NULL;
-			}
+	if (noop)
+	{
+		appendStringInfo(&buf, "noop");
+		pfree(remapinfo);
+		remapinfo = NULL;
+	}
 
-			memmove(&funnel->queue[funnel->nextqueue],
-					&funnel->queue[funnel->nextqueue + 1],
-					sizeof(shm_mq_handle *)
-					* (funnel->nqueues - funnel->nextqueue));
+	return remapinfo;
+}
 
-			if (funnel->nextqueue >= funnel->nqueues)
-				funnel->nextqueue = 0;
+/*
+ * Determine the remap class associated with a particular data type.
+ *
+ * Transient record types need to have the typmod applied on the sending side
+ * replaced with a value on the receiving side that has the same meaning.
+ *
+ * Arrays, range types, and all record types (including named composite types)
+ * need to be searched for transient record values buried within them.
+ * Surprisingly, a walker is required even when the indicated type is a
+ * composite type, because the actual value may be a compatible transient
+ * record type.
+ */
+static RemapClass
+GetRemapClass(Oid typeid)
+{
+	RemapClass	forceResult = TQUEUE_REMAP_NONE;
+	RemapClass	innerResult = TQUEUE_REMAP_NONE;
 
-			if (funnel->nextqueue < waitpos)
-				--waitpos;
+	for (;;)
+	{
+		HeapTuple	tup;
+		Form_pg_type typ;
 
+		/* Simple cases. */
+		if (typeid == RECORDOID)
+		{
+			innerResult = TQUEUE_REMAP_RECORD;
+			break;
+		}
+		if (typeid == RECORDARRAYOID)
+		{
+			innerResult = TQUEUE_REMAP_ARRAY;
+			break;
+		}
+
+		/* Otherwise, we need a syscache lookup to figure it out. */
+		tup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(typeid));
+		if (!HeapTupleIsValid(tup))
+			elog(ERROR, "cache lookup failed for type %u", typeid);
+		typ = (Form_pg_type) GETSTRUCT(tup);
+
+		/* Look through domains to underlying base type. */
+		if (typ->typtype == TYPTYPE_DOMAIN)
+		{
+			typeid = typ->typbasetype;
+			ReleaseSysCache(tup);
 			continue;
 		}
 
-		/* If we got a message, return it. */
-		if (result == SHM_MQ_SUCCESS)
+		/*
+		 * Look through arrays to underlying base type, but the final return
+		 * value must be either TQUEUE_REMAP_ARRAY or TQUEUE_REMAP_NONE.  (If
+		 * this is an array of integers, for example, we don't need to walk
+		 * it.)
+		 */
+		if (OidIsValid(typ->typelem) && typ->typlen == -1)
 		{
-			HeapTupleData htup;
-
-			/*
-			 * The tuple data we just read from the queue is only valid until
-			 * we again attempt to read from it.  Copy the tuple into a single
-			 * palloc'd chunk as callers will expect.
-			 */
-			ItemPointerSetInvalid(&htup.t_self);
-			htup.t_tableOid = InvalidOid;
-			htup.t_len = nbytes;
-			htup.t_data = data;
-			return heap_copytuple(&htup);
+			typeid = typ->typelem;
+			ReleaseSysCache(tup);
+			if (forceResult == TQUEUE_REMAP_NONE)
+				forceResult = TQUEUE_REMAP_ARRAY;
+			continue;
 		}
 
 		/*
-		 * If we've visited all of the queues, then we should either give up
-		 * and return NULL (if we're in non-blocking mode) or wait for the
-		 * process latch to be set (otherwise).
+		 * Similarly, look through ranges to the underlying base type, but the
+		 * final return value must be either TQUEUE_REMAP_RANGE or
+		 * TQUEUE_REMAP_NONE.
 		 */
-		if (funnel->nextqueue == waitpos)
+		if (typ->typtype == TYPTYPE_RANGE)
 		{
-			if (nowait)
-				return NULL;
-			WaitLatch(MyLatch, WL_LATCH_SET, 0);
-			CHECK_FOR_INTERRUPTS();
-			ResetLatch(MyLatch);
+			ReleaseSysCache(tup);
+			if (forceResult == TQUEUE_REMAP_NONE)
+				forceResult = TQUEUE_REMAP_RANGE;
+			typeid = get_range_subtype(typeid);
+			continue;
 		}
+
+		/* Walk composite types.  Nothing else needs special handling. */
+		if (typ->typtype == TYPTYPE_COMPOSITE)
+			innerResult = TQUEUE_REMAP_RECORD;
+		ReleaseSysCache(tup);
+		break;
 	}
+
+	if (innerResult != TQUEUE_REMAP_NONE && forceResult != TQUEUE_REMAP_NONE)
+		return forceResult;
+	return innerResult;
 }
diff --git a/src/include/executor/tqueue.h b/src/include/executor/tqueue.h
index 6f8eb73..6a668fa 100644
--- a/src/include/executor/tqueue.h
+++ b/src/include/executor/tqueue.h
@@ -21,11 +21,11 @@
 extern DestReceiver *CreateTupleQueueDestReceiver(shm_mq_handle *handle);
 
 /* Use these to receive tuples from a shm_mq. */
-typedef struct TupleQueueFunnel TupleQueueFunnel;
-extern TupleQueueFunnel *CreateTupleQueueFunnel(void);
-extern void DestroyTupleQueueFunnel(TupleQueueFunnel *funnel);
-extern void RegisterTupleQueueOnFunnel(TupleQueueFunnel *, shm_mq_handle *);
-extern HeapTuple TupleQueueFunnelNext(TupleQueueFunnel *, bool nowait,
-					 bool *done);
+typedef struct TupleQueueReader TupleQueueReader;
+extern TupleQueueReader *CreateTupleQueueReader(shm_mq_handle *handle,
+					   TupleDesc tupledesc);
+extern void DestroyTupleQueueReader(TupleQueueReader *funnel);
+extern HeapTuple TupleQueueReaderNext(TupleQueueReader *,
+					 bool nowait, bool *done);
 
 #endif   /* TQUEUE_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 939bc0e..58ec889 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -1963,7 +1963,9 @@ typedef struct GatherState
 	PlanState	ps;				/* its first field is NodeTag */
 	bool		initialized;
 	struct ParallelExecutorInfo *pei;
-	struct TupleQueueFunnel *funnel;
+	int			nreaders;
+	int			nextreader;
+	struct TupleQueueReader **reader;
 	TupleTableSlot *funnel_slot;
 	bool		need_to_scan_locally;
 } GatherState;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index feb821b..03e1d2c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2018,7 +2018,7 @@ TupleHashEntry
 TupleHashEntryData
 TupleHashIterator
 TupleHashTable
-TupleQueueFunnel
+TupleQueueReader
 TupleTableSlot
 Tuplesortstate
 Tuplestorestate
-- 
2.3.8 (Apple Git-58)

#16Robert Haas
robertmhaas@gmail.com
In reply to: Robert Haas (#15)
1 attachment(s)
Re: a raft of parallelism-related bug fixes

On Wed, Oct 28, 2015 at 10:23 AM, Robert Haas <robertmhaas@gmail.com> wrote:

On Sun, Oct 18, 2015 at 12:17 AM, Robert Haas <robertmhaas@gmail.com> wrote:

So reviewing patch 13 isn't possible without prior knowledge.

The basic question for patch 13 is whether ephemeral record types can
occur in executor tuples in any contexts that I haven't identified. I
know that a tuple table slot can have a column that is of type
record or record[], and those records can themselves contain
attributes of type record or record[], and so on as far down as you
like. I *think* that's the only case. For example, I don't believe
that a TupleTableSlot can contain a *named* record type that has an
anonymous record buried down inside of it somehow. But I'm not
positive I'm right about that.

I have done some more testing and investigation and determined that
this optimism was unwarranted. It turns out that the type information
for composite and record types gets stored in two different places.
First, the TupleTableSlot has a type OID, indicating the sort of the
value it expects to be stored for that slot attribute. Second, the
value itself contains a type OID and typmod. And these don't have to
match. For example, consider this query:

select row_to_json(i) from int8_tbl i(x,y);

Without i(x,y), the HeapTuple passed to row_to_json is labeled with
the pg_type OID of int8_tbl. But with the query as written, it's
labeled as an anonymous record type. If I jigger things by hacking
the code so that this is planned as Gather (single-copy) -> SeqScan,
with row_to_json evaluated at the Gather node, then the sequential
scan kicks out a tuple with a transient record type and stores it into
a slot whose type OID is still that of int8_tbl. My previous patch
failed to deal with that; the attached one does.
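
To make that mismatch concrete, here's a minimal sketch of the kind of
check involved for a composite column (this is not part of the attached
patch; the helper name is made up, and it assumes the column in question
holds a composite datum):

#include "postgres.h"
#include "access/htup_details.h"
#include "catalog/pg_type.h"
#include "executor/tuptable.h"
#include "fmgr.h"

/*
 * Hypothetical helper: return true if the composite value stored in the
 * 1-based column "attnum" of a slot is labeled as an anonymous record,
 * regardless of what type the slot's tuple descriptor declares for that
 * column.
 */
static bool
slot_column_is_transient_record(TupleTableSlot *slot, int attnum)
{
	bool		isnull;
	Datum		d = slot_getattr(slot, attnum, &isnull);
	HeapTupleHeader th;

	if (isnull)
		return false;

	/* The value itself carries a type OID and typmod in its header. */
	th = DatumGetHeapTupleHeader(d);
	return HeapTupleHeaderGetTypeId(th) == RECORDOID;
}

The same kind of mismatch arises for the funnel slot itself in the
example above: the tuple's header says RECORD with a backend-local
typmod, while the slot still expects int8_tbl's row type.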

The previous patch was also defective in a few other respects. The
most significant of those, maybe, is that it somehow thought it was OK
to assume that transient typmods from all workers could be treated
interchangeably rather than individually. To fix this, I've changed
the TupleQueueFunnel implemented by tqueue.c to be merely a
TupleQueueReader which handles reading from a single worker only.
nodeGather.c therefore creates one TupleQueueReader per worker instead
of a single TupleQueueFunnel for all workers; accordingly, the logic
for multiplexing multiple queues now lives in nodeGather.c. This is
probably how I should have done it originally; someone (I think Jeff
Davis) complained previously that tqueue.c had no business embedding
the round-robin policy decision, and he was right. So this addresses
that complaint as well.
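
For what it's worth, the caller-facing shape of the new API is roughly
the following (a simplified sketch only, not the actual gather_readnext
code; the readers/nreaders/next bookkeeping is assumed to be maintained
by the caller, and latch waits, order-preserving reader removal, and the
need_to_scan_locally fallback are all omitted):

#include "postgres.h"
#include "access/htup.h"
#include "executor/tqueue.h"

/*
 * Hypothetical helper: make one non-blocking sweep over the readers in
 * round-robin order, returning the first tuple found or NULL if nothing
 * is ready yet.  Readers whose queues have been detached are destroyed
 * and dropped from the array as we go.
 */
static HeapTuple
poll_readers_once(TupleQueueReader **readers, int *nreaders, int *next)
{
	int			attempts = *nreaders;

	while (attempts-- > 0 && *nreaders > 0)
	{
		bool		readerdone;
		HeapTuple	tup;

		tup = TupleQueueReaderNext(readers[*next], true, &readerdone);

		if (readerdone)
		{
			/* Queue detached: drop this reader, keep polling this index. */
			DestroyTupleQueueReader(readers[*next]);
			readers[*next] = readers[--(*nreaders)];
			if (*next >= *nreaders)
				*next = 0;
			continue;
		}

		/* Advance to the next reader for the following attempt. */
		*next = (*next + 1) % *nreaders;

		if (tup != NULL)
			return tup;
	}

	return NULL;
}

The real logic in gather_readnext() additionally preserves reader order
when one drops out, falls back to the local copy of the plan when
need_to_scan_locally is set, and waits on the process latch when every
queue comes up empty.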

Here is an updated version. This is rebased over recent commits, and
I added a missing check for attisdropped.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachments:

tqueue-record-types-v3.patch (text/x-diff; charset=US-ASCII)
From fa31300a884cc942d22c66d6a30fa4c2fcba3c6f Mon Sep 17 00:00:00 2001
From: Robert Haas <rhaas@postgresql.org>
Date: Wed, 7 Oct 2015 12:43:22 -0400
Subject: [PATCH 5/5] Modify tqueue infrastructure to support transient record
 types.

Commit 4a4e6893aa080b9094dadbe0e65f8a75fee41ac6, which introduced this
mechanism, failed to account for the fact that the RECORD pseudo-type
uses transient typmods that are only meaningful within a single
backend.  Transferring such tuples without modification between two
cooperating backends does not work.  This commit installs a system
for passing the tuple descriptors over the same shm_mq being used to
send the tuples themselves.  The two sides might not assign the same
transient typmod to any given tuple descriptor, so we must also
substitute the appropriate receiver-side typmod for the one used by
the sender.  That adds some CPU overhead, but still seems better than
being unable to pass records between cooperating parallel processes.

Along the way, move the logic for handling multiple tuple queues from
tqueue.c to nodeGather.c; tqueue.c now provides a TupleQueueReader,
which reads from a single queue, rather than a TupleQueueFunnel, which
potentially reads from multiple queues.  This change was suggested
previously as a way to make sure that nodeGather.c rather than tqueue.c
had policy control over the order in which to read from queues, but
it wasn't clear to me until now how good an idea it was.  typmod
mapping needs to be performed separately for each queue, and it is
much simpler if the tqueue.c code handles that and leaves multiplexing
multiple queues to higher layers of the stack.
---
 src/backend/executor/nodeGather.c | 138 ++++--
 src/backend/executor/tqueue.c     | 983 +++++++++++++++++++++++++++++++++-----
 src/include/executor/tqueue.h     |  12 +-
 src/include/nodes/execnodes.h     |   4 +-
 src/tools/pgindent/typedefs.list  |   2 +-
 5 files changed, 986 insertions(+), 153 deletions(-)

diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c
index 5f58961..850c67e 100644
--- a/src/backend/executor/nodeGather.c
+++ b/src/backend/executor/nodeGather.c
@@ -36,11 +36,13 @@
 #include "executor/nodeGather.h"
 #include "executor/nodeSubplan.h"
 #include "executor/tqueue.h"
+#include "miscadmin.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
 
 
 static TupleTableSlot *gather_getnext(GatherState *gatherstate);
+static HeapTuple gather_readnext(GatherState *gatherstate);
 static void ExecShutdownGatherWorkers(GatherState *node);
 
 
@@ -125,6 +127,7 @@ ExecInitGather(Gather *node, EState *estate, int eflags)
 TupleTableSlot *
 ExecGather(GatherState *node)
 {
+	TupleTableSlot *fslot = node->funnel_slot;
 	int			i;
 	TupleTableSlot *slot;
 	TupleTableSlot *resultSlot;
@@ -148,6 +151,7 @@ ExecGather(GatherState *node)
 		 */
 		if (gather->num_workers > 0 && IsInParallelMode())
 		{
+			ParallelContext *pcxt;
 			bool	got_any_worker = false;
 
 			/* Initialize the workers required to execute Gather node. */
@@ -160,18 +164,26 @@ ExecGather(GatherState *node)
 			 * Register backend workers. We might not get as many as we
 			 * requested, or indeed any at all.
 			 */
-			LaunchParallelWorkers(node->pei->pcxt);
+			pcxt = node->pei->pcxt;
+			LaunchParallelWorkers(pcxt);
 
-			/* Set up a tuple queue to collect the results. */
-			node->funnel = CreateTupleQueueFunnel();
-			for (i = 0; i < node->pei->pcxt->nworkers; ++i)
+			/* Set up tuple queue readers to read the results. */
+			if (pcxt->nworkers > 0)
 			{
-				if (node->pei->pcxt->worker[i].bgwhandle)
+				node->nreaders = 0;
+				node->reader =
+					palloc(pcxt->nworkers * sizeof(TupleQueueReader *));
+
+				for (i = 0; i < pcxt->nworkers; ++i)
 				{
+					if (pcxt->worker[i].bgwhandle == NULL)
+						continue;
+
 					shm_mq_set_handle(node->pei->tqueue[i],
-									  node->pei->pcxt->worker[i].bgwhandle);
-					RegisterTupleQueueOnFunnel(node->funnel,
-											   node->pei->tqueue[i]);
+									  pcxt->worker[i].bgwhandle);
+					node->reader[node->nreaders++] =
+						CreateTupleQueueReader(node->pei->tqueue[i],
+											   fslot->tts_tupleDescriptor);
 					got_any_worker = true;
 				}
 			}
@@ -182,7 +194,7 @@ ExecGather(GatherState *node)
 		}
 
 		/* Run plan locally if no workers or not single-copy. */
-		node->need_to_scan_locally = (node->funnel == NULL)
+		node->need_to_scan_locally = (node->reader == NULL)
 			|| !gather->single_copy;
 		node->initialized = true;
 	}
@@ -254,13 +266,9 @@ ExecEndGather(GatherState *node)
 }
 
 /*
- * gather_getnext
- *
- * Get the next tuple from shared memory queue.  This function
- * is responsible for fetching tuples from all the queues associated
- * with worker backends used in Gather node execution and if there is
- * no data available from queues or no worker is available, it does
- * fetch the data from local node.
+ * Read the next tuple.  We might fetch a tuple from one of the tuple queues
+ * using gather_readnext, or if no tuple queue contains a tuple and the
+ * single_copy flag is not set, we might generate one locally instead.
  */
 static TupleTableSlot *
 gather_getnext(GatherState *gatherstate)
@@ -270,18 +278,11 @@ gather_getnext(GatherState *gatherstate)
 	TupleTableSlot *fslot = gatherstate->funnel_slot;
 	HeapTuple	tup;
 
-	while (gatherstate->funnel != NULL || gatherstate->need_to_scan_locally)
+	while (gatherstate->reader != NULL || gatherstate->need_to_scan_locally)
 	{
-		if (gatherstate->funnel != NULL)
+		if (gatherstate->reader != NULL)
 		{
-			bool		done = false;
-
-			/* wait only if local scan is done */
-			tup = TupleQueueFunnelNext(gatherstate->funnel,
-									   gatherstate->need_to_scan_locally,
-									   &done);
-			if (done)
-				ExecShutdownGatherWorkers(gatherstate);
+			tup = gather_readnext(gatherstate);
 
 			if (HeapTupleIsValid(tup))
 			{
@@ -309,6 +310,80 @@ gather_getnext(GatherState *gatherstate)
 	return ExecClearTuple(fslot);
 }
 
+/*
+ * Attempt to read a tuple from one of our parallel workers.
+ */
+static HeapTuple
+gather_readnext(GatherState *gatherstate)
+{
+	int		waitpos = gatherstate->nextreader;
+
+	for (;;)
+	{
+		TupleQueueReader *reader;
+		HeapTuple	tup;
+		bool		readerdone;
+
+		/* Make sure we've read all messages from workers. */
+		HandleParallelMessages();
+
+		/* Attempt to read a tuple, but don't block if none is available. */
+		reader = gatherstate->reader[gatherstate->nextreader];
+		tup = TupleQueueReaderNext(reader, true, &readerdone);
+
+		/*
+		 * If this reader is done, remove it.  If all readers are done,
+		 * clean up remaining worker state.
+		 */
+		if (readerdone)
+		{
+			DestroyTupleQueueReader(reader);
+			--gatherstate->nreaders;
+			if (gatherstate->nreaders == 0)
+			{
+				ExecShutdownGather(gatherstate);
+				return NULL;
+			}
+			else
+			{
+				memmove(&gatherstate->reader[gatherstate->nextreader],
+						&gatherstate->reader[gatherstate->nextreader + 1],
+						sizeof(TupleQueueReader *)
+						* (gatherstate->nreaders - gatherstate->nextreader));
+				if (gatherstate->nextreader >= gatherstate->nreaders)
+					gatherstate->nextreader = 0;
+				if (gatherstate->nextreader < waitpos)
+					--waitpos;
+			}
+			continue;
+		}
+
+		/* Advance nextreader pointer in round-robin fashion. */
+		gatherstate->nextreader =
+			(gatherstate->nextreader + 1) % gatherstate->nreaders;
+
+		/* If we got a tuple, return it. */
+		if (tup)
+			return tup;
+
+		/* Have we visited every TupleQueueReader? */
+		if (gatherstate->nextreader == waitpos)
+		{
+			/*
+			 * If (still) running plan locally, return NULL so caller can
+			 * generate another tuple from the local copy of the plan.
+			 */
+			if (gatherstate->need_to_scan_locally)
+				return NULL;
+
+			/* Nothing to do except wait for developments. */
+			WaitLatch(MyLatch, WL_LATCH_SET, 0);
+			CHECK_FOR_INTERRUPTS();
+			ResetLatch(MyLatch);
+		}
+	}
+}
+
 /* ----------------------------------------------------------------
  *		ExecShutdownGatherWorkers
  *
@@ -320,11 +395,14 @@ gather_getnext(GatherState *gatherstate)
 void
 ExecShutdownGatherWorkers(GatherState *node)
 {
-	/* Shut down tuple queue funnel before shutting down workers. */
-	if (node->funnel != NULL)
+	/* Shut down tuple queue readers before shutting down workers. */
+	if (node->reader != NULL)
 	{
-		DestroyTupleQueueFunnel(node->funnel);
-		node->funnel = NULL;
+		int		i;
+
+		for (i = 0; i < node->nreaders; ++i)
+			DestroyTupleQueueReader(node->reader[i]);
+		node->reader = NULL;
 	}
 
 	/* Now shut down the workers. */
diff --git a/src/backend/executor/tqueue.c b/src/backend/executor/tqueue.c
index 67143d3..f465b1d 100644
--- a/src/backend/executor/tqueue.c
+++ b/src/backend/executor/tqueue.c
@@ -4,10 +4,15 @@
  *	  Use shm_mq to send & receive tuples between parallel backends
  *
  * A DestReceiver of type DestTupleQueue, which is a TQueueDestReceiver
- * under the hood, writes tuples from the executor to a shm_mq.
+ * under the hood, writes tuples from the executor to a shm_mq.  If
+ * necessary, it also writes control messages describing transient
+ * record types used within the tuple.
  *
- * A TupleQueueFunnel helps manage the process of reading tuples from
- * one or more shm_mq objects being used as tuple queues.
+ * A TupleQueueReader reads tuples, and control messages if any are sent,
+ * from a shm_mq and returns the tuples.  If transient record types are
+ * in use, it registers those types based on the received control messages
+ * and rewrites the typemods sent by the remote side to the corresponding
+ * local record typemods.
  *
  * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
@@ -21,37 +26,404 @@
 #include "postgres.h"
 
 #include "access/htup_details.h"
+#include "catalog/pg_type.h"
 #include "executor/tqueue.h"
+#include "funcapi.h"
+#include "lib/stringinfo.h"
 #include "miscadmin.h"
+#include "utils/array.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+#include "utils/rangetypes.h"
+#include "utils/syscache.h"
+#include "utils/typcache.h"
+
+typedef enum
+{
+	TQUEUE_REMAP_NONE,			/* no special processing required */
+	TQUEUE_REMAP_ARRAY,			/* array */
+	TQUEUE_REMAP_RANGE,			/* range */
+	TQUEUE_REMAP_RECORD			/* composite type, named or anonymous */
+}	RemapClass;
+
+typedef struct
+{
+	int			natts;
+	RemapClass	mapping[FLEXIBLE_ARRAY_MEMBER];
+}	RemapInfo;
 
 typedef struct
 {
 	DestReceiver pub;
 	shm_mq_handle *handle;
+	MemoryContext tmpcontext;
+	HTAB	   *recordhtab;
+	char		mode;
+	TupleDesc	tupledesc;
+	RemapInfo  *remapinfo;
 }	TQueueDestReceiver;
 
-struct TupleQueueFunnel
+typedef struct RecordTypemodMap
 {
-	int			nqueues;
-	int			maxqueues;
-	int			nextqueue;
-	shm_mq_handle **queue;
+	int			remotetypmod;
+	int			localtypmod;
+}	RecordTypemodMap;
+
+struct TupleQueueReader
+{
+	shm_mq_handle *queue;
+	char		mode;
+	TupleDesc	tupledesc;
+	RemapInfo  *remapinfo;
+	HTAB	   *typmodmap;
 };
 
+#define		TUPLE_QUEUE_MODE_CONTROL			'c'
+#define		TUPLE_QUEUE_MODE_DATA				'd'
+
+static void tqueueWalk(TQueueDestReceiver * tqueue, RemapClass walktype,
+		   Datum value);
+static void tqueueWalkRecord(TQueueDestReceiver * tqueue, Datum value);
+static void tqueueWalkArray(TQueueDestReceiver * tqueue, Datum value);
+static void tqueueWalkRange(TQueueDestReceiver * tqueue, Datum value);
+static void tqueueSendTypmodInfo(TQueueDestReceiver * tqueue, int typmod,
+					 TupleDesc tupledesc);
+static void TupleQueueHandleControlMessage(TupleQueueReader *reader,
+							   Size nbytes, char *data);
+static HeapTuple TupleQueueHandleDataMessage(TupleQueueReader *reader,
+							Size nbytes, HeapTupleHeader data);
+static HeapTuple TupleQueueRemapTuple(TupleQueueReader *reader,
+					 TupleDesc tupledesc, RemapInfo * remapinfo,
+					 HeapTuple tuple);
+static Datum TupleQueueRemap(TupleQueueReader *reader, RemapClass remapclass,
+				Datum value);
+static Datum TupleQueueRemapArray(TupleQueueReader *reader, Datum value);
+static Datum TupleQueueRemapRange(TupleQueueReader *reader, Datum value);
+static Datum TupleQueueRemapRecord(TupleQueueReader *reader, Datum value);
+static RemapClass GetRemapClass(Oid typeid);
+static RemapInfo *BuildRemapInfo(TupleDesc tupledesc);
+
 /*
  * Receive a tuple.
+ *
+ * This is, at core, pretty simple: just send the tuple to the designated
+ * shm_mq.  The complicated part is that if the tuple contains transient
+ * record types (see lookup_rowtype_tupdesc), we need to send control
+ * information to the shm_mq receiver so that those typemods can be correctly
+ * interpreted, as they are merely held in a backend-local cache.  Worse, the
+ * record type may not be at the top level: we could have a range over an array
+ * type over a range type over a range type over an array type over a record,
+ * or something like that.
  */
 static void
 tqueueReceiveSlot(TupleTableSlot *slot, DestReceiver *self)
 {
 	TQueueDestReceiver *tqueue = (TQueueDestReceiver *) self;
+	TupleDesc	tupledesc = slot->tts_tupleDescriptor;
 	HeapTuple	tuple;
+	HeapTupleHeader tup;
+
+	/*
+	 * Test to see whether the tupledesc has changed; if so, set up for the
+	 * new tupledesc.  This is a strange test both because the executor really
+	 * shouldn't change the tupledesc, and also because it would be unsafe if
+	 * the old tupledesc could be freed and a new one allocated at the same
+	 * address.  But since some very old code in printtup.c uses this test, we
+	 * adopt it here as well.
+	 */
+	if (tqueue->tupledesc != tupledesc ||
+		tqueue->remapinfo->natts != tupledesc->natts)
+	{
+		if (tqueue->remapinfo != NULL)
+			pfree(tqueue->remapinfo);
+		tqueue->remapinfo = BuildRemapInfo(tupledesc);
+	}
 
 	tuple = ExecMaterializeSlot(slot);
+	tup = tuple->t_data;
+
+	/*
+	 * If the types being transmitted cannot require any record typemod
+	 * mapping, we can skip a good deal of work.
+	 */
+	if (tqueue->remapinfo != NULL)
+	{
+		RemapInfo  *remapinfo = tqueue->remapinfo;
+		AttrNumber	i;
+		MemoryContext oldcontext = NULL;
+
+		/* Deform the tuple so we can examine it, if not done already. */
+		slot_getallattrs(slot);
+
+		/* Iterate over each attribute and search it for transient typemods. */
+		Assert(slot->tts_tupleDescriptor->natts == remapinfo->natts);
+		for (i = 0; i < remapinfo->natts; ++i)
+		{
+			/* Ignore nulls and types that don't need special handling. */
+			if (slot->tts_isnull[i] ||
+				remapinfo->mapping[i] == TQUEUE_REMAP_NONE)
+				continue;
+
+			/* Switch to temporary memory context to avoid leaking. */
+			if (oldcontext == NULL)
+			{
+				if (tqueue->tmpcontext == NULL)
+					tqueue->tmpcontext =
+						AllocSetContextCreate(TopMemoryContext,
+											  "tqueue temporary context",
+											  ALLOCSET_DEFAULT_MINSIZE,
+											  ALLOCSET_DEFAULT_INITSIZE,
+											  ALLOCSET_DEFAULT_MAXSIZE);
+				oldcontext = MemoryContextSwitchTo(tqueue->tmpcontext);
+			}
+
+			/* Invoke the appropriate walker function. */
+			tqueueWalk(tqueue, remapinfo->mapping[i], slot->tts_values[i]);
+		}
+
+		/* If we used the temp context, reset it and restore prior context. */
+		if (oldcontext != NULL)
+		{
+			MemoryContextSwitchTo(oldcontext);
+			MemoryContextReset(tqueue->tmpcontext);
+		}
+
+		/* If we entered control mode, switch back to data mode. */
+		if (tqueue->mode != TUPLE_QUEUE_MODE_DATA)
+		{
+			tqueue->mode = TUPLE_QUEUE_MODE_DATA;
+			shm_mq_send(tqueue->handle, sizeof(char), &tqueue->mode, false);
+		}
+	}
+
+	/* Send the tuple itself. */
 	shm_mq_send(tqueue->handle, tuple->t_len, tuple->t_data, false);
 }
 
 /*
+ * Invoke the appropriate walker function based on the given RemapClass.
+ */
+static void
+tqueueWalk(TQueueDestReceiver * tqueue, RemapClass walktype, Datum value)
+{
+	check_stack_depth();
+
+	switch (walktype)
+	{
+		case TQUEUE_REMAP_NONE:
+			break;
+		case TQUEUE_REMAP_ARRAY:
+			tqueueWalkArray(tqueue, value);
+			break;
+		case TQUEUE_REMAP_RANGE:
+			tqueueWalkRange(tqueue, value);
+			break;
+		case TQUEUE_REMAP_RECORD:
+			tqueueWalkRecord(tqueue, value);
+			break;
+	}
+}
+
+/*
+ * Walk a record and send control messages for transient record types
+ * contained therein.
+ */
+static void
+tqueueWalkRecord(TQueueDestReceiver * tqueue, Datum value)
+{
+	HeapTupleHeader tup;
+	Oid			typeid;
+	Oid			typmod;
+	TupleDesc	tupledesc;
+	RemapInfo  *remapinfo;
+
+	/* Extract typmod from tuple. */
+	tup = DatumGetHeapTupleHeader(value);
+	typeid = HeapTupleHeaderGetTypeId(tup);
+	typmod = HeapTupleHeaderGetTypMod(tup);
+
+	/* Look up tuple descriptor in typecache. */
+	tupledesc = lookup_rowtype_tupdesc(typeid, typmod);
+
+	/*
+	 * If this is a transient record type, send its TupleDesc as a control
+	 * message.  (tqueueSendTypemodInfo is smart enough to do this only once
+	 * per typmod.)
+	 */
+	if (typeid == RECORDOID)
+		tqueueSendTypmodInfo(tqueue, typmod, tupledesc);
+
+	/*
+	 * Build the remap information for this tupledesc.  We might want to think
+	 * about keeping a cache of this information keyed by typeid and typemod,
+	 * but let's keep it simple for now.
+	 */
+	remapinfo = BuildRemapInfo(tupledesc);
+
+	/*
+	 * If remapping is required, deform the tuple and process each field.  When
+	 * BuildRemapInfo returns NULL, the data types are such that there can be no
+	 * transient record types here, so we can skip all this work.
+	 */
+	if (remapinfo != NULL)
+	{
+		Datum	   *values;
+		bool	   *isnull;
+		HeapTupleData tdata;
+		AttrNumber	i;
+
+		/* Deform the tuple so we can check each column within. */
+		values = palloc(tupledesc->natts * sizeof(Datum));
+		isnull = palloc(tupledesc->natts * sizeof(bool));
+		tdata.t_len = HeapTupleHeaderGetDatumLength(tup);
+		ItemPointerSetInvalid(&(tdata.t_self));
+		tdata.t_tableOid = InvalidOid;
+		tdata.t_data = tup;
+		heap_deform_tuple(&tdata, tupledesc, values, isnull);
+
+		/* Recursively check each non-NULL attribute. */
+		for (i = 0; i < tupledesc->natts; ++i)
+			if (!isnull[i])
+				tqueueWalk(tqueue, remapinfo->mapping[i], values[i]);
+	}
+
+	/* Release reference count acquired by lookup_rowtype_tupdesc. */
+	DecrTupleDescRefCount(tupledesc);
+}
+
+/*
+ * Walk an array and send control messages for transient record types
+ * contained therein.
+ */
+static void
+tqueueWalkArray(TQueueDestReceiver * tqueue, Datum value)
+{
+	ArrayType  *arr = DatumGetArrayTypeP(value);
+	Oid			typeid = ARR_ELEMTYPE(arr);
+	RemapClass	remapclass;
+	int16		typlen;
+	bool		typbyval;
+	char		typalign;
+	Datum	   *elem_values;
+	bool	   *elem_nulls;
+	int			num_elems;
+	int			i;
+
+	remapclass = GetRemapClass(typeid);
+
+	/*
+	 * If the elements of the array don't need to be walked, we shouldn't have
+	 * been called in the first place: GetRemapClass should have returned
+	 * TQUEUE_REMAP_NONE when asked about this array type.
+	 */
+	Assert(remapclass != TQUEUE_REMAP_NONE);
+
+	/* Deconstruct the array. */
+	get_typlenbyvalalign(typeid, &typlen, &typbyval, &typalign);
+	deconstruct_array(arr, typeid, typlen, typbyval, typalign,
+					  &elem_values, &elem_nulls, &num_elems);
+
+	/* Walk each element. */
+	for (i = 0; i < num_elems; ++i)
+		if (!elem_nulls[i])
+			tqueueWalk(tqueue, remapclass, elem_values[i]);
+}
+
+/*
+ * Walk a range type and send control messages for transient record types
+ * contained therein.
+ */
+static void
+tqueueWalkRange(TQueueDestReceiver * tqueue, Datum value)
+{
+	RangeType  *range = DatumGetRangeType(value);
+	Oid			typeid = RangeTypeGetOid(range);
+	RemapClass	remapclass;
+	TypeCacheEntry *typcache;
+	RangeBound	lower;
+	RangeBound	upper;
+	bool		empty;
+
+	/*
+	 * Extract the lower and upper bounds.  It might be worth implementing
+	 * some caching scheme here so that we don't look up the same typeids in
+	 * the type cache repeatedly, but for now let's keep it simple.
+	 */
+	typcache = lookup_type_cache(typeid, TYPECACHE_RANGE_INFO);
+	if (typcache->rngelemtype == NULL)
+		elog(ERROR, "type %u is not a range type", typeid);
+	range_deserialize(typcache, range, &lower, &upper, &empty);
+
+	/* Nothing to do for an empty range. */
+	if (empty)
+		return;
+
+	/*
+	 * If the range bounds don't need to be walked, we shouldn't have been
+	 * called in the first place: GetRemapClass should have returned
+	 * TQUEUE_REMAP_NONE when asked about this range type.
+	 */
+	remapclass = GetRemapClass(typeid);
+	Assert(remapclass != TQUEUE_REMAP_NONE);
+
+	/* Walk each bound, if present. */
+	if (!upper.infinite)
+		tqueueWalk(tqueue, remapclass, upper.val);
+	if (!lower.infinite)
+		tqueueWalk(tqueue, remapclass, lower.val);
+}
+
+/*
+ * Send tuple descriptor information for a transient typemod, unless we've
+ * already done so previously.
+ */
+static void
+tqueueSendTypmodInfo(TQueueDestReceiver * tqueue, int typmod,
+					 TupleDesc tupledesc)
+{
+	StringInfoData buf;
+	bool		found;
+	AttrNumber	i;
+
+	/* Initialize hash table if not done yet. */
+	if (tqueue->recordhtab == NULL)
+	{
+		HASHCTL		ctl;
+
+		ctl.keysize = sizeof(int);
+		ctl.entrysize = sizeof(int);
+		ctl.hcxt = TopMemoryContext;
+		tqueue->recordhtab = hash_create("tqueue record hashtable",
+										 100, &ctl, HASH_ELEM | HASH_CONTEXT);
+	}
+
+	/* Have we already seen this record type?  If not, must report it. */
+	hash_search(tqueue->recordhtab, &typmod, HASH_ENTER, &found);
+	if (found)
+		return;
+
+	/* If message queue is in data mode, switch to control mode. */
+	if (tqueue->mode != TUPLE_QUEUE_MODE_CONTROL)
+	{
+		tqueue->mode = TUPLE_QUEUE_MODE_CONTROL;
+		shm_mq_send(tqueue->handle, sizeof(char), &tqueue->mode, false);
+	}
+
+	/* Assemble a control message. */
+	initStringInfo(&buf);
+	appendBinaryStringInfo(&buf, (char *) &typmod, sizeof(int));
+	appendBinaryStringInfo(&buf, (char *) &tupledesc->natts, sizeof(int));
+	appendBinaryStringInfo(&buf, (char *) &tupledesc->tdhasoid,
+						   sizeof(bool));
+	for (i = 0; i < tupledesc->natts; ++i)
+		appendBinaryStringInfo(&buf, (char *) tupledesc->attrs[i],
+							   sizeof(FormData_pg_attribute));
+
+	/* Send control message. */
+	shm_mq_send(tqueue->handle, buf.len, buf.data, false);
+}
+
+/*
  * Prepare to receive tuples from executor.
  */
 static void
@@ -77,6 +449,14 @@ tqueueShutdownReceiver(DestReceiver *self)
 static void
 tqueueDestroyReceiver(DestReceiver *self)
 {
+	TQueueDestReceiver *tqueue = (TQueueDestReceiver *) self;
+
+	if (tqueue->tmpcontext != NULL)
+		MemoryContextDelete(tqueue->tmpcontext);
+	if (tqueue->recordhtab != NULL)
+		hash_destroy(tqueue->recordhtab);
+	if (tqueue->remapinfo != NULL)
+		pfree(tqueue->remapinfo);
 	pfree(self);
 }
 
@@ -96,169 +476,542 @@ CreateTupleQueueDestReceiver(shm_mq_handle *handle)
 	self->pub.rDestroy = tqueueDestroyReceiver;
 	self->pub.mydest = DestTupleQueue;
 	self->handle = handle;
+	self->tmpcontext = NULL;
+	self->recordhtab = NULL;
+	self->mode = TUPLE_QUEUE_MODE_DATA;
+	self->remapinfo = NULL;
 
 	return (DestReceiver *) self;
 }
 
 /*
- * Create a tuple queue funnel.
+ * Create a tuple queue reader.
  */
-TupleQueueFunnel *
-CreateTupleQueueFunnel(void)
+TupleQueueReader *
+CreateTupleQueueReader(shm_mq_handle *handle, TupleDesc tupledesc)
 {
-	TupleQueueFunnel *funnel = palloc0(sizeof(TupleQueueFunnel));
+	TupleQueueReader *reader = palloc0(sizeof(TupleQueueReader));
 
-	funnel->maxqueues = 8;
-	funnel->queue = palloc(funnel->maxqueues * sizeof(shm_mq_handle *));
+	reader->queue = handle;
+	reader->mode = TUPLE_QUEUE_MODE_DATA;
+	reader->tupledesc = tupledesc;
+	reader->remapinfo = BuildRemapInfo(tupledesc);
 
-	return funnel;
+	return reader;
 }
 
 /*
- * Destroy a tuple queue funnel.
+ * Destroy a tuple queue reader.
  */
 void
-DestroyTupleQueueFunnel(TupleQueueFunnel *funnel)
+DestroyTupleQueueReader(TupleQueueReader *reader)
 {
-	int			i;
+	shm_mq_detach(shm_mq_get_queue(reader->queue));
+	if (reader->remapinfo != NULL)
+		pfree(reader->remapinfo);
+	pfree(reader);
+}
+
+/*
+ * Fetch a tuple from a tuple queue reader.
+ *
+ * Even when shm_mq_receive() returns SHM_MQ_WOULD_BLOCK, this can still
+ * accumulate bytes from a partially-read message, so it's useful to call
+ * this with nowait = true even if nothing is returned.
+ *
+ * The return value is NULL if the queue is detached or if nowait = true
+ * and no tuple is ready to return.  *done, if not NULL, is set to true
+ * when the queue is detached and otherwise to false.
+ */
+HeapTuple
+TupleQueueReaderNext(TupleQueueReader *reader, bool nowait, bool *done)
+{
+	shm_mq_result result;
+
+	if (done != NULL)
+		*done = false;
+
+	for (;;)
+	{
+		Size		nbytes;
+		void	   *data;
+
+		/* Attempt to read a message. */
+		result = shm_mq_receive(reader->queue, &nbytes, &data, true);
+
+		/* If queue is detached, set *done and return NULL. */
+		if (result == SHM_MQ_DETACHED)
+		{
+			if (done != NULL)
+				*done = true;
+			return NULL;
+		}
+
+		/* In non-blocking mode, bail out if no message ready yet. */
+		if (result == SHM_MQ_WOULD_BLOCK)
+			return NULL;
+		Assert(result == SHM_MQ_SUCCESS);
 
-	for (i = 0; i < funnel->nqueues; i++)
-		shm_mq_detach(shm_mq_get_queue(funnel->queue[i]));
-	pfree(funnel->queue);
-	pfree(funnel);
+		/*
+		 * OK, we got a message.  Process it.
+		 *
+		 * One-byte messages are mode switch messages, so that we can switch
+		 * between "control" and "data" mode.  When in "data" mode, each
+		 * message (unless exactly one byte) is a tuple.  When in "control"
+		 * mode, each message provides a transient-typmod-to-tupledesc mapping
+		 * so we can interpret future tuples.
+		 */
+		if (nbytes == 1)
+		{
+			/* Mode switch message. */
+			reader->mode = ((char *) data)[0];
+		}
+		else if (reader->mode == TUPLE_QUEUE_MODE_DATA)
+		{
+			/* Tuple data. */
+			return TupleQueueHandleDataMessage(reader, nbytes, data);
+		}
+		else if (reader->mode == TUPLE_QUEUE_MODE_CONTROL)
+		{
+			/* Control message, describing a transient record type. */
+			TupleQueueHandleControlMessage(reader, nbytes, data);
+		}
+		else
+			elog(ERROR, "invalid mode: %d", (int) reader->mode);
+	}
 }
 
 /*
- * Remember the shared memory queue handle in funnel.
+ * Handle a data message - that is, a tuple - from the remote side.
  */
-void
-RegisterTupleQueueOnFunnel(TupleQueueFunnel *funnel, shm_mq_handle *handle)
+static HeapTuple
+TupleQueueHandleDataMessage(TupleQueueReader *reader,
+							Size nbytes,
+							HeapTupleHeader data)
 {
-	if (funnel->nqueues < funnel->maxqueues)
+	HeapTupleData htup;
+
+	ItemPointerSetInvalid(&htup.t_self);
+	htup.t_tableOid = InvalidOid;
+	htup.t_len = nbytes;
+	htup.t_data = data;
+
+	return TupleQueueRemapTuple(reader, reader->tupledesc, reader->remapinfo,
+								&htup);
+}
+
+/*
+ * Remap tuple typmods per control information received from remote side.
+ */
+static HeapTuple
+TupleQueueRemapTuple(TupleQueueReader *reader, TupleDesc tupledesc,
+					 RemapInfo * remapinfo, HeapTuple tuple)
+{
+	Datum	   *values;
+	bool	   *isnull;
+	bool		dirty = false;
+	int			i;
+
+	/*
+	 * If no remapping is necessary, just copy the tuple into a single
+	 * palloc'd chunk, as caller will expect.
+	 */
+	if (remapinfo == NULL)
+		return heap_copytuple(tuple);
+
+	/* Deform tuple so we can remap record typmods for individual attrs. */
+	values = palloc(tupledesc->natts * sizeof(Datum));
+	isnull = palloc(tupledesc->natts * sizeof(bool));
+	heap_deform_tuple(tuple, tupledesc, values, isnull);
+	Assert(tupledesc->natts == remapinfo->natts);
+
+	/* Recursively check each non-NULL attribute. */
+	for (i = 0; i < tupledesc->natts; ++i)
 	{
-		funnel->queue[funnel->nqueues++] = handle;
-		return;
+		if (isnull[i] || remapinfo->mapping[i] == TQUEUE_REMAP_NONE)
+			continue;
+		values[i] = TupleQueueRemap(reader, remapinfo->mapping[i], values[i]);
+		dirty = true;
 	}
 
-	if (funnel->nqueues >= funnel->maxqueues)
+	/* Reform the modified tuple. */
+	return heap_form_tuple(tupledesc, values, isnull);
+}
+
+/*
+ * Remap a value based on the specified remap class.
+ */
+static Datum
+TupleQueueRemap(TupleQueueReader *reader, RemapClass remapclass, Datum value)
+{
+	check_stack_depth();
+
+	switch (remapclass)
 	{
-		int			newsize = funnel->nqueues * 2;
+		case TQUEUE_REMAP_NONE:
+			/* caller probably shouldn't have called us at all, but... */
+			return value;
+
+		case TQUEUE_REMAP_ARRAY:
+			return TupleQueueRemapArray(reader, value);
 
-		Assert(funnel->nqueues == funnel->maxqueues);
+		case TQUEUE_REMAP_RANGE:
+			return TupleQueueRemapRange(reader, value);
 
-		funnel->queue = repalloc(funnel->queue,
-								 newsize * sizeof(shm_mq_handle *));
-		funnel->maxqueues = newsize;
+		case TQUEUE_REMAP_RECORD:
+			return TupleQueueRemapRecord(reader, value);
 	}
+}
 
-	funnel->queue[funnel->nqueues++] = handle;
+/*
+ * Remap an array.
+ */
+static Datum
+TupleQueueRemapArray(TupleQueueReader *reader, Datum value)
+{
+	ArrayType  *arr = DatumGetArrayTypeP(value);
+	Oid			typeid = ARR_ELEMTYPE(arr);
+	RemapClass	remapclass;
+	int16		typlen;
+	bool		typbyval;
+	char		typalign;
+	Datum	   *elem_values;
+	bool	   *elem_nulls;
+	int			num_elems;
+	int			i;
+
+	remapclass = GetRemapClass(typeid);
+
+	/*
+	 * If the elements of the array don't need to be walked, we shouldn't have
+	 * been called in the first place: GetRemapClass should have returned
+	 * TQUEUE_REMAP_NONE when asked about this array type.
+	 */
+	Assert(remapclass != TQUEUE_REMAP_NONE);
+
+	/* Deconstruct the array. */
+	get_typlenbyvalalign(typeid, &typlen, &typbyval, &typalign);
+	deconstruct_array(arr, typeid, typlen, typbyval, typalign,
+					  &elem_values, &elem_nulls, &num_elems);
+
+	/* Remap each element. */
+	for (i = 0; i < num_elems; ++i)
+		if (!elem_nulls[i])
+			elem_values[i] = TupleQueueRemap(reader, remapclass,
+											 elem_values[i]);
+
+	/* Reconstruct and return the array.  */
+	arr = construct_md_array(elem_values, elem_nulls,
+							 ARR_NDIM(arr), ARR_DIMS(arr), ARR_LBOUND(arr),
+							 typeid, typlen, typbyval, typalign);
+	return PointerGetDatum(arr);
 }
 
 /*
- * Fetch a tuple from a tuple queue funnel.
- *
- * We try to read from the queues in round-robin fashion so as to avoid
- * the situation where some workers get their tuples read expediently while
- * others are barely ever serviced.
- *
- * Even when nowait = false, we read from the individual queues in
- * non-blocking mode.  Even when shm_mq_receive() returns SHM_MQ_WOULD_BLOCK,
- * it can still accumulate bytes from a partially-read message, so doing it
- * this way should outperform doing a blocking read on each queue in turn.
- *
- * The return value is NULL if there are no remaining queues or if
- * nowait = true and no queue returned a tuple without blocking.  *done, if
- * not NULL, is set to true when there are no remaining queues and false in
- * any other case.
+ * Remap a range type.
  */
-HeapTuple
-TupleQueueFunnelNext(TupleQueueFunnel *funnel, bool nowait, bool *done)
+static Datum
+TupleQueueRemapRange(TupleQueueReader *reader, Datum value)
 {
-	int			waitpos = funnel->nextqueue;
+	RangeType  *range = DatumGetRangeType(value);
+	Oid			typeid = RangeTypeGetOid(range);
+	RemapClass	remapclass;
+	TypeCacheEntry *typcache;
+	RangeBound	lower;
+	RangeBound	upper;
+	bool		empty;
+
+	/*
+	 * Extract the lower and upper bounds.  As in tqueueWalkRange, some
+	 * caching might be a good idea here.
+	 */
+	typcache = lookup_type_cache(typeid, TYPECACHE_RANGE_INFO);
+	if (typcache->rngelemtype == NULL)
+		elog(ERROR, "type %u is not a range type", typeid);
+	range_deserialize(typcache, range, &lower, &upper, &empty);
+
+	/* Nothing to do for an empty range. */
+	if (empty)
+		return value;
+
+	/*
+	 * If the range bounds don't need to be walked, we shouldn't have been
+	 * called in the first place: GetRemapClass should have returned
+	 * TQUEUE_REMAP_NONE when asked about this range type.
+	 */
+	remapclass = GetRemapClass(typeid);
+	Assert(remapclass != TQUEUE_REMAP_NONE);
+
+	/* Remap each bound, if present. */
+	if (!upper.infinite)
+		upper.val = TupleQueueRemap(reader, remapclass, upper.val);
+	if (!lower.infinite)
+		lower.val = TupleQueueRemap(reader, remapclass, lower.val);
+
+	/* And reserialize. */
+	range = range_serialize(typcache, &lower, &upper, empty);
+	return RangeTypeGetDatum(range);
+}
 
-	/* Corner case: called before adding any queues, or after all are gone. */
-	if (funnel->nqueues == 0)
+/*
+ * Remap a record.
+ */
+static Datum
+TupleQueueRemapRecord(TupleQueueReader *reader, Datum value)
+{
+	HeapTupleHeader tup;
+	Oid			typeid;
+	int			typmod;
+	RecordTypemodMap *mapent;
+	TupleDesc	tupledesc;
+	RemapInfo  *remapinfo;
+	HeapTupleData htup;
+	HeapTuple	atup;
+
+	/* Fetch type OID and typemod. */
+	tup = DatumGetHeapTupleHeader(value);
+	typeid = HeapTupleHeaderGetTypeId(tup);
+	typmod = HeapTupleHeaderGetTypMod(tup);
+
+	/* If transient record, replace remote typmod with local typmod. */
+	if (typeid == RECORDOID)
 	{
-		if (done != NULL)
-			*done = true;
-		return NULL;
+		Assert(reader->typmodmap != NULL);
+		mapent = hash_search(reader->typmodmap, &typmod,
+							 HASH_FIND, NULL);
+		if (mapent == NULL)
+			elog(ERROR, "found unrecognized remote typmod %d", typmod);
+		typmod = mapent->localtypmod;
 	}
 
-	if (done != NULL)
-		*done = false;
+	/*
+	 * Fetch tupledesc and compute remap info.  We should probably cache this
+	 * so that we don't have to keep recomputing it.
+	 */
+	tupledesc = lookup_rowtype_tupdesc(typeid, typmod);
+	remapinfo = BuildRemapInfo(tupledesc);
+	DecrTupleDescRefCount(tupledesc);
+
+	/* Remap tuple. */
+	ItemPointerSetInvalid(&htup.t_self);
+	htup.t_tableOid = InvalidOid;
+	htup.t_len = HeapTupleHeaderGetDatumLength(tup);
+	htup.t_data = tup;
+	atup = TupleQueueRemapTuple(reader, tupledesc, remapinfo, &htup);
+	HeapTupleHeaderSetTypeId(atup->t_data, typeid);
+	HeapTupleHeaderSetTypMod(atup->t_data, typmod);
+	HeapTupleHeaderSetDatumLength(atup->t_data, htup.t_len);
+
+	/* And return the results. */
+	return HeapTupleHeaderGetDatum(atup->t_data);
+}
 
-	for (;;)
+/*
+ * Handle a control message from the tuple queue reader.
+ *
+ * Control messages are sent when the remote side is sending tuples that
+ * contain transient record types.  We need to arrange to bless those
+ * record types locally and translate between remote and local typmods.
+ */
+static void
+TupleQueueHandleControlMessage(TupleQueueReader *reader, Size nbytes,
+							   char *data)
+{
+	int			natts;
+	int			remotetypmod;
+	bool		hasoid;
+	char	   *buf = data;
+	int			rc = 0;
+	int			i;
+	Form_pg_attribute *attrs;
+	MemoryContext oldcontext;
+	TupleDesc	tupledesc;
+	RecordTypemodMap *mapent;
+	bool		found;
+
+	/* Extract remote typmod. */
+	memcpy(&remotetypmod, &buf[rc], sizeof(int));
+	rc += sizeof(int);
+
+	/* Extract attribute count. */
+	memcpy(&natts, &buf[rc], sizeof(int));
+	rc += sizeof(int);
+
+	/* Extract hasoid flag. */
+	memcpy(&hasoid, &buf[rc], sizeof(bool));
+	rc += sizeof(bool);
+
+	/* Extract attribute details. */
+	oldcontext = MemoryContextSwitchTo(CurTransactionContext);
+	attrs = palloc(natts * sizeof(Form_pg_attribute));
+	for (i = 0; i < natts; ++i)
 	{
-		shm_mq_handle *mqh = funnel->queue[funnel->nextqueue];
-		shm_mq_result result;
-		Size		nbytes;
-		void	   *data;
+		attrs[i] = palloc(sizeof(FormData_pg_attribute));
+		memcpy(attrs[i], &buf[rc], sizeof(FormData_pg_attribute));
+		rc += sizeof(FormData_pg_attribute);
+	}
+	MemoryContextSwitchTo(oldcontext);
 
-		/* Attempt to read a message. */
-		result = shm_mq_receive(mqh, &nbytes, &data, true);
+	/* We should have read the whole message. */
+	Assert(rc == nbytes);
 
-		/*
-		 * Normally, we advance funnel->nextqueue to the next queue at this
-		 * point, but if we're pointing to a queue that we've just discovered
-		 * is detached, then forget that queue and leave the pointer where it
-		 * is until the number of remaining queues fall below that pointer and
-		 * at that point make the pointer point to the first queue.
-		 */
-		if (result != SHM_MQ_DETACHED)
-			funnel->nextqueue = (funnel->nextqueue + 1) % funnel->nqueues;
-		else
+	/* Construct TupleDesc. */
+	tupledesc = CreateTupleDesc(natts, hasoid, attrs);
+	tupledesc = BlessTupleDesc(tupledesc);
+
+	/* Create map if it doesn't exist already. */
+	if (reader->typmodmap == NULL)
+	{
+		HASHCTL		ctl;
+
+		ctl.keysize = sizeof(int);
+		ctl.entrysize = sizeof(RecordTypemodMap);
+		ctl.hcxt = CurTransactionContext;
+		reader->typmodmap = hash_create("typmodmap hashtable",
+										100, &ctl, HASH_ELEM | HASH_CONTEXT);
+	}
+
+	/* Create map entry. */
+	mapent = hash_search(reader->typmodmap, &remotetypmod, HASH_ENTER,
+						 &found);
+	if (found)
+		elog(ERROR, "duplicate message for typmod %d",
+			 remotetypmod);
+	mapent->localtypmod = tupledesc->tdtypmod;
+	elog(DEBUG3, "mapping remote typmod %d to local typmod %d",
+		 remotetypmod, tupledesc->tdtypmod);
+}
+
+/*
+ * Build a mapping indicating what remapping class applies to each attribute
+ * described by a tupledesc.
+ */
+static RemapInfo *
+BuildRemapInfo(TupleDesc tupledesc)
+{
+	RemapInfo  *remapinfo;
+	Size		size;
+	AttrNumber	i;
+	bool		noop = true;
+	StringInfoData buf;
+
+	initStringInfo(&buf);
+
+	size = offsetof(RemapInfo, mapping) +
+		sizeof(RemapClass) * tupledesc->natts;
+	remapinfo = MemoryContextAllocZero(TopMemoryContext, size);
+	remapinfo->natts = tupledesc->natts;
+	for (i = 0; i < tupledesc->natts; ++i)
+	{
+		Form_pg_attribute attr = tupledesc->attrs[i];
+
+		if (attr->attisdropped)
 		{
-			--funnel->nqueues;
-			if (funnel->nqueues == 0)
-			{
-				if (done != NULL)
-					*done = true;
-				return NULL;
-			}
+			remapinfo->mapping[i] = TQUEUE_REMAP_NONE;
+			continue;
+		}
 
-			memmove(&funnel->queue[funnel->nextqueue],
-					&funnel->queue[funnel->nextqueue + 1],
-					sizeof(shm_mq_handle *)
-					* (funnel->nqueues - funnel->nextqueue));
+		remapinfo->mapping[i] = GetRemapClass(attr->atttypid);
+		if (remapinfo->mapping[i] != TQUEUE_REMAP_NONE)
+			noop = false;
+	}
+
+	if (noop)
+	{
+		appendStringInfo(&buf, "noop");
+		pfree(remapinfo);
+		remapinfo = NULL;
+	}
+
+	return remapinfo;
+}
+
+/*
+ * Determine the remap class associated with a particular data type.
+ *
+ * Transient record types need to have the typmod applied on the sending side
+ * replaced with a value on the receiving side that has the same meaning.
+ *
+ * Arrays, range types, and all record types (including named composite types)
+ * need to be searched for transient record values buried within them.
+ * Surprisingly, a walker is required even when the indicated type is a
+ * composite type, because the actual value may be a compatible transient
+ * record type.
+ */
+static RemapClass
+GetRemapClass(Oid typeid)
+{
+	RemapClass	forceResult = TQUEUE_REMAP_NONE;
+	RemapClass	innerResult = TQUEUE_REMAP_NONE;
+
+	for (;;)
+	{
+		HeapTuple	tup;
+		Form_pg_type typ;
 
-			if (funnel->nextqueue >= funnel->nqueues)
-				funnel->nextqueue = 0;
+		/* Simple cases. */
+		if (typeid == RECORDOID)
+		{
+			innerResult = TQUEUE_REMAP_RECORD;
+			break;
+		}
+		if (typeid == RECORDARRAYOID)
+		{
+			innerResult = TQUEUE_REMAP_ARRAY;
+			break;
+		}
 
-			if (funnel->nextqueue < waitpos)
-				--waitpos;
+		/* Otherwise, we need a syscache lookup to figure it out. */
+		tup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(typeid));
+		if (!HeapTupleIsValid(tup))
+			elog(ERROR, "cache lookup failed for type %u", typeid);
+		typ = (Form_pg_type) GETSTRUCT(tup);
 
+		/* Look through domains to underlying base type. */
+		if (typ->typtype == TYPTYPE_DOMAIN)
+		{
+			typeid = typ->typbasetype;
+			ReleaseSysCache(tup);
 			continue;
 		}
 
-		/* If we got a message, return it. */
-		if (result == SHM_MQ_SUCCESS)
+		/*
+		 * Look through arrays to underlying base type, but the final return
+		 * value must be either TQUEUE_REMAP_ARRAY or TQUEUE_REMAP_NONE.  (If
+		 * this is an array of integers, for example, we don't need to walk
+		 * it.)
+		 */
+		if (OidIsValid(typ->typelem) && typ->typlen == -1)
 		{
-			HeapTupleData htup;
-
-			/*
-			 * The tuple data we just read from the queue is only valid until
-			 * we again attempt to read from it.  Copy the tuple into a single
-			 * palloc'd chunk as callers will expect.
-			 */
-			ItemPointerSetInvalid(&htup.t_self);
-			htup.t_tableOid = InvalidOid;
-			htup.t_len = nbytes;
-			htup.t_data = data;
-			return heap_copytuple(&htup);
+			typeid = typ->typelem;
+			ReleaseSysCache(tup);
+			if (forceResult == TQUEUE_REMAP_NONE)
+				forceResult = TQUEUE_REMAP_ARRAY;
+			continue;
 		}
 
 		/*
-		 * If we've visited all of the queues, then we should either give up
-		 * and return NULL (if we're in non-blocking mode) or wait for the
-		 * process latch to be set (otherwise).
+		 * Similarly, look through ranges to the underlying base type, but the
+		 * final return value must be either TQUEUE_REMAP_RANGE or
+		 * TQUEUE_REMAP_NONE.
 		 */
-		if (funnel->nextqueue == waitpos)
+		if (typ->typtype == TYPTYPE_RANGE)
 		{
-			if (nowait)
-				return NULL;
-			WaitLatch(MyLatch, WL_LATCH_SET, 0);
-			CHECK_FOR_INTERRUPTS();
-			ResetLatch(MyLatch);
+			ReleaseSysCache(tup);
+			if (forceResult == TQUEUE_REMAP_NONE)
+				forceResult = TQUEUE_REMAP_RANGE;
+			typeid = get_range_subtype(typeid);
+			continue;
 		}
+
+		/* Walk composite types.  Nothing else needs special handling. */
+		if (typ->typtype == TYPTYPE_COMPOSITE)
+			innerResult = TQUEUE_REMAP_RECORD;
+		ReleaseSysCache(tup);
+		break;
 	}
+
+	if (innerResult != TQUEUE_REMAP_NONE && forceResult != TQUEUE_REMAP_NONE)
+		return forceResult;
+	return innerResult;
 }
diff --git a/src/include/executor/tqueue.h b/src/include/executor/tqueue.h
index 6f8eb73..6a668fa 100644
--- a/src/include/executor/tqueue.h
+++ b/src/include/executor/tqueue.h
@@ -21,11 +21,11 @@
 extern DestReceiver *CreateTupleQueueDestReceiver(shm_mq_handle *handle);
 
 /* Use these to receive tuples from a shm_mq. */
-typedef struct TupleQueueFunnel TupleQueueFunnel;
-extern TupleQueueFunnel *CreateTupleQueueFunnel(void);
-extern void DestroyTupleQueueFunnel(TupleQueueFunnel *funnel);
-extern void RegisterTupleQueueOnFunnel(TupleQueueFunnel *, shm_mq_handle *);
-extern HeapTuple TupleQueueFunnelNext(TupleQueueFunnel *, bool nowait,
-					 bool *done);
+typedef struct TupleQueueReader TupleQueueReader;
+extern TupleQueueReader *CreateTupleQueueReader(shm_mq_handle *handle,
+					   TupleDesc tupledesc);
+extern void DestroyTupleQueueReader(TupleQueueReader *funnel);
+extern HeapTuple TupleQueueReaderNext(TupleQueueReader *,
+					 bool nowait, bool *done);
 
 #endif   /* TQUEUE_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 939bc0e..58ec889 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -1963,7 +1963,9 @@ typedef struct GatherState
 	PlanState	ps;				/* its first field is NodeTag */
 	bool		initialized;
 	struct ParallelExecutorInfo *pei;
-	struct TupleQueueFunnel *funnel;
+	int			nreaders;
+	int			nextreader;
+	struct TupleQueueReader **reader;
 	TupleTableSlot *funnel_slot;
 	bool		need_to_scan_locally;
 } GatherState;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index feb821b..03e1d2c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2018,7 +2018,7 @@ TupleHashEntry
 TupleHashEntryData
 TupleHashIterator
 TupleHashTable
-TupleQueueFunnel
+TupleQueueReader
 TupleTableSlot
 Tuplesortstate
 Tuplestorestate
-- 
2.3.8 (Apple Git-58)

#17Robert Haas
robertmhaas@gmail.com
In reply to: Robert Haas (#16)
Re: a raft of parallelism-related bug fixes

On Mon, Nov 2, 2015 at 9:29 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Wed, Oct 28, 2015 at 10:23 AM, Robert Haas <robertmhaas@gmail.com> wrote:

On Sun, Oct 18, 2015 at 12:17 AM, Robert Haas <robertmhaas@gmail.com> wrote:

So reviewing patch 13 isn't possible without prior knowledge.

The basic question for patch 13 is whether ephemeral record types can
occur in executor tuples in any contexts that I haven't identified. I
know that a tuple table slot can contain have a column that is of type
record or record[], and those records can themselves contain
attributes of type record or record[], and so on as far down as you
like. I *think* that's the only case. For example, I don't believe
that a TupleTableSlot can contain a *named* record type that has an
anonymous record buried down inside of it somehow. But I'm not
positive I'm right about that.

I have done some more testing and investigation and determined that
this optimism was unwarranted. It turns out that the type information
for composite and record types gets stored in two different places.
First, the TupleTableSlot has a type OID, indicating the sort of the
value it expects to be stored for that slot attribute. Second, the
value itself contains a type OID and typmod. And these don't have to
match. For example, consider this query:

select row_to_json(i) from int8_tbl i(x,y);

Without i(x,y), the HeapTuple passed to row_to_json is labelled with
the pg_type OID of int8_tbl. But with the query as written, it's
labeled as an anonymous record type. If I jigger things by hacking
the code so that this is planned as Gather (single-copy) -> SeqScan,
with row_to_json evaluated at the Gather node, then the sequential
scan kicks out a tuple with a transient record type and stores it into
a slot whose type OID is still that of int8_tbl. My previous patch
failed to deal with that; the attached one does.
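
To make the mismatch concrete, here is a tiny sketch - illustrative
only, not part of the attached patch, and the function name is
invented - showing how the slot's expected type and the labeling
embedded in the datum itself can be read side by side:

#include "postgres.h"
#include "access/htup_details.h"
#include "executor/tuptable.h"
#include "fmgr.h"

/* Assumes attnum refers to a composite-valued column of the slot. */
static void
show_composite_labeling(TupleTableSlot *slot, int attnum)
{
	Form_pg_attribute attr = slot->tts_tupleDescriptor->attrs[attnum - 1];
	bool		isnull;
	Datum		d = slot_getattr(slot, attnum, &isnull);

	if (!isnull)
	{
		HeapTupleHeader rec = DatumGetHeapTupleHeader(d);

		/* The slot's type OID and the datum's own OID/typmod can differ. */
		elog(DEBUG1, "slot expects type %u; datum is labeled %u, typmod %d",
			 attr->atttypid,
			 HeapTupleHeaderGetTypeId(rec),
			 HeapTupleHeaderGetTypMod(rec));
	}
}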

The previous patch was also defective in a few other respects. The
most significant of those, maybe, is that it somehow thought it was OK
to assume that transient typmods from all workers could be treated
interchangeably rather than individually. To fix this, I've changed
the TupleQueueFunnel implemented by tqueue.c to be merely a
TupleQueueReader which handles reading from a single worker only.
nodeGather.c therefore creates one TupleQueueReader per worker instead
of a single TupleQueueFunnel for all workers; accordingly, the logic
for multiplexing multiple queues now lives in nodeGather.c. This is
probably how I should have done it originally - someone, I think Jeff
Davis - complained previously that tqueue.c had no business embedding
the round-robin policy decision, and he was right. So this addresses
that complaint as well.
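
In case it helps to see the shape of it, the multiplexing loop that
moves into nodeGather.c looks roughly like the sketch below.  This is
illustrative only, not the committed code; everything other than the
tqueue.h functions from the attached patch is simplified or made up.

#include "postgres.h"
#include "access/htup.h"
#include "executor/tqueue.h"
#include "miscadmin.h"
#include "storage/latch.h"

/*
 * Pull the next tuple from an array of per-worker readers, advancing
 * round-robin and discarding readers whose queues have been detached.
 * Returns NULL once every reader is exhausted.
 */
static HeapTuple
gather_readnext(TupleQueueReader **reader, int *nreaders, int *nextreader)
{
	int			nvisited = 0;

	while (*nreaders > 0)
	{
		bool		readerdone = false;
		HeapTuple	tup;

		tup = TupleQueueReaderNext(reader[*nextreader], true, &readerdone);

		if (readerdone)
		{
			/* This worker is finished; forget its reader. */
			DestroyTupleQueueReader(reader[*nextreader]);
			memmove(&reader[*nextreader], &reader[*nextreader + 1],
					sizeof(TupleQueueReader *) *
					(*nreaders - *nextreader - 1));
			if (--(*nreaders) == 0)
				return NULL;
			if (*nextreader >= *nreaders)
				*nextreader = 0;
			continue;
		}

		if (tup != NULL)
			return tup;

		/* Nothing available here right now; try the next queue. */
		*nextreader = (*nextreader + 1) % *nreaders;
		if (++nvisited >= *nreaders)
		{
			/* Every queue was empty; sleep until more data arrives. */
			WaitLatch(MyLatch, WL_LATCH_SET, 0);
			ResetLatch(MyLatch);
			CHECK_FOR_INTERRUPTS();
			nvisited = 0;
		}
	}

	return NULL;
}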

Here is an updated version. This is rebased over recent commits, and
I added a missing check for attisdropped.

Committed.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#18Robert Haas
robertmhaas@gmail.com
In reply to: Robert Haas (#8)
3 attachment(s)
Re: a raft of parallelism-related bug fixes

On Mon, Oct 19, 2015 at 12:02 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Sat, Oct 17, 2015 at 9:16 PM, Andrew Dunstan <andrew@dunslane.net> wrote:

If all that is required is a #define, like CLOBBER_CACHE_ALWAYS, then no
special buildfarm support is required - you would just add that to the
animal's config file, more or less like this:

config_env =>
{
CPPFLAGS => '-DGRATUITOUSLY_PARALLEL',
},

I try to make things easy :-)

Wow, that's great. So, I'll try to rework the test code I posted
previously into something less hacky, and eventually add a #define
like this so we can run it on the buildfarm. There's a few other
things that need to get done before that really makes sense - like
getting the rest of the bug fix patches committed - otherwise any
buildfarm critters we add will just be permanently red.

OK, so after a bit more delay than I would have liked, I now have a
working set of patches that we can use to ensure automated testing of
the parallel mode infrastructure. I ended up doing something that
does not require a #define, so I'll need some guidance on what to do
on the BF side given that context. Please find attached three
patches, two of them for commit.

group-locking-v1.patch is a vastly improved version of the group
locking patch that we discussed, uh, extensively last year. I realize
that there was a lot of doubt about this approach, but I still believe
it's the right approach: I have put a lot of work into making it work
correctly; I don't think anyone has come up with a really plausible
alternative (except for one other approach I tried, which turned out
to work but with significantly more restrictions); and I'm committed
to fixing it in whatever way is necessary if it turns out to be
broken, even if that amounts to a full rewrite. Review is welcome,
but I honestly believe it's a good idea to get this into the tree
sooner rather than later at this point, because automated regression
testing falls to pieces without these changes, and I believe that
automated regression testing is a really good idea to shake out
whatever bugs we may have in the parallel query stuff. The code in
this patch is all mine, but Amit Kapila deserves credit as co-author
for doing a lot of prototyping (that ended up getting tossed) and
testing. This patch includes comments and an addition to
src/backend/storage/lmgr/README which explain in more detail what this
patch does, how it does it, and why that's OK.
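
For reference, the new API surface is small.  Condensed down from the
parallel.c hunks below (and glossing over where the leader's PGPROC
and PID actually get stashed, which the patch does during parallel DSM
setup), the handshake looks about like this:

	/* Leader, before registering any parallel workers: */
	BecomeLockGroupLeader();
	fps->parallel_master_pgproc = MyProc;		/* shared state */
	fps->parallel_master_pid = MyProcPid;

	/* Worker, before acquiring any heavyweight lock: */
	if (!BecomeLockGroupMember(fps->parallel_master_pgproc,
							   fps->parallel_master_pid))
		return;		/* leader already gone; exit quietly */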

force-parallel-mode-v1.patch is what adds the actual infrastructure
for automated testing. You can set force_parallel_mode=on to force
queries to be run in a worker whenever possible; this can help test
whether your user-defined functions have been erroneously labeled as
PARALLEL SAFE. If they error out or misbehave with this setting
enabled, you should label them PARALLEL RESTRICTED or PARALLEL UNSAFE.
If you set force_parallel_mode=regress, then some additional changes
intended specifically for regression testing kick in; those changes
are intended to ensure that you get exactly the same output from
running the regression tests with the parallelism infrastructure
forcibly enabled that you would have gotten anyway. Most of this code
is mine, but there are also contributions from Amit Kapila and Rushabh
Lathia.

With both of these patches, you can create a file that says:

force_parallel_mode=regress
max_parallel_degree=2

Then you can run: make check-world TEMP_CONFIG=/path/to/aforementioned/file

If you do, you'll find that while the core regression tests pass
(whee!), the pg_upgrade regression tests fail (oops) because of a
pre-existing bug in the parallelism code, introduced by neither of
these two patches. I'm not exactly sure how to fix that bug yet - I
have a couple of ideas - but I think the fact that this test code
promptly found a bug is a good sign that it provides enough test
coverage to be useful. Sticking a Gather node on top of every query
where it looks safe just turns out to exercise a lot of things: the
code that decides whether it's safe to put that Gather node there, the code
to launch and manage parallel workers, the code those workers
themselves run, etc. The point is just to force as much of the
parallel code to be used as possible even when it's not expected to
make anything faster.

test-group-locking-v1.patch is useful for testing possible deadlock
scenarios with the group locking patch. It's not otherwise safe to
use this, like, at all, and the patch is not proposed for commit.
This patch is entirely by Amit Kapila.

In addition to what's in these patches, I'd like to add a new chapter
to the documentation explaining which queries can be parallelized and
in what ways, what the restrictions are that keep parallel query from
getting used, and some high-level details of how parallelism "works"
in PostgreSQL from a user perspective. Things will obviously change
here as we get more capabilities, but I think we're at a point where
it makes sense to start putting this together. What I'm less clear
about is where exactly in the current SGML documentation such a new
chapter might fit; suggestions very welcome.

Thanks,

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachments:

group-locking-v1.patch (application/x-patch)
From fec950b2d1e1686defb950ce95763b107bd2f656 Mon Sep 17 00:00:00 2001
From: Robert Haas <rhaas@postgresql.org>
Date: Sat, 3 Oct 2015 13:34:35 -0400
Subject: [PATCH 1/3] Introduce group locking to prevent parallel processes
 from deadlocking.

For locking purposes, we now regard heavyweight locks as mutually
non-conflicting between cooperating parallel processes.  There are some
possible pitfalls to this approach that are not to be taken lightly,
but it works OK for now and can be changed later if we find a better
approach.  Without this, it's very easy for parallel queries to
silently self-deadlock if the user backend holds strong relation locks.

Robert Haas, with help from Amit Kapila.
---
 src/backend/access/transam/parallel.c |  16 ++
 src/backend/storage/lmgr/README       |  63 ++++++++
 src/backend/storage/lmgr/deadlock.c   | 279 +++++++++++++++++++++++++++-------
 src/backend/storage/lmgr/lock.c       | 122 ++++++++++++---
 src/backend/storage/lmgr/proc.c       | 158 ++++++++++++++++++-
 src/include/storage/lock.h            |  13 +-
 src/include/storage/proc.h            |  12 ++
 7 files changed, 587 insertions(+), 76 deletions(-)

diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c
index 8eea092..bf2e691 100644
--- a/src/backend/access/transam/parallel.c
+++ b/src/backend/access/transam/parallel.c
@@ -432,6 +432,9 @@ LaunchParallelWorkers(ParallelContext *pcxt)
 	if (pcxt->nworkers == 0)
 		return;
 
+	/* We need to be a lock group leader. */
+	BecomeLockGroupLeader();
+
 	/* If we do have workers, we'd better have a DSM segment. */
 	Assert(pcxt->seg != NULL);
 
@@ -952,6 +955,19 @@ ParallelWorkerMain(Datum main_arg)
 	 */
 
 	/*
+	 * Join locking group.  We must do this before anything that could try
+	 * to acquire a heavyweight lock, because any heavyweight locks acquired
+	 * to this point could block either directly against the parallel group
+	 * leader or against some process which in turn waits for a lock that
+	 * conflicts with the parallel group leader, causing an undetected
+	 * deadlock.  (If we can't join the lock group, the leader has gone away,
+	 * so just exit quietly.)
+	 */
+	if (!BecomeLockGroupMember(fps->parallel_master_pgproc,
+							   fps->parallel_master_pid))
+		return;
+
+	/*
 	 * Load libraries that were loaded by original backend.  We want to do
 	 * this before restoring GUCs, because the libraries might define custom
 	 * variables.
diff --git a/src/backend/storage/lmgr/README b/src/backend/storage/lmgr/README
index 8898e25..cb9c7d6 100644
--- a/src/backend/storage/lmgr/README
+++ b/src/backend/storage/lmgr/README
@@ -586,6 +586,69 @@ The caller can then send a cancellation signal.  This implements the
 principle that autovacuum has a low locking priority (eg it must not block
 DDL on the table).
 
+Group Locking
+-------------
+
+As if all of that weren't already complicated enough, PostgreSQL now supports
+parallelism (see src/backend/access/transam/README.parallel), which means that
+we might need to resolve deadlocks that occur between gangs of related processes
+rather than individual processes.  This doesn't change the basic deadlock
+detection algorithm very much, but it makes the bookkeeping more complicated.
+
+We choose to regard locks held by processes in the same parallel group as
+non-conflicting.  This means that two processes in a parallel group can hold
+a self-exclusive lock on the same relation at the same time, or one process
+can acquire an AccessShareLock while the other already holds AccessExclusiveLock.
+This might seem dangerous and could be in some cases (more on that below), but
+if we didn't do this then parallel query would be extremely prone to
+self-deadlock.  For example, a parallel query against a relation on which the
+leader already had AccessExclusiveLock would hang, because the workers would
+try to lock the same relation and be blocked by the leader; yet the leader can't
+finish until it receives completion indications from all workers.  An undetected
+deadlock results.  This is far from the only scenario where such a problem
+happens.  The same thing will occur if the leader holds only AccessShareLock,
+the worker seeks AccessShareLock, but between the time the leader attempts to
+acquire the lock and the time the worker attempts to acquire it, some other
+process queues up waiting for an AccessExclusiveLock.  In this case, too, an
+indefinite hang results.
+
+It might seem that we could predict which locks the workers will attempt to
+acquire and ensure before going parallel that those locks would be acquired
+successfully.  But this is very difficult to make work in a general way.  For
+example, a parallel worker's portion of the query plan could involve an
+SQL-callable function which generates a query dynamically, and that query
+might happen to hit a table on which the leader happens to hold
+AccessExclusiveLock.  By imposing enough restrictions on what workers can do,
+we could eventually create a situation where their behavior can be adequately
+restricted, but these restrictions would be fairly onerous, and even then, the
+system required to decide whether the workers will succeed at acquiring the
+necessary locks would be complex and possibly buggy.
+
+So, instead, we take the approach of deciding that locks within a lock group
+do not conflict.  This eliminates the possibility of an undetected deadlock,
+but also opens up some problem cases: if the leader and worker try to do some
+operation at the same time which would ordinarily be prevented by the heavyweight
+lock mechanism, undefined behavior might result.  In practice, the dangers are
+modest.  The leader and worker share the same transaction, snapshot, and combo
+CID hash, and neither can perform any DDL or, indeed, write any data at all.
+Thus, for either to read a table locked exclusively by the other is safe enough.
+Problems would occur if the leader initiated parallelism from a point in the
+code at which it had some backend-private state that made table access from
+another process unsafe; for example, after calling SetReindexProcessing and
+before calling ResetReindexProcessing, catastrophe could ensue, because the
+worker won't have that state.  Similarly, problems could occur with certain
+kinds of non-relation locks, such as relation extension locks.  It's no safer
+for two related processes to extend the same relation at the same time than for
+unrelated processes to do the same.  However, since parallel mode is strictly
+read-only at present, neither this nor most of the similar cases can arise at
+present.  To allow parallel writes, we'll either need to (1) further enhance
+the deadlock detector to handle those types of locks in a different way than
+other types; or (2) have parallel workers use some other mutual exclusion
+method for such cases; or (3) revise those cases so that they no longer use
+heavyweight locking in the first place (which is not a crazy idea, given that
+such lock acquisitions are not expected to deadlock and that heavyweight lock
+acquisition is fairly slow anyway).
+
 User Locks (Advisory Locks)
 ---------------------------
 
diff --git a/src/backend/storage/lmgr/deadlock.c b/src/backend/storage/lmgr/deadlock.c
index a68aaf6..69f678b 100644
--- a/src/backend/storage/lmgr/deadlock.c
+++ b/src/backend/storage/lmgr/deadlock.c
@@ -38,6 +38,7 @@ typedef struct
 {
 	PGPROC	   *waiter;			/* the waiting process */
 	PGPROC	   *blocker;		/* the process it is waiting for */
+	LOCK	   *lock;			/* the lock it is waiting for */
 	int			pred;			/* workspace for TopoSort */
 	int			link;			/* workspace for TopoSort */
 } EDGE;
@@ -72,6 +73,9 @@ static bool FindLockCycle(PGPROC *checkProc,
 			  EDGE *softEdges, int *nSoftEdges);
 static bool FindLockCycleRecurse(PGPROC *checkProc, int depth,
 					 EDGE *softEdges, int *nSoftEdges);
+static bool FindLockCycleRecurseMember(PGPROC *checkProc,
+						   PGPROC *checkProcLeader,
+						   int depth, EDGE *softEdges, int *nSoftEdges);
 static bool ExpandConstraints(EDGE *constraints, int nConstraints);
 static bool TopoSort(LOCK *lock, EDGE *constraints, int nConstraints,
 		 PGPROC **ordering);
@@ -449,18 +453,15 @@ FindLockCycleRecurse(PGPROC *checkProc,
 					 EDGE *softEdges,	/* output argument */
 					 int *nSoftEdges)	/* output argument */
 {
-	PGPROC	   *proc;
-	PGXACT	   *pgxact;
-	LOCK	   *lock;
-	PROCLOCK   *proclock;
-	SHM_QUEUE  *procLocks;
-	LockMethod	lockMethodTable;
-	PROC_QUEUE *waitQueue;
-	int			queue_size;
-	int			conflictMask;
 	int			i;
-	int			numLockModes,
-				lm;
+	dlist_iter	iter;
+
+	/*
+	 * If this process is a lock group member, check the leader instead. (Note
+	 * that we might be the leader, in which case this is a no-op.)
+	 */
+	if (checkProc->lockGroupLeader != NULL)
+		checkProc = checkProc->lockGroupLeader;
 
 	/*
 	 * Have we already seen this proc?
@@ -494,13 +495,57 @@ FindLockCycleRecurse(PGPROC *checkProc,
 	visitedProcs[nVisitedProcs++] = checkProc;
 
 	/*
-	 * If the proc is not waiting, we have no outgoing waits-for edges.
+	 * If the process is waiting, there is an outgoing waits-for edge to each
+	 * process that blocks it.
+	 */
+	if (checkProc->links.next != NULL && checkProc->waitLock != NULL &&
+		FindLockCycleRecurseMember(checkProc, checkProc, depth, softEdges,
+								   nSoftEdges))
+		return true;
+
+	/*
+	 * If the process is not waiting, there could still be outgoing waits-for
+	 * edges if it is part of a lock group, because other members of the lock
+	 * group might be waiting even though this process is not.  (Given lock
+	 * groups {A1, A2} and {B1, B2}, if A1 waits for B1 and B2 waits for A2,
+	 * that is a deadlock even if neither B1 nor A2 is waiting for anything.)
 	 */
-	if (checkProc->links.next == NULL)
-		return false;
-	lock = checkProc->waitLock;
-	if (lock == NULL)
-		return false;
+	dlist_foreach(iter, &checkProc->lockGroupMembers)
+	{
+		PGPROC	   *memberProc;
+
+		memberProc = dlist_container(PGPROC, lockGroupLink, iter.cur);
+
+		if (memberProc->links.next != NULL && memberProc->waitLock != NULL &&
+			memberProc != checkProc &&
+		  FindLockCycleRecurseMember(memberProc, checkProc, depth, softEdges,
+									 nSoftEdges))
+			return true;
+	}
+
+	return false;
+}
+
+static bool
+FindLockCycleRecurseMember(PGPROC *checkProc,
+						   PGPROC *checkProcLeader,
+						   int depth,
+						   EDGE *softEdges,		/* output argument */
+						   int *nSoftEdges)		/* output argument */
+{
+	PGPROC	   *proc;
+	LOCK	   *lock = checkProc->waitLock;
+	PGXACT	   *pgxact;
+	PROCLOCK   *proclock;
+	SHM_QUEUE  *procLocks;
+	LockMethod	lockMethodTable;
+	PROC_QUEUE *waitQueue;
+	int			queue_size;
+	int			conflictMask;
+	int			i;
+	int			numLockModes,
+				lm;
+
 	lockMethodTable = GetLocksMethodTable(lock);
 	numLockModes = lockMethodTable->numLockModes;
 	conflictMask = lockMethodTable->conflictTab[checkProc->waitLockMode];
@@ -516,11 +561,14 @@ FindLockCycleRecurse(PGPROC *checkProc,
 
 	while (proclock)
 	{
+		PGPROC	   *leader;
+
 		proc = proclock->tag.myProc;
 		pgxact = &ProcGlobal->allPgXact[proc->pgprocno];
+		leader = proc->lockGroupLeader == NULL ? proc : proc->lockGroupLeader;
 
-		/* A proc never blocks itself */
-		if (proc != checkProc)
+		/* A proc never blocks itself or any other lock group member */
+		if (leader != checkProcLeader)
 		{
 			for (lm = 1; lm <= numLockModes; lm++)
 			{
@@ -601,10 +649,20 @@ FindLockCycleRecurse(PGPROC *checkProc,
 
 		for (i = 0; i < queue_size; i++)
 		{
+			PGPROC	   *leader;
+
 			proc = procs[i];
+			leader = proc->lockGroupLeader == NULL ? proc :
+				proc->lockGroupLeader;
 
-			/* Done when we reach the target proc */
-			if (proc == checkProc)
+			/*
+			 * TopoSort will always return an ordering with group members
+			 * adjacent to each other in the wait queue (see comments
+			 * therein). So, as soon as we reach a process in the same lock
+			 * group as checkProc, we know we've found all the conflicts that
+			 * precede any member of the lock group led by checkProcLeader.
+			 */
+			if (leader == checkProcLeader)
 				break;
 
 			/* Is there a conflict with this guy's request? */
@@ -625,8 +683,9 @@ FindLockCycleRecurse(PGPROC *checkProc,
 					 * Add this edge to the list of soft edges in the cycle
 					 */
 					Assert(*nSoftEdges < MaxBackends);
-					softEdges[*nSoftEdges].waiter = checkProc;
-					softEdges[*nSoftEdges].blocker = proc;
+					softEdges[*nSoftEdges].waiter = checkProcLeader;
+					softEdges[*nSoftEdges].blocker = leader;
+					softEdges[*nSoftEdges].lock = lock;
 					(*nSoftEdges)++;
 					return true;
 				}
@@ -635,20 +694,52 @@ FindLockCycleRecurse(PGPROC *checkProc,
 	}
 	else
 	{
+		PGPROC	   *lastGroupMember = NULL;
+
 		/* Use the true lock wait queue order */
 		waitQueue = &(lock->waitProcs);
-		queue_size = waitQueue->size;
 
-		proc = (PGPROC *) waitQueue->links.next;
+		/*
+		 * Find the last member of the lock group that is present in the wait
+		 * queue.  Anything after this is not a soft lock conflict. If group
+		 * locking is not in use, then we know immediately which process we're
+		 * looking for, but otherwise we've got to search the wait queue to
+		 * find the last process actually present.
+		 */
+		if (checkProc->lockGroupLeader == NULL)
+			lastGroupMember = checkProc;
+		else
+		{
+			proc = (PGPROC *) waitQueue->links.next;
+			queue_size = waitQueue->size;
+			while (queue_size-- > 0)
+			{
+				if (proc->lockGroupLeader == checkProcLeader)
+					lastGroupMember = proc;
+				proc = (PGPROC *) proc->links.next;
+			}
+			Assert(lastGroupMember != NULL);
+		}
 
+		/*
+		 * OK, now rescan (or scan) the queue to identify the soft conflicts.
+		 */
+		queue_size = waitQueue->size;
+		proc = (PGPROC *) waitQueue->links.next;
 		while (queue_size-- > 0)
 		{
+			PGPROC	   *leader;
+
+			leader = proc->lockGroupLeader == NULL ? proc :
+				proc->lockGroupLeader;
+
 			/* Done when we reach the target proc */
-			if (proc == checkProc)
+			if (proc == lastGroupMember)
 				break;
 
 			/* Is there a conflict with this guy's request? */
-			if ((LOCKBIT_ON(proc->waitLockMode) & conflictMask) != 0)
+			if ((LOCKBIT_ON(proc->waitLockMode) & conflictMask) != 0 &&
+				leader != checkProcLeader)
 			{
 				/* This proc soft-blocks checkProc */
 				if (FindLockCycleRecurse(proc, depth + 1,
@@ -665,8 +756,9 @@ FindLockCycleRecurse(PGPROC *checkProc,
 					 * Add this edge to the list of soft edges in the cycle
 					 */
 					Assert(*nSoftEdges < MaxBackends);
-					softEdges[*nSoftEdges].waiter = checkProc;
-					softEdges[*nSoftEdges].blocker = proc;
+					softEdges[*nSoftEdges].waiter = checkProcLeader;
+					softEdges[*nSoftEdges].blocker = leader;
+					softEdges[*nSoftEdges].lock = lock;
 					(*nSoftEdges)++;
 					return true;
 				}
@@ -711,8 +803,7 @@ ExpandConstraints(EDGE *constraints,
 	 */
 	for (i = nConstraints; --i >= 0;)
 	{
-		PGPROC	   *proc = constraints[i].waiter;
-		LOCK	   *lock = proc->waitLock;
+		LOCK	   *lock = constraints[i].lock;
 
 		/* Did we already make a list for this lock? */
 		for (j = nWaitOrders; --j >= 0;)
@@ -778,7 +869,9 @@ TopoSort(LOCK *lock,
 	PGPROC	   *proc;
 	int			i,
 				j,
+				jj,
 				k,
+				kk,
 				last;
 
 	/* First, fill topoProcs[] array with the procs in their current order */
@@ -798,41 +891,95 @@ TopoSort(LOCK *lock,
 	 * stores its list link in constraints[i].link (note any constraint will
 	 * be in just one list). The array index for the before-proc of the i'th
 	 * constraint is remembered in constraints[i].pred.
+	 *
+	 * Note that it's not necessarily the case that every constraint affects
+	 * this particular wait queue.  Prior to group locking, a process could be
+	 * waiting for at most one lock.  But a lock group can be waiting for
+	 * zero, one, or multiple locks.  Since topoProcs[] is an array of the
+	 * processes actually waiting, while constraints[] is an array of group
+	 * leaders, we've got to scan through topoProcs[] for each constraint,
+	 * checking whether both a waiter and a blocker for that group are
+	 * present.  If so, the constraint is relevant to this wait queue; if not,
+	 * it isn't.
 	 */
 	MemSet(beforeConstraints, 0, queue_size * sizeof(int));
 	MemSet(afterConstraints, 0, queue_size * sizeof(int));
 	for (i = 0; i < nConstraints; i++)
 	{
+		/*
+		 * Find a representative process that is on the lock queue and part of
+		 * the waiting lock group.  This may or may not be the leader, which
+		 * may or may not be waiting at all.  If there are any other processes
+		 * in the same lock group on the queue, set their number of
+		 * beforeConstraints to -1 to indicate that they should be emitted
+		 * with their groupmates rather than considered separately.
+		 */
 		proc = constraints[i].waiter;
-		/* Ignore constraint if not for this lock */
-		if (proc->waitLock != lock)
-			continue;
-		/* Find the waiter proc in the array */
+		Assert(proc != NULL);
+		jj = -1;
 		for (j = queue_size; --j >= 0;)
 		{
-			if (topoProcs[j] == proc)
+			PGPROC	   *waiter = topoProcs[j];
+
+			if (waiter == proc || waiter->lockGroupLeader == proc)
+			{
+				Assert(waiter->waitLock == lock);
+				if (jj == -1)
+					jj = j;
+				else
+				{
+					Assert(beforeConstraints[j] <= 0);
+					beforeConstraints[j] = -1;
+				}
 				break;
+			}
 		}
-		Assert(j >= 0);			/* should have found a match */
-		/* Find the blocker proc in the array */
+
+		/* If no matching waiter, constraint is not relevant to this lock. */
+		if (jj < 0)
+			continue;
+
+		/*
+		 * Similarly, find a representative process that is on the lock queue
+		 * and waiting for the blocking lock group.  Again, this could be the
+		 * leader but does not need to be.
+		 */
 		proc = constraints[i].blocker;
+		Assert(proc != NULL);
+		kk = -1;
 		for (k = queue_size; --k >= 0;)
 		{
-			if (topoProcs[k] == proc)
-				break;
+			PGPROC	   *blocker = topoProcs[k];
+
+			if (blocker == proc || blocker->lockGroupLeader == proc)
+			{
+				Assert(blocker->waitLock == lock);
+				if (kk == -1)
+					kk = k;
+				else
+				{
+					Assert(beforeConstraints[k] <= 0);
+					beforeConstraints[k] = -1;
+				}
+			}
 		}
-		Assert(k >= 0);			/* should have found a match */
-		beforeConstraints[j]++; /* waiter must come before */
+
+		/* If no matching blocker, constraint is not relevant to this lock. */
+		if (kk < 0)
+			continue;
+
+		beforeConstraints[jj]++;	/* waiter must come before */
 		/* add this constraint to list of after-constraints for blocker */
-		constraints[i].pred = j;
-		constraints[i].link = afterConstraints[k];
-		afterConstraints[k] = i + 1;
+		constraints[i].pred = jj;
+		constraints[i].link = afterConstraints[kk];
+		afterConstraints[kk] = i + 1;
 	}
+
 	/*--------------------
 	 * Now scan the topoProcs array backwards.  At each step, output the
-	 * last proc that has no remaining before-constraints, and decrease
-	 * the beforeConstraints count of each of the procs it was constrained
-	 * against.
+	 * last proc that has no remaining before-constraints plus any other
+	 * members of the same lock group; then decrease the beforeConstraints
+	 * count of each of the procs it was constrained against.
 	 * i = index of ordering[] entry we want to output this time
 	 * j = search index for topoProcs[]
 	 * k = temp for scanning constraint list for proc j
@@ -840,8 +987,11 @@ TopoSort(LOCK *lock,
 	 *--------------------
 	 */
 	last = queue_size - 1;
-	for (i = queue_size; --i >= 0;)
+	for (i = queue_size - 1; i >= 0;)
 	{
+		int			c;
+		int			nmatches = 0;
+
 		/* Find next candidate to output */
 		while (topoProcs[last] == NULL)
 			last--;
@@ -850,12 +1000,37 @@ TopoSort(LOCK *lock,
 			if (topoProcs[j] != NULL && beforeConstraints[j] == 0)
 				break;
 		}
+
 		/* If no available candidate, topological sort fails */
 		if (j < 0)
 			return false;
-		/* Output candidate, and mark it done by zeroing topoProcs[] entry */
-		ordering[i] = topoProcs[j];
-		topoProcs[j] = NULL;
+
+		/*
+		 * Output everything in the lock group.  There's no point in outputting
+		 * an ordering where members of the same lock group are not
+		 * consecutive on the wait queue: if some other waiter is between two
+		 * requests that belong to the same group, then either it conflicts
+		 * with both of them and is certainly not a solution; or it conflicts
+		 * with at most one of them and is thus isomorphic to an ordering
+		 * where the group members are consecutive.
+		 */
+		proc = topoProcs[j];
+		if (proc->lockGroupLeader != NULL)
+			proc = proc->lockGroupLeader;
+		Assert(proc != NULL);
+		for (c = 0; c <= last; ++c)
+		{
+			if (topoProcs[c] == proc || (topoProcs[c] != NULL &&
+									  topoProcs[c]->lockGroupLeader == proc))
+			{
+				ordering[i - nmatches] = topoProcs[c];
+				topoProcs[c] = NULL;
+				++nmatches;
+			}
+		}
+		Assert(nmatches > 0);
+		i -= nmatches;
+
 		/* Update beforeConstraints counts of its predecessors */
 		for (k = afterConstraints[j]; k > 0; k = constraints[k - 1].link)
 			beforeConstraints[constraints[k - 1].pred]--;
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index 269fe14..e3e9599 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -35,6 +35,7 @@
 #include "access/transam.h"
 #include "access/twophase.h"
 #include "access/twophase_rmgr.h"
+#include "access/xact.h"
 #include "access/xlog.h"
 #include "miscadmin.h"
 #include "pg_trace.h"
@@ -1136,6 +1137,18 @@ SetupLockInTable(LockMethod lockMethodTable, PGPROC *proc,
 	{
 		uint32		partition = LockHashPartition(hashcode);
 
+		/*
+		 * It might seem unsafe to access proclock->groupLeader without a lock,
+		 * but it's not really.  Either we are initializing a proclock on our
+		 * own behalf, in which case our group leader isn't changing because
+		 * the group leader for a process can only ever be changed by the
+		 * process itself; or else we are transferring a fast-path lock to the
+		 * main lock table, in which case that process can't change its lock
+		 * group leader without first releasing all of its locks (and in
+		 * particular the one we are currently transferring).
+		 */
+		proclock->groupLeader = proc->lockGroupLeader != NULL ?
+			proc->lockGroupLeader : proc;
 		proclock->holdMask = 0;
 		proclock->releaseMask = 0;
 		/* Add proclock to appropriate lists */
@@ -1255,9 +1268,10 @@ RemoveLocalLock(LOCALLOCK *locallock)
  * NOTES:
  *		Here's what makes this complicated: one process's locks don't
  * conflict with one another, no matter what purpose they are held for
- * (eg, session and transaction locks do not conflict).
- * So, we must subtract off our own locks when determining whether the
- * requested new lock conflicts with those already held.
+ * (eg, session and transaction locks do not conflict).  Nor do the locks
+ * of one process in a lock group conflict with those of another process in
+ * the same group.  So, we must subtract off these locks when determining
+ * whether the requested new lock conflicts with those already held.
  */
 int
 LockCheckConflicts(LockMethod lockMethodTable,
@@ -1267,8 +1281,12 @@ LockCheckConflicts(LockMethod lockMethodTable,
 {
 	int			numLockModes = lockMethodTable->numLockModes;
 	LOCKMASK	myLocks;
-	LOCKMASK	otherLocks;
+	int			conflictMask = lockMethodTable->conflictTab[lockmode];
+	int			conflictsRemaining[MAX_LOCKMODES];
+	int			totalConflictsRemaining = 0;
 	int			i;
+	SHM_QUEUE  *procLocks;
+	PROCLOCK   *otherproclock;
 
 	/*
 	 * first check for global conflicts: If no locks conflict with my request,
@@ -1279,40 +1297,91 @@ LockCheckConflicts(LockMethod lockMethodTable,
 	 * type of lock that conflicts with request.   Bitwise compare tells if
 	 * there is a conflict.
 	 */
-	if (!(lockMethodTable->conflictTab[lockmode] & lock->grantMask))
+	if (!(conflictMask & lock->grantMask))
 	{
 		PROCLOCK_PRINT("LockCheckConflicts: no conflict", proclock);
 		return STATUS_OK;
 	}
 
 	/*
-	 * Rats.  Something conflicts.  But it could still be my own lock. We have
-	 * to construct a conflict mask that does not reflect our own locks, but
-	 * only lock types held by other processes.
+	 * Rats.  Something conflicts.  But it could still be my own lock, or
+	 * a lock held by another member of my locking group.  First, figure out
+	 * how many conflicts remain after subtracting out any locks I hold
+	 * myself.
 	 */
 	myLocks = proclock->holdMask;
-	otherLocks = 0;
 	for (i = 1; i <= numLockModes; i++)
 	{
-		int			myHolding = (myLocks & LOCKBIT_ON(i)) ? 1 : 0;
+		if ((conflictMask & LOCKBIT_ON(i)) == 0)
+		{
+			conflictsRemaining[i] = 0;
+			continue;
+		}
+		conflictsRemaining[i] = lock->granted[i];
+		if (myLocks & LOCKBIT_ON(i))
+			--conflictsRemaining[i];
+		totalConflictsRemaining += conflictsRemaining[i];
+	}
 
-		if (lock->granted[i] > myHolding)
-			otherLocks |= LOCKBIT_ON(i);
+	/* If no conflicts remain, we get the lock. */
+	if (totalConflictsRemaining == 0)
+	{
+		PROCLOCK_PRINT("LockCheckConflicts: resolved (simple)", proclock);
+		return STATUS_OK;
+	}
+
+	/* If no group locking, it's definitely a conflict. */
+	if (proclock->groupLeader == MyProc && MyProc->lockGroupLeader == NULL)
+	{
+		Assert(proclock->tag.myProc == MyProc);
+		PROCLOCK_PRINT("LockCheckConflicts: conflicting (simple)",
+					   proclock);
+		return STATUS_FOUND;
 	}
 
 	/*
-	 * now check again for conflicts.  'otherLocks' describes the types of
-	 * locks held by other processes.  If one of these conflicts with the kind
-	 * of lock that I want, there is a conflict and I have to sleep.
+	 * Locks held in conflicting modes by members of our own lock group are
+	 * not real conflicts; we can subtract those out and see if we still have
+	 * a conflict.  This is O(N) in the number of processes holding or awaiting
+	 * locks on this object.  We could improve that by making the shared memory
+	 * state more complex (and larger) but it doesn't seem worth it.
 	 */
-	if (!(lockMethodTable->conflictTab[lockmode] & otherLocks))
+	procLocks = &(lock->procLocks);
+	otherproclock = (PROCLOCK *)
+		SHMQueueNext(procLocks, procLocks, offsetof(PROCLOCK, lockLink));
+	while (otherproclock != NULL)
 	{
-		/* no conflict. OK to get the lock */
-		PROCLOCK_PRINT("LockCheckConflicts: resolved", proclock);
-		return STATUS_OK;
+		if (proclock != otherproclock &&
+			proclock->groupLeader == otherproclock->groupLeader &&
+			(otherproclock->holdMask & conflictMask) != 0)
+		{
+			int	intersectMask = otherproclock->holdMask & conflictMask;
+
+			for (i = 1; i <= numLockModes; i++)
+			{
+				if ((intersectMask & LOCKBIT_ON(i)) != 0)
+				{
+					if (conflictsRemaining[i] <= 0)
+						elog(PANIC, "proclocks held do not match lock");
+					conflictsRemaining[i]--;
+					totalConflictsRemaining--;
+				}
+			}
+
+			if (totalConflictsRemaining == 0)
+			{
+				PROCLOCK_PRINT("LockCheckConflicts: resolved (group)",
+							   proclock);
+				return STATUS_OK;
+			}
+		}
+		otherproclock = (PROCLOCK *)
+			SHMQueueNext(procLocks, &otherproclock->lockLink,
+						 offsetof(PROCLOCK, lockLink));
 	}
 
-	PROCLOCK_PRINT("LockCheckConflicts: conflicting", proclock);
+	/* Nope, it's a real conflict. */
+	PROCLOCK_PRINT("LockCheckConflicts: conflicting (group)", proclock);
 	return STATUS_FOUND;
 }
 
@@ -3095,6 +3164,10 @@ PostPrepare_Locks(TransactionId xid)
 	PROCLOCKTAG proclocktag;
 	int			partition;
 
+	/* Can't prepare a lock group follower. */
+	Assert(MyProc->lockGroupLeader == NULL ||
+		   MyProc->lockGroupLeader == MyProc);
+
 	/* This is a critical section: any error means big trouble */
 	START_CRIT_SECTION();
 
@@ -3239,6 +3312,13 @@ PostPrepare_Locks(TransactionId xid)
 			proclocktag.myProc = newproc;
 
 			/*
+			 * Update groupLeader pointer to point to the new proc.  (We'd
+			 * better not be a member of somebody else's lock group!)
+			 */
+			Assert(proclock->groupLeader == proclock->tag.myProc);
+			proclock->groupLeader = newproc;
+
+			/*
 			 * Update the proclock.  We should not find any existing entry for
 			 * the same hash key, since there can be only one entry for any
 			 * given lock with my own proc.
@@ -3785,6 +3865,8 @@ lock_twophase_recover(TransactionId xid, uint16 info,
 	 */
 	if (!found)
 	{
+		Assert(proc->lockGroupLeader == NULL);
+		proclock->groupLeader = proc;
 		proclock->holdMask = 0;
 		proclock->releaseMask = 0;
 		/* Add proclock to appropriate lists */
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 3690753..084be5a 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -263,6 +263,9 @@ InitProcGlobal(void)
 		/* Initialize myProcLocks[] shared memory queues. */
 		for (j = 0; j < NUM_LOCK_PARTITIONS; j++)
 			SHMQueueInit(&(procs[i].myProcLocks[j]));
+
+		/* Initialize lockGroupMembers list. */
+		dlist_init(&procs[i].lockGroupMembers);
 	}
 
 	/*
@@ -397,6 +400,11 @@ InitProcess(void)
 	MyProc->backendLatestXid = InvalidTransactionId;
 	pg_atomic_init_u32(&MyProc->nextClearXidElem, INVALID_PGPROCNO);
 
+	/* Check that group locking fields are in a proper initial state. */
+	Assert(MyProc->lockGroupLeaderIdentifier == 0);
+	Assert(MyProc->lockGroupLeader == NULL);
+	Assert(dlist_is_empty(&MyProc->lockGroupMembers));
+
 	/*
 	 * Acquire ownership of the PGPROC's latch, so that we can use WaitLatch
 	 * on it.  That allows us to repoint the process latch, which so far
@@ -556,6 +564,11 @@ InitAuxiliaryProcess(void)
 	OwnLatch(&MyProc->procLatch);
 	SwitchToSharedLatch();
 
+	/* Check that group locking fields are in a proper initial state. */
+	Assert(MyProc->lockGroupLeaderIdentifier == 0);
+	Assert(MyProc->lockGroupLeader == NULL);
+	Assert(dlist_is_empty(&MyProc->lockGroupMembers));
+
 	/*
 	 * We might be reusing a semaphore that belonged to a failed process. So
 	 * be careful and reinitialize its value here.  (This is not strictly
@@ -794,6 +807,40 @@ ProcKill(int code, Datum arg)
 		ReplicationSlotRelease();
 
 	/*
+	 * Detach from any lock group of which we are a member.  If the leader
+	 * exits before all other group members, its PGPROC will remain allocated
+	 * until the last group process exits; that process must return the
+	 * leader's PGPROC to the appropriate list.
+	 */
+	if (MyProc->lockGroupLeader != NULL)
+	{
+		PGPROC	   *leader = MyProc->lockGroupLeader;
+		LWLock	   *leader_lwlock = LockHashPartitionLockByProc(leader);
+
+		LWLockAcquire(leader_lwlock, LW_EXCLUSIVE);
+		Assert(!dlist_is_empty(&leader->lockGroupMembers));
+		dlist_delete(&MyProc->lockGroupLink);
+		if (dlist_is_empty(&leader->lockGroupMembers))
+		{
+			leader->lockGroupLeaderIdentifier = 0;
+			leader->lockGroupLeader = NULL;
+			if (leader != MyProc)
+			{
+				procgloballist = leader->procgloballist;
+
+				/* Leader exited first; return its PGPROC. */
+				SpinLockAcquire(ProcStructLock);
+				leader->links.next = (SHM_QUEUE *) *procgloballist;
+				*procgloballist = leader;
+				SpinLockRelease(ProcStructLock);
+			}
+		}
+		else if (leader != MyProc)
+			MyProc->lockGroupLeader = NULL;
+		LWLockRelease(leader_lwlock);
+	}
+
+	/*
 	 * Reset MyLatch to the process local one.  This is so that signal
 	 * handlers et al can continue using the latch after the shared latch
 	 * isn't ours anymore. After that clear MyProc and disown the shared
@@ -807,9 +854,20 @@ ProcKill(int code, Datum arg)
 	procgloballist = proc->procgloballist;
 	SpinLockAcquire(ProcStructLock);
 
-	/* Return PGPROC structure (and semaphore) to appropriate freelist */
-	proc->links.next = (SHM_QUEUE *) *procgloballist;
-	*procgloballist = proc;
+	/*
+	 * If we're still a member of a locking group, that means we're a leader
+	 * which has somehow exited before its children.  The last remaining child
+	 * will release our PGPROC.  Otherwise, release it now.
+	 */
+	if (proc->lockGroupLeader == NULL)
+	{
+		/* Since lockGroupLeader is NULL, lockGroupMembers should be empty. */
+		Assert(dlist_is_empty(&proc->lockGroupMembers));
+
+		/* Return PGPROC structure (and semaphore) to appropriate freelist */
+		proc->links.next = (SHM_QUEUE *) *procgloballist;
+		*procgloballist = proc;
+	}
 
 	/* Update shared estimate of spins_per_delay */
 	ProcGlobal->spins_per_delay = update_spins_per_delay(ProcGlobal->spins_per_delay);
@@ -942,9 +1000,31 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable)
 	bool		allow_autovacuum_cancel = true;
 	int			myWaitStatus;
 	PGPROC	   *proc;
+	PGPROC	   *leader = MyProc->lockGroupLeader;
 	int			i;
 
 	/*
+	 * If group locking is in use, locks held by members of my locking group
+	 * need to be included in myHeldLocks.
+	 */
+	if (leader != NULL)
+	{
+		SHM_QUEUE  *procLocks = &(lock->procLocks);
+		PROCLOCK   *otherproclock;
+
+		otherproclock = (PROCLOCK *)
+			SHMQueueNext(procLocks, procLocks, offsetof(PROCLOCK, lockLink));
+		while (otherproclock != NULL)
+		{
+			if (otherproclock->groupLeader == leader)
+				myHeldLocks |= otherproclock->holdMask;
+			otherproclock = (PROCLOCK *)
+				SHMQueueNext(procLocks, &otherproclock->lockLink,
+							 offsetof(PROCLOCK, lockLink));
+		}
+	}
+
+	/*
 	 * Determine where to add myself in the wait queue.
 	 *
 	 * Normally I should go at the end of the queue.  However, if I already
@@ -968,6 +1048,15 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable)
 		proc = (PGPROC *) waitQueue->links.next;
 		for (i = 0; i < waitQueue->size; i++)
 		{
+			/*
+			 * If we're part of the same locking group as this waiter, its
+			 * locks neither conflict with ours nor contribute to aheadRequests.
+			 */
+			if (leader != NULL && leader == proc->lockGroupLeader)
+			{
+				proc = (PGPROC *) proc->links.next;
+				continue;
+			}
 			/* Must he wait for me? */
 			if (lockMethodTable->conflictTab[proc->waitLockMode] & myHeldLocks)
 			{
@@ -1658,3 +1747,66 @@ ProcSendSignal(int pid)
 		SetLatch(&proc->procLatch);
 	}
 }
+
+/*
+ * BecomeLockGroupLeader - designate process as lock group leader
+ *
+ * Once this function has returned, other processes can join the lock group
+ * by calling BecomeLockGroupMember.
+ */
+void
+BecomeLockGroupLeader(void)
+{
+	LWLock	   *leader_lwlock;
+
+	/* If we already did it, we don't need to do it again. */
+	if (MyProc->lockGroupLeader == MyProc)
+		return;
+
+	/* We had better not be a follower. */
+	Assert(MyProc->lockGroupLeader == NULL);
+
+	/* Create single-member group, containing only ourselves. */
+	leader_lwlock = LockHashPartitionLockByProc(MyProc);
+	LWLockAcquire(leader_lwlock, LW_EXCLUSIVE);
+	MyProc->lockGroupLeader = MyProc;
+	MyProc->lockGroupLeaderIdentifier = MyProcPid;
+	dlist_push_head(&MyProc->lockGroupMembers, &MyProc->lockGroupLink);
+	LWLockRelease(leader_lwlock);
+}
+
+/*
+ * BecomeLockGroupMember - designate process as lock group member
+ *
+ * This is pretty straightforward except for the possibility that the leader
+ * whose group we're trying to join might exit before we manage to do so;
+ * and the PGPROC might get recycled for an unrelated process.  To avoid
+ * that, we require the caller to pass the PID of the intended PGPROC as
+ * an interlock.  Returns true if we successfully join the intended lock
+ * group, and false if not.
+ */
+bool
+BecomeLockGroupMember(PGPROC *leader, int pid)
+{
+	LWLock	   *leader_lwlock;
+	bool		ok = false;
+
+	/* Group leader can't become member of group */
+	Assert(MyProc != leader);
+
+	/* PID must be valid. */
+	Assert(pid != 0);
+
+	/* Try to join the group. */
+	leader_lwlock = LockHashPartitionLockByProc(MyProc);
+	LWLockAcquire(leader_lwlock, LW_EXCLUSIVE);
+	if (leader->lockGroupLeaderIdentifier == pid)
+	{
+		ok = true;
+		MyProc->lockGroupLeader = leader;
+		dlist_push_tail(&leader->lockGroupMembers, &MyProc->lockGroupLink);
+	}
+	LWLockRelease(leader_lwlock);
+
+	return ok;
+}
diff --git a/src/include/storage/lock.h b/src/include/storage/lock.h
index 43eca86..6b4e365 100644
--- a/src/include/storage/lock.h
+++ b/src/include/storage/lock.h
@@ -346,6 +346,7 @@ typedef struct PROCLOCK
 	PROCLOCKTAG tag;			/* unique identifier of proclock object */
 
 	/* data */
+	PGPROC	   *groupLeader;	/* group leader, or NULL if no lock group */
 	LOCKMASK	holdMask;		/* bitmask for lock types currently held */
 	LOCKMASK	releaseMask;	/* bitmask for lock types to be released */
 	SHM_QUEUE	lockLink;		/* list link in LOCK's list of proclocks */
@@ -457,7 +458,6 @@ typedef enum
 								 * worker */
 } DeadLockState;
 
-
 /*
  * The lockmgr's shared hash tables are partitioned to reduce contention.
  * To determine which partition a given locktag belongs to, compute the tag's
@@ -473,6 +473,17 @@ typedef enum
 	(&MainLWLockArray[LOCK_MANAGER_LWLOCK_OFFSET + (i)].lock)
 
 /*
+ * The deadlock detector needs to be able to access lockGroupLeader and
+ * related fields in the PGPROC, so we arrange for those fields to be protected
+ * by one of the lock hash partition locks.  Since the deadlock detector
+ * acquires all such locks anyway, this makes it safe for it to access these
+ * fields without doing anything extra.  To avoid contention as much as
+ * possible, we map different PGPROCs to different partition locks.
+ */
+#define LockHashPartitionLockByProc(p) \
+	LockHashPartitionLock((p)->pgprocno)
+
+/*
  * function prototypes
  */
 extern void InitLocks(void);
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index 3441288..66ab255 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -155,6 +155,15 @@ struct PGPROC
 	bool		fpVXIDLock;		/* are we holding a fast-path VXID lock? */
 	LocalTransactionId fpLocalTransactionId;	/* lxid for fast-path VXID
 												 * lock */
+
+	/*
+	 * Support for lock groups.  Use LockHashPartitionLockByProc to get the
+	 * LWLock protecting these fields.
+	 */
+	int			lockGroupLeaderIdentifier;	/* MyProcPid, if I'm a leader */
+	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a follower */
+	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
+	dlist_node  lockGroupLink;		/* my member link, if I'm a member */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
@@ -272,4 +281,7 @@ extern void LockErrorCleanup(void);
 extern void ProcWaitForSignal(void);
 extern void ProcSendSignal(int pid);
 
+extern void BecomeLockGroupLeader(void);
+extern bool BecomeLockGroupMember(PGPROC *leader, int pid);
+
 #endif   /* PROC_H */
-- 
2.5.4 (Apple Git-61)

test-group-locking-v1.patch (application/x-patch)
From b101d27611dd42109f11b09ab3ba65dba91e6341 Mon Sep 17 00:00:00 2001
From: Robert Haas <rhaas@postgresql.org>
Date: Thu, 21 Jan 2016 14:33:07 -0500
Subject: [PATCH 2/3] contrib module test_group_deadlocks, not for commit.

Amit Kapila
---
 contrib/Makefile                                   |  1 +
 contrib/test_group_deadlocks/Makefile              | 19 ++++++++
 .../test_group_deadlocks--1.0.sql                  | 15 ++++++
 .../test_group_deadlocks/test_group_deadlocks.c    | 57 ++++++++++++++++++++++
 .../test_group_deadlocks.control                   |  5 ++
 5 files changed, 97 insertions(+)
 create mode 100644 contrib/test_group_deadlocks/Makefile
 create mode 100644 contrib/test_group_deadlocks/test_group_deadlocks--1.0.sql
 create mode 100644 contrib/test_group_deadlocks/test_group_deadlocks.c
 create mode 100644 contrib/test_group_deadlocks/test_group_deadlocks.control

diff --git a/contrib/Makefile b/contrib/Makefile
index bd251f6..ff3c54d 100644
--- a/contrib/Makefile
+++ b/contrib/Makefile
@@ -43,6 +43,7 @@ SUBDIRS = \
 		tablefunc	\
 		tcn		\
 		test_decoding	\
+		test_group_deadlocks	\
 		tsm_system_rows \
 		tsm_system_time \
 		tsearch2	\
diff --git a/contrib/test_group_deadlocks/Makefile b/contrib/test_group_deadlocks/Makefile
new file mode 100644
index 0000000..057448c
--- /dev/null
+++ b/contrib/test_group_deadlocks/Makefile
@@ -0,0 +1,19 @@
+# contrib/test_group_deadlocks/Makefile
+
+MODULE_big = test_group_deadlocks
+OBJS = test_group_deadlocks.o $(WIN32RES)
+
+EXTENSION = test_group_deadlocks
+DATA = test_group_deadlocks--1.0.sql
+PGFILEDESC = "test_group_deadlocks - participate in group locking"
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = contrib/test_group_deadlocks
+top_builddir = ../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/contrib/test_group_deadlocks/test_group_deadlocks--1.0.sql b/contrib/test_group_deadlocks/test_group_deadlocks--1.0.sql
new file mode 100644
index 0000000..377c363
--- /dev/null
+++ b/contrib/test_group_deadlocks/test_group_deadlocks--1.0.sql
@@ -0,0 +1,15 @@
+/* contrib/test_group_deadlocks/test_group_deadlocks--1.0.sql */
+
+-- complain if script is sourced in psql, rather than via CREATE EXTENSION
+\echo Use "CREATE EXTENSION test_group_deadlocks" to load this file. \quit
+
+-- Register the function.
+CREATE FUNCTION become_lock_group_leader()
+RETURNS pg_catalog.void
+AS 'MODULE_PATHNAME'
+LANGUAGE C;
+
+CREATE FUNCTION become_lock_group_member(pid pg_catalog.int4)
+RETURNS pg_catalog.bool
+AS 'MODULE_PATHNAME'
+LANGUAGE C;
diff --git a/contrib/test_group_deadlocks/test_group_deadlocks.c b/contrib/test_group_deadlocks/test_group_deadlocks.c
new file mode 100644
index 0000000..f3d980a
--- /dev/null
+++ b/contrib/test_group_deadlocks/test_group_deadlocks.c
@@ -0,0 +1,57 @@
+/*-------------------------------------------------------------------------
+ *
+ * test_group_deadlocks.c
+ *		  group locking utilities
+ *
+ * Copyright (c) 2010-2014, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *		  contrib/test_group_deadlocks/test_group_deadlocks.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "storage/proc.h"
+#include "storage/procarray.h"
+
+PG_MODULE_MAGIC;
+
+PG_FUNCTION_INFO_V1(become_lock_group_leader);
+PG_FUNCTION_INFO_V1(become_lock_group_member);
+
+
+/*
+ * become_lock_group_leader
+ *
+ * This function makes the current backend process a lock group
+ * leader.
+ */
+Datum
+become_lock_group_leader(PG_FUNCTION_ARGS)
+{
+	BecomeLockGroupLeader();
+
+	PG_RETURN_VOID();
+}
+
+/*
+ * become_lock_group_member
+ *
+ * This function makes the current backend process a member of the
+ * lock group owned by the process whose pid is passed as the first
+ * argument.
+ */
+Datum
+become_lock_group_member(PG_FUNCTION_ARGS)
+{
+	bool		member;
+	PGPROC		*procleader;
+	int32		pid = PG_GETARG_INT32(0);
+
+	procleader = BackendPidGetProc(pid);
+	member = BecomeLockGroupMember(procleader, pid);
+
+	PG_RETURN_BOOL(member);
+}
diff --git a/contrib/test_group_deadlocks/test_group_deadlocks.control b/contrib/test_group_deadlocks/test_group_deadlocks.control
new file mode 100644
index 0000000..e2dcc71
--- /dev/null
+++ b/contrib/test_group_deadlocks/test_group_deadlocks.control
@@ -0,0 +1,5 @@
+# test_group_deadlocks extension
+comment = 'become part of group'
+default_version = '1.0'
+module_pathname = '$libdir/test_group_deadlocks'
+relocatable = true
-- 
2.5.4 (Apple Git-61)

force-parallel-mode-v1.patch (application/x-patch)
From c6b2249ce16f278287dcee0710ca469c271c5cab Mon Sep 17 00:00:00 2001
From: Robert Haas <rhaas@postgresql.org>
Date: Wed, 30 Sep 2015 18:35:40 -0400
Subject: [PATCH 3/3] Introduce a new GUC force_parallel_mode for testing
 purposes.

When force_parallel_mode = true, we enable the parallel mode restrictions
for all queries for which this is believed to be safe.  For the subset of
those queries believed to be safe to run entirely within a worker, we spin
up a worker and run the query there instead of running it in the
original process.

Robert Haas, with help from Amit Kapila and Rushabh Lathia.
---
 doc/src/sgml/config.sgml                      | 45 +++++++++++++++++
 src/backend/access/transam/parallel.c         |  4 +-
 src/backend/commands/explain.c                | 14 ++++-
 src/backend/nodes/copyfuncs.c                 |  1 +
 src/backend/nodes/outfuncs.c                  |  2 +
 src/backend/nodes/readfuncs.c                 |  1 +
 src/backend/optimizer/plan/createplan.c       |  5 ++
 src/backend/optimizer/plan/planner.c          | 73 ++++++++++++++++++++-------
 src/backend/utils/misc/guc.c                  | 24 +++++++++
 src/backend/utils/misc/postgresql.conf.sample |  1 +
 src/include/nodes/plannodes.h                 |  1 +
 src/include/nodes/relation.h                  |  3 ++
 src/include/optimizer/planmain.h              |  9 ++++
 13 files changed, 163 insertions(+), 20 deletions(-)

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 392eb70..de84b77 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -3802,6 +3802,51 @@ SELECT * FROM parent WHERE key = 2400;
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-force-parallel-mode" xreflabel="force_parallel_mode">
+      <term><varname>force_parallel_mode</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>force_parallel_mode</> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        Allows the use of parallel queries for testing purposes even in cases
+        where no performance benefit is expected.
+        The allowed values of <varname>force_parallel_mode</> are
+        <literal>off</> (use parallel mode only when it is expected to improve
+        performance), <literal>on</> (force parallel query for all queries
+        for which it is thought to be safe), and <literal>regress</> (like
+        on, but with additional behavior changes to facilitate automated
+        regression testing).
+       </para>
+
+       <para>
+        More specifically, setting this value to <literal>on</> will add
+        a <literal>Gather</> node to the top of any query plan for which this
+        appears to be safe, so that the query runs inside of a parallel worker.
+        Even when a parallel worker is not available or cannot be used,
+        operations such as starting a subtransaction that would be prohibited
+        in a parallel query context will be prohibited unless the planner
+        believes that this will cause the query to fail.  If failures or
+        unexpected results occur when this option is set, some functions used
+        by the query may need to be marked <literal>PARALLEL UNSAFE</literal>
+        (or, possibly, <literal>PARALLEL RESTRICTED</literal>).
+       </para>
+
+       <para>
+        Setting this value to <literal>regress</> has all of the same effects
+        as setting it to <literal>on</> plus some additional effects that are
+        intended to facilitate automated regression testing.  Normally,
+        messages from a parallel worker are prefixed with a context line,
+        but a setting of <literal>regress</> suppresses this to guarantee
+        reproducible results.  Also, the <literal>Gather</> nodes added to
+        plans by this setting are hidden from the <literal>EXPLAIN</> output
+        so that the output matches what would be obtained if this setting
+        were turned <literal>off</>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      </variablelist>
     </sect2>
    </sect1>
diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c
index bf2e691..4f91cd0 100644
--- a/src/backend/access/transam/parallel.c
+++ b/src/backend/access/transam/parallel.c
@@ -22,6 +22,7 @@
 #include "libpq/pqformat.h"
 #include "libpq/pqmq.h"
 #include "miscadmin.h"
+#include "optimizer/planmain.h"
 #include "storage/ipc.h"
 #include "storage/sinval.h"
 #include "storage/spin.h"
@@ -1079,7 +1080,8 @@ ParallelExtensionTrampoline(dsm_segment *seg, shm_toc *toc)
 static void
 ParallelErrorContext(void *arg)
 {
-	errcontext("parallel worker, PID %d", *(int32 *) arg);
+	if (force_parallel_mode != FORCE_PARALLEL_REGRESS)
+		errcontext("parallel worker, PID %d", *(int32 *) arg);
 }
 
 /*
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 25d8ca0..ee13136 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -23,6 +23,7 @@
 #include "foreign/fdwapi.h"
 #include "nodes/nodeFuncs.h"
 #include "optimizer/clauses.h"
+#include "optimizer/planmain.h"
 #include "parser/parsetree.h"
 #include "rewrite/rewriteHandler.h"
 #include "tcop/tcopprot.h"
@@ -572,6 +573,7 @@ void
 ExplainPrintPlan(ExplainState *es, QueryDesc *queryDesc)
 {
 	Bitmapset  *rels_used = NULL;
+	PlanState *ps;
 
 	Assert(queryDesc->plannedstmt != NULL);
 	es->pstmt = queryDesc->plannedstmt;
@@ -580,7 +582,17 @@ ExplainPrintPlan(ExplainState *es, QueryDesc *queryDesc)
 	es->rtable_names = select_rtable_names_for_explain(es->rtable, rels_used);
 	es->deparse_cxt = deparse_context_for_plan_rtable(es->rtable,
 													  es->rtable_names);
-	ExplainNode(queryDesc->planstate, NIL, NULL, NULL, es);
+
+	/*
+	 * Sometimes we mark a Gather node as "invisible", which means that it's
+	 * not displayed in EXPLAIN output.  The purpose of this is to allow
+	 * running regression tests with force_parallel_mode=regress to get the
+	 * same results as running the same tests with force_parallel_mode=off.
+	 */
+	ps = queryDesc->planstate;
+	if (IsA(ps, GatherState) && ((Gather *) ps->plan)->invisible)
+		ps = outerPlanState(ps);
+	ExplainNode(ps, NIL, NULL, NULL, es);
 }
 
 /*
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index a8b79fa..e54d174 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -334,6 +334,7 @@ _copyGather(const Gather *from)
 	 */
 	COPY_SCALAR_FIELD(num_workers);
 	COPY_SCALAR_FIELD(single_copy);
+	COPY_SCALAR_FIELD(invisible);
 
 	return newnode;
 }
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index b487c00..97b7fef 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -443,6 +443,7 @@ _outGather(StringInfo str, const Gather *node)
 
 	WRITE_INT_FIELD(num_workers);
 	WRITE_BOOL_FIELD(single_copy);
+	WRITE_BOOL_FIELD(invisible);
 }
 
 static void
@@ -1826,6 +1827,7 @@ _outPlannerGlobal(StringInfo str, const PlannerGlobal *node)
 	WRITE_BOOL_FIELD(hasRowSecurity);
 	WRITE_BOOL_FIELD(parallelModeOK);
 	WRITE_BOOL_FIELD(parallelModeNeeded);
+	WRITE_BOOL_FIELD(wholePlanParallelSafe);
 	WRITE_BOOL_FIELD(hasForeignJoin);
 }
 
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 6c46151..e4d41ee 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -2053,6 +2053,7 @@ _readGather(void)
 
 	READ_INT_FIELD(num_workers);
 	READ_BOOL_FIELD(single_copy);
+	READ_BOOL_FIELD(invisible);
 
 	READ_DONE();
 }
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 54ff7f6..6e0db08 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -212,6 +212,10 @@ create_plan(PlannerInfo *root, Path *best_path)
 	/* Recursively process the path tree */
 	plan = create_plan_recurse(root, best_path);
 
+	/* Update parallel safety information if needed. */
+	if (!best_path->parallel_safe)
+		root->glob->wholePlanParallelSafe = false;
+
 	/* Check we successfully assigned all NestLoopParams to plan nodes */
 	if (root->curOuterParams != NIL)
 		elog(ERROR, "failed to assign all NestLoopParams to plan nodes");
@@ -4829,6 +4833,7 @@ make_gather(List *qptlist,
 	plan->righttree = NULL;
 	node->num_workers = nworkers;
 	node->single_copy = single_copy;
+	node->invisible	= false;
 
 	return node;
 }
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index a09b4b5..a3cc274 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -48,10 +48,12 @@
 #include "storage/dsm_impl.h"
 #include "utils/rel.h"
 #include "utils/selfuncs.h"
+#include "utils/syscache.h"
 
 
-/* GUC parameter */
+/* GUC parameters */
 double		cursor_tuple_fraction = DEFAULT_CURSOR_TUPLE_FRACTION;
+int			force_parallel_mode = FORCE_PARALLEL_OFF;
 
 /* Hook for plugins to get control in planner() */
 planner_hook_type planner_hook = NULL;
@@ -230,25 +232,31 @@ standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams)
 		!has_parallel_hazard((Node *) parse, true);
 
 	/*
-	 * glob->parallelModeOK should tell us whether it's necessary to impose
-	 * the parallel mode restrictions, but we don't actually want to impose
-	 * them unless we choose a parallel plan, so that people who mislabel
-	 * their functions but don't use parallelism anyway aren't harmed.
-	 * However, it's useful for testing purposes to be able to force the
-	 * restrictions to be imposed whenever a parallel plan is actually chosen
-	 * or not.
+	 * glob->parallelModeNeeded should tell us whether it's necessary to
+	 * impose the parallel mode restrictions, but we don't actually want to
+	 * impose them unless we choose a parallel plan, so that people who
+	 * mislabel their functions but don't use parallelism anyway aren't
+	 * harmed. But when force_parallel_mode is set, we enable the restrictions
+	 * whenever possible for testing purposes.
 	 *
-	 * (It's been suggested that we should always impose these restrictions
-	 * whenever glob->parallelModeOK is true, so that it's easier to notice
-	 * incorrectly-labeled functions sooner.  That might be the right thing to
-	 * do, but for now I've taken this approach.  We could also control this
-	 * with a GUC.)
+	 * glob->wholePlanParallelSafe should tell us whether it's OK to stick a
+	 * Gather node on top of the entire plan.  However, it only needs to be
+	 * accurate when force_parallel_mode is 'on' or 'regress', so we don't
+	 * bother doing the work otherwise.  The value we set here is just a
+	 * preliminary guess; it may get changed from true to false later, but
+	 * not vice versa.
 	 */
-#ifdef FORCE_PARALLEL_MODE
-	glob->parallelModeNeeded = glob->parallelModeOK;
-#else
-	glob->parallelModeNeeded = false;
-#endif
+	if (force_parallel_mode == FORCE_PARALLEL_OFF || !glob->parallelModeOK)
+	{
+		glob->parallelModeNeeded = false;
+		glob->wholePlanParallelSafe = false;	/* either false or don't care */
+	}
+	else
+	{
+		glob->parallelModeNeeded = true;
+		glob->wholePlanParallelSafe =
+			!has_parallel_hazard((Node *) parse, false);
+	}
 
 	/* Determine what fraction of the plan is likely to be scanned */
 	if (cursorOptions & CURSOR_OPT_FAST_PLAN)
@@ -293,6 +301,35 @@ standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams)
 	}
 
 	/*
+	 * At present, we don't copy subplans to workers.  The presence of a
+	 * subplan in one part of the plan doesn't preclude the use of parallelism
+	 * in some other part of the plan, but it does preclude the possibility of
+	 * regarding the entire plan as parallel-safe.
+	 */
+	if (glob->subplans != NIL)
+		glob->wholePlanParallelSafe = false;
+
+	/*
+	 * Optionally add a Gather node for testing purposes, provided this is
+	 * actually a safe thing to do.
+	 */
+	if (glob->wholePlanParallelSafe &&
+		force_parallel_mode != FORCE_PARALLEL_OFF)
+	{
+		Gather	   *gather = makeNode(Gather);
+
+		gather->plan.targetlist = top_plan->targetlist;
+		gather->plan.qual = NIL;
+		gather->plan.lefttree = top_plan;
+		gather->plan.righttree = NULL;
+		gather->num_workers = 1;
+		gather->single_copy = true;
+		gather->invisible = (force_parallel_mode == FORCE_PARALLEL_REGRESS);
+		root->glob->parallelModeNeeded = true;
+		top_plan = &gather->plan;
+	}
+
+	/*
 	 * If any Params were generated, run through the plan tree and compute
 	 * each plan node's extParam/allParam sets.  Ideally we'd merge this into
 	 * set_plan_references' tree traversal, but for now it has to be separate
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 38ba82f..14212ee 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -379,6 +379,19 @@ static const struct config_enum_entry huge_pages_options[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry force_parallel_mode_options[] = {
+	{"off", FORCE_PARALLEL_OFF, false},
+	{"on", FORCE_PARALLEL_ON, false},
+	{"regress", FORCE_PARALLEL_REGRESS, false},
+	{"true", FORCE_PARALLEL_ON, true},
+	{"false", FORCE_PARALLEL_OFF, true},
+	{"yes", FORCE_PARALLEL_ON, true},
+	{"no", FORCE_PARALLEL_OFF, true},
+	{"1", FORCE_PARALLEL_ON, true},
+	{"0", FORCE_PARALLEL_OFF, true},
+	{NULL, 0, false}
+};
+
 /*
  * Options for enum values stored in other modules
  */
@@ -863,6 +876,7 @@ static struct config_bool ConfigureNamesBool[] =
 		true,
 		NULL, NULL, NULL
 	},
+
 	{
 		{"geqo", PGC_USERSET, QUERY_TUNING_GEQO,
 			gettext_noop("Enables genetic query optimization."),
@@ -3672,6 +3686,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"force_parallel_mode", PGC_USERSET, QUERY_TUNING_OTHER,
+			gettext_noop("Forces use of parallel query facilities."),
+			gettext_noop("If possible, run query using a parallel worker and with parallel restrictions.")
+		},
+		&force_parallel_mode,
+		FORCE_PARALLEL_OFF, force_parallel_mode_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index 029114f..09b2003 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -313,6 +313,7 @@
 #from_collapse_limit = 8
 #join_collapse_limit = 8		# 1 disables collapsing of explicit
 					# JOIN clauses
+#force_parallel_mode = off
 
 
 #------------------------------------------------------------------------------
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index 55d6bbe..ae224cf 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -775,6 +775,7 @@ typedef struct Gather
 	Plan		plan;
 	int			num_workers;
 	bool		single_copy;
+	bool		invisible;		/* suppress EXPLAIN display (for testing)? */
 } Gather;
 
 /* ----------------
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index 9492598..5c22679 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -108,6 +108,9 @@ typedef struct PlannerGlobal
 	bool		parallelModeOK; /* parallel mode potentially OK? */
 
 	bool		parallelModeNeeded;		/* parallel mode actually required? */
+
+	bool		wholePlanParallelSafe;	/* is the entire plan parallel safe? */
+
 	bool		hasForeignJoin;	/* does have a pushed down foreign join */
 } PlannerGlobal;
 
diff --git a/src/include/optimizer/planmain.h b/src/include/optimizer/planmain.h
index 7ae7367..eaa642b 100644
--- a/src/include/optimizer/planmain.h
+++ b/src/include/optimizer/planmain.h
@@ -17,9 +17,18 @@
 #include "nodes/plannodes.h"
 #include "nodes/relation.h"
 
+/* possible values for force_parallel_mode */
+typedef enum
+{
+	FORCE_PARALLEL_OFF,
+	FORCE_PARALLEL_ON,
+	FORCE_PARALLEL_REGRESS
+} ForceParallelMode;
+
 /* GUC parameters */
 #define DEFAULT_CURSOR_TUPLE_FRACTION 0.1
 extern double cursor_tuple_fraction;
+extern int force_parallel_mode;
 
 /* query_planner callback to compute query_pathkeys */
 typedef void (*query_pathkeys_callback) (PlannerInfo *root, void *extra);
-- 
2.5.4 (Apple Git-61)
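
As a rough sketch of how the new GUC is meant to be used (the exact EXPLAIN
output will depend on the query and on the rest of the parallelism patches
being in place):

-- Force a single-copy Gather on top of any plan believed to be parallel-safe:
SET force_parallel_mode = on;
EXPLAIN (COSTS OFF) SELECT count(*) FROM pg_class;
-- The plan should now show a Gather node at the top.

-- In 'regress' mode the Gather is still added but marked invisible, so the
-- EXPLAIN output matches what force_parallel_mode = off would produce:
SET force_parallel_mode = regress;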

#19Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#18)
Re: a raft of parallelism-related bug fixes

On 2016-02-02 15:41:45 -0500, Robert Haas wrote:

group-locking-v1.patch is a vastly improved version of the group
locking patch that we discussed, uh, extensively last year. I realize
that there was a lot of doubt about this approach, but I still believe
it's the right approach, I have put a lot of work into making it work
correctly, I don't think anyone has come up with a really plausible
alternative approach (except one other approach I tried which turned
out to work but with significantly more restrictions), and I'm
committed to fixing it in whatever way is necessary if it turns out to
be broken, even if that amounts to a full rewrite. Review is welcome,
but I honestly believe it's a good idea to get this into the tree
sooner rather than later at this point, because automated regression
testing falls to pieces without these changes, and I believe that
automated regression testing is a really good idea to shake out
whatever bugs we may have in the parallel query stuff. The code in
this patch is all mine, but Amit Kapila deserves credit as co-author
for doing a lot of prototyping (that ended up getting tossed) and
testing. This patch includes comments and an addition to
src/backend/storage/lmgr/README which explain in more detail what this
patch does, how it does it, and why that's OK.

I see you pushed group locking support. I do wonder if somebody has
actually reviewed this? On a quick scrollthrough it seems fairly
invasive, touching some parts where bugs are really hard to find.

I realize that this stuff has all been brewing long, and that there's
still a lot to do. So you gotta keep moving. And I'm not sure that
there's anything wrong or if there's any actually better approach. But
pushing an unreviewed, complex patch, that originated in a thread
originally about different relatively small/mundane items, for a
contentious issue, a few days after the initial post. Hm. Not sure how
you'd react if you weren't the author.

Greetings,

Andres Freund


#20Robert Haas
robertmhaas@gmail.com
In reply to: Andres Freund (#19)
Re: a raft of parallelism-related bug fixes

On Mon, Feb 8, 2016 at 10:17 AM, Andres Freund <andres@anarazel.de> wrote:

On 2016-02-02 15:41:45 -0500, Robert Haas wrote:

group-locking-v1.patch is a vastly improved version of the group
locking patch that we discussed, uh, extensively last year. I realize
that there was a lot of doubt about this approach, but I still believe
it's the right approach, I have put a lot of work into making it work
correctly, I don't think anyone has come up with a really plausible
alternative approach (except one other approach I tried which turned
out to work but with significantly more restrictions), and I'm
committed to fixing it in whatever way is necessary if it turns out to
be broken, even if that amounts to a full rewrite. Review is welcome,
but I honestly believe it's a good idea to get this into the tree
sooner rather than later at this point, because automated regression
testing falls to pieces without these changes, and I believe that
automated regression testing is a really good idea to shake out
whatever bugs we may have in the parallel query stuff. The code in
this patch is all mine, but Amit Kapila deserves credit as co-author
for doing a lot of prototyping (that ended up getting tossed) and
testing. This patch includes comments and an addition to
src/backend/storage/lmgr/README which explain in more detail what this
patch does, how it does it, and why that's OK.

I see you pushed group locking support. I do wonder if somebody has
actually reviewed this? On a quick scrollthrough it seems fairly
invasive, touching some parts where bugs are really hard to find.

I realize that this stuff has all been brewing long, and that there's
still a lot to do. So you gotta keep moving. And I'm not sure that
there's anything wrong or if there's any actually better approach. But
pushing an unreviewed, complex patch, that originated in a thread
originally about different relatively small/mundane items, for a
contentious issue, a few days after the initial post. Hm. Not sure how
you'd react if you weren't the author.

Probably not very well. Do you want me to revert it?

I mean, look. Without that patch, parallel query is definitely
broken. Just revert the patch and try running the regression tests
with force_parallel_mode=regress and max_parallel_degree>0. It hangs
all over the place. With the patch, every regression test suite we
have runs cleanly with those settings. Without the patch, it's
trivial to construct a test case where parallel query experiences an
undetected deadlock. With the patch, it appears to work reliably.
Could there be bugs someplace? Yes, there absolutely could. Do I really
think anybody was going to spend the time to understand deadlock.c
well enough to verify my changes? No, I don't. What I think would
have happened is that the patch would have sat around like an
albatross around my neck - totally ignored by everyone - until the end
of the last CF, and then the discussion would have gone one of three
ways:

1. Boy, this patch is complicated and I don't understand it. Let's
reject it, even though without it parallel query is trivially broken!
Uh, we'll just let parallel query be broken.
2. Like #1, but we rip out parallel query in its entirety on the eve of beta.
3. Oh well, Robert says we need this, I guess we'd better let him commit it.

I don't find any of those options to be better than the status quo.
If the patch is broken, another two months of having it in the tree gives
us a better chance of finding the bugs, especially because, combined
with the other patch which I also pushed, it enables *actual automated
regression testing* of the parallelism code, which I personally think
is a really good thing - and I'd like to see the buildfarm doing that
as soon as possible, so that we can find some of those bugs before
we're deep in beta. Not just bugs in group locking but all sorts of
parallelism bugs that might be revealed by end-to-end testing. The
*entire stack of patches* that began this thread was a response to
problems that were found by the automated testing that you can't do
without this patch. Those bug fixes resulted in a huge increase in
the robustness of parallel query, and that would not have happened
without this code. Every single one of those problems, some of them
in commits dating back years, was detected by the same method: run the
regression tests with parallel mode and parallel workers used for
every query for which that seems to be safe.

And, by the way, the patch, aside from the deadlock.c portion, was
posted back in October, admittedly without much fanfare, but nobody
reviewed that or any other patch on this thread. If I'd waited for
those reviews to come in, parallel query would not be committed now,
nor probably in 9.6, nor probably in 9.7 or 9.8 either. The whole
project would just be impossible on its face. It would be impossible
in the first instance if I did not have a commit bit, because there is
just not enough committer bandwidth - even reviewer bandwidth more
generally - to review the number of patches that I've submitted
related to parallelism, so in the end some, perhaps many, of those are
going to be committed mostly on the strength of my personal opinion
that committing them is better than not committing them. I am going
to have a heck of a lot of egg on my face if it turns out that I've
been too aggressive in pushing this stuff into the tree. But,
basically, the alternative is that we don't get the feature, and I
think the feature is important enough to justify taking some risk.

I think it's myopic to say "well, but this patch might have bugs".
Very true. But also, all the other parallelism patches that are
already committed or that are still under review but which can't be
properly tested without this patch might have bugs, too, so you've got
to weigh the risk that this patch might get better if I wait longer to
commit it against the possibility that not having committed it reduces
the chances of finding bugs elsewhere. I don't want it to seem like
I'm forcing this down the community's throat - I don't have a right to
do that, and I will certainly revert this patch if that is the
consensus. But that is not what I think best myself. What I think
would be better is to (1) make an effort to get the buildfarm testing
which this patch enables up and running as soon as possible and (2)
for somebody to read over the committed code and raise any issues that
they find. Or, for that matter, to read over the committed code for
any of the *other* parallelism patches and raise any issues that they
find with *that* code. There's certainly scads of code here and this
is far from the only bit that might have bugs.

Oh: another thing that I would like to do is commit the isolation
tests I wrote for the deadlock detector a while back, which nobody has
reviewed either, though Tom and Alvaro seemed reasonably positive
about the concept. Right now, the deadlock.c part of this patch isn't
tested at all by any of our regression test suites, because NOTHING in
deadlock.c is tested by any of our regression test suites. You can
blow it up with dynamite and the regression tests are perfectly happy,
and that's pretty scary.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#21Joshua D. Drake
jd@commandprompt.com
In reply to: Robert Haas (#20)
Re: a raft of parallelism-related bug fixes

On 02/08/2016 10:45 AM, Robert Haas wrote:

On Mon, Feb 8, 2016 at 10:17 AM, Andres Freund <andres@anarazel.de> wrote:

On 2016-02-02 15:41:45 -0500, Robert Haas wrote:

I realize that this stuff has all been brewing long, and that there's
still a lot to do. So you gotta keep moving. And I'm not sure that
there's anything wrong or if there's any actually better approach. But
pushing an unreviewed, complex patch, that originated in a thread
originally about different relatively small/mundane items, for a
contentious issue, a few days after the initial post. Hm. Not sure how
you'd react if you weren't the author.

Probably not very well. Do you want me to revert it?

If I am off base, please feel free to yell Latin at me again but isn't
this exactly what different trees are for in Git? Would it be possible
to say:

Robert says, "Hey pull XYZ, run ABC tests. They are what the parallelism
fixes do"?

I can't review this patch but I can run a test suite on a number of
platforms and see if it behaves as expected.

albatross around my neck - totally ignored by everyone - until the end
of the last CF, and then the discussion would have gone one of three
ways:

1. Boy, this patch is complicated and I don't understand it. Let's
reject it, even though without it parallel query is trivially broken!
Uh, we'll just let parallel query be broken.
2. Like #1, but we rip out parallel query in its entirety on the eve of beta.
3. Oh well, Robert says we need this, I guess we'd better let him commit it.

4. We need to push the release so we can test this.

I don't find any of those options to be better than the status quo.
If the patch is broken, another two months of having it in the tree gives
us a better chance of finding the bugs, especially because, combined

I think this further points to the need for more reviewers and fewer
feature pushes. There are fundamental features that we could use, and this
is one of them. It is certainly more important than, say, pgLogical or BDR
(not that those aren't useful but that we do have external solutions for
that problem).

Oh: another thing that I would like to do is commit the isolation
tests I wrote for the deadlock detector a while back, which nobody has
reviewed either, though Tom and Alvaro seemed reasonably positive
about the concept. Right now, the deadlock.c part of this patch isn't
tested at all by any of our regression test suites, because NOTHING in
deadlock.c is tested by any of our regression test suites. You can
blow it up with dynamite and the regression tests are perfectly happy,
and that's pretty scary.

Test test test. Please commit.

Sincerely,

JD

--
Command Prompt, Inc. http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.


#22Robert Haas
robertmhaas@gmail.com
In reply to: Joshua D. Drake (#21)
Re: a raft of parallelism-related bug fixes

On Mon, Feb 8, 2016 at 2:00 PM, Joshua D. Drake <jd@commandprompt.com> wrote:

If I am off base, please feel free to yell Latin at me again but isn't this
exactly what different trees are for in Git? Would it be possible to say:

Robert says, "Hey pull XYZ, run ABC tests. They are what the parallelism
fixes do"?

I can't review this patch but I can run a test suite on a number of
platforms and see if it behaves as expected.

Sure, I'd love to have the ability to push a branch into the buildfarm
and have the tests get run on all the buildfarm machines and let that
bake for a while before putting it into the main tree. The problem
here is that the complicated part of this patch is something that's
only going to be tested in very rare cases. The simple part of the
patch, which handles the simple-deadlock case, is easy to hit,
although apparently zero people other than Amit and me have found it in
the few months since parallel sequential scan was committed, which
makes me think people haven't tried very hard to break any part of
parallel query, which is a shame. The really hairy part is in
deadlock.c, and it's actually very hard to hit that case. It won't be
hit in real life except in pretty rare circumstances. So testing is
good, but you not only need to know what you are testing but probably
have an automated tool that can run the test a gazillion times in a
loop, or be really clever and find a test case that Amit and I didn't
foresee. And the reality is that getting anybody independent of the
parallel query effort to take an interest in deep testing has not
gone anywhere at all up until now. I'd be happy for that to change,
whether because of this commit or for any other reason.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#23Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#20)
Re: a raft of parallelism-related bug fixes

Robert Haas <robertmhaas@gmail.com> writes:

Oh: another thing that I would like to do is commit the isolation
tests I wrote for the deadlock detector a while back, which nobody has
reviewed either, though Tom and Alvaro seemed reasonably positive
about the concept.

Possibly the reason that wasn't reviewed is that it's not in the
commitfest list (or at least if it is, I sure don't see it).

Having said that, I don't have much of a problem with you pushing it
anyway, unless it will add 15 minutes to make check-world or some such.

regards, tom lane


#24Joshua D. Drake
jd@commandprompt.com
In reply to: Robert Haas (#22)
Re: a raft of parallelism-related bug fixes

On 02/08/2016 11:24 AM, Robert Haas wrote:

On Mon, Feb 8, 2016 at 2:00 PM, Joshua D. Drake <jd@commandprompt.com> wrote:

If I am off base, please feel free to yell Latin at me again but isn't this
exactly what different trees are for in Git? Would it be possible to say:

Robert says, "Hey pull XYZ, run ABC tests. They are what the parallelism
fixes do"?

I can't review this patch but I can run a test suite on a number of
platforms and see if it behaves as expected.

Sure, I'd love to have the ability to push a branch into the buildfarm
and have the tests get run on all the buildfarm machines and let that
bake for a while before putting it into the main tree. The problem
here is that the complicated part of this patch is something that's
only going to be tested in very rare cases. The simple part of the

I have no problem running any test cases you wish on a branch in a loop
for the next week and reporting back any errors.

Where this gets tricky is the tooling itself. For me to be able to do so
(and others really) I need to be able to do this:

* Download (preferably a tarball but I can do a git pull)
* Exact instructions on how to set up the tests
* Exact instructions on how to run the tests
* Exact instructions on how to report the tests

If anyone takes the time to do that, I will take the time and resources
to run them.

What I can't do is fiddle around trying to figure out how to set this
stuff up. I don't have the time and it isn't productive for me. I don't
think I am the only one in this boat.

Let's be honest: a lot of people won't even bother to play with this
until we release 9.6.0, even though it is easily one of the best features
we have coming for 9.6. That is a bad time to be testing.

The easier we make it for people like me, practitioners to test, the
better it is for the whole project.

Sincerely,

JD

--
Command Prompt, Inc. http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.


#25Peter Geoghegan
pg@heroku.com
In reply to: Robert Haas (#20)
Re: a raft of parallelism-related bug fixes

On Mon, Feb 8, 2016 at 10:45 AM, Robert Haas <robertmhaas@gmail.com> wrote:

And, by the way, the patch, aside from the deadlock.c portion, was
posted back in October, admittedly without much fanfare, but nobody
reviewed that or any other patch on this thread. If I'd waited for
those reviews to come in, parallel query would not be committed now,
nor probably in 9.6, nor probably in 9.7 or 9.8 either. The whole
project would just be impossible on its face. It would be impossible
in the first instance if I did not have a commit bit, because there is
just not enough committer bandwidth - even reviewer bandwidth more
generally - to review the number of patches that I've submitted
related to parallelism, so in the end some, perhaps many, of those are
going to be committed mostly on the strength of my personal opinion
that committing them is better than not committing them. I am going
to have a heck of a lot of egg on my face if it turns out that I've
been too aggressive in pushing this stuff into the tree. But,
basically, the alternative is that we don't get the feature, and I
think the feature is important enough to justify taking some risk.

FWIW, I appreciate your candor. However, I think that you could have
done a better job of making things easier for reviewers, even if that
might not have made an enormous difference. I suspect I would not have
been able to get UPSERT done as a non-committer if it wasn't for the
epic wiki page, which made it at least possible for someone to jump in.

To be more specific, I thought it was really hard to test parallel
sequential scan a few months ago, because there were so many threads
and so many dependencies. I appreciate that we now use git
format-patch patch series for complicated stuff these days, but it's
important to make it clear how everything fits together. That's
actually what I was thinking about when I said we need to be clear on
how things fit together from the CF app patch page, because there
doesn't seem to be a culture of being particular about that, having
good "annotations", etc.

--
Peter Geoghegan


#26Robert Haas
robertmhaas@gmail.com
In reply to: Joshua D. Drake (#24)
Re: a raft of parallelism-related bug fixes

On Mon, Feb 8, 2016 at 2:36 PM, Joshua D. Drake <jd@commandprompt.com> wrote:

I have no problem running any test cases you wish on a branch in a loop for
the next week and reporting back any errors.

Where this gets tricky is the tooling itself. For me to be able to do so
(and others really) I need to be able to do this:

* Download (preferably a tarball but I can do a git pull)
* Exact instructions on how to set up the tests
* Exact instructions on how to run the tests
* Exact instructions on how to report the tests

If anyone takes the time to do that, I will take the time and resources to
run them.

Well, what I've done is push into the buildfarm code that will allow
us to do *the most exhaustive* testing that I know how to do in an
automated fashion. Which is to create a file that says this:

force_parallel_mode=regress
max_parallel_degree=2

And then run this: make check-world TEMP_CONFIG=/path/to/aforementioned/file

Now, that is not going to find bugs in the deadlock.c portion of the
group locking patch, but it's been wildly successful in finding bugs
in other parts of the parallelism code, and there might well be a few
more that we haven't found yet, which is why I'm hoping that we'll get
this procedure running regularly either on all buildfarm machines, or
on some subset of them, or on new animals that just do this.

Testing the deadlock.c changes is harder. I don't know of a good way
to do it in an automated fashion, which is why I also posted the test
code Amit devised which allows construction of manual test cases.
Constructing a manual test case is *hard* but doable. I think it
would be good to automate this and if somebody's got a good idea about
how to fuzz test it I think that would be *great*. But that's not
easy to do. We haven't had any testing at all of the deadlock
detector up until now, but somehow the deadlock detector itself has
been in the tree for a very long time...
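
To give a flavor of the building blocks involved (a sketch only; the table
name and PID are illustrative, and an actual deadlock requires additional
ordinary backends queued behind these locks):

-- Session A (leader):
SELECT become_lock_group_leader();
BEGIN;
LOCK TABLE some_table IN ACCESS EXCLUSIVE MODE;

-- Session B, after joining A's group (assuming A's PID is 12345):
SELECT become_lock_group_member(12345);
BEGIN;
LOCK TABLE some_table IN ACCESS EXCLUSIVE MODE;
-- Under group locking this does not conflict with A's lock; the interesting
-- deadlock cases arise once unrelated backends are queued behind the group.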

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#27Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Robert Haas (#20)
Re: a raft of parallelism-related bug fixes

Robert Haas wrote:

Oh: another thing that I would like to do is commit the isolation
tests I wrote for the deadlock detector a while back, which nobody has
reviewed either, though Tom and Alvaro seemed reasonably positive
about the concept. Right now, the deadlock.c part of this patch isn't
tested at all by any of our regression test suites, because NOTHING in
deadlock.c is tested by any of our regression test suites. You can
blow it up with dynamite and the regression tests are perfectly happy,
and that's pretty scary.

FWIW a couple of months back I thought you had already pushed that one
and was surprised to find that you hadn't. +1 from me on pushing it.
(I don't mean specifically the deadlock tests, but rather the
isolationtester changes that allowed you to have multiple blocked
backends.)

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#28Robert Haas
robertmhaas@gmail.com
In reply to: Peter Geoghegan (#25)
Re: a raft of parallelism-related bug fixes

On Mon, Feb 8, 2016 at 2:48 PM, Peter Geoghegan <pg@heroku.com> wrote:

FWIW, I appreciate your candor. However, I think that you could have
done a better job of making things easier for reviewers, even if that
might not have made an enormous difference. I suspect I would not have
been able to get UPSERT done as a non-committer if it wasn't for the
epic wiki page, which made it at least possible for someone to jump in.

I'm not going to argue with the proposition that it could have been
done better. Equally, I'm going to disclaim having the ability to
have done it better. I've been working on this for three years, and
most of the work that I've put into it has gone into tinkering with C
code that was not in any way user-testable. I've modified essentially
every major component of the system. We had a shared memory facility;
I built another one. We had background workers; I overhauled them. I
invented a message queueing system, and then layered a modified
version of the FE/BE protocol on top of that message queue, and then
later layered tuple-passing on top of that same message queue and then
invented a bespoke protocol that is used to handle typemod mapping.
We had a transaction system; I made substantial, invasive
modifications to it. I tinkered with the GUC subsystem, the combocid
system, and the system for loading loadable modules. Amit added read
functions to a whole class of nodes that never had them before and
together we overhauled core pieces of the executor machinery. Then I
hit the planner with a hammer. Finally there's this patch, which
affects heavyweight locking and deadlock detection. I don't believe
that during the time I've been involved with this project anyone else
has ever attempted a project that required changing as many subsystems
as this one did - in some cases rather lightly, but in a number of
cases in pretty significant, invasive ways. No other project in
recent memory has been this invasive to my knowledge. Hot Standby
probably comes closest, but I think (admittedly being much closer to
this work than I was to that work) that this has its fingers in more
places. So, there may be a person who knows how to do all of that
work and get it done in a reasonable time frame and also knows how to
make sure that everybody has the opportunity to be as involved in the
process as they want to be and that there are no bugs or controversial
design decisions, but I am not that person. I am doing my best.

To be more specific, I thought it was really hard to test parallel
sequential scan a few months ago, because there were so many threads
and so many dependencies. I appreciate that we now use git
format-patch patch series for complicated stuff these days, but it's
important to make it clear how everything fits together. That's
actually what I was thinking about when I said we need to be clear on
how things fit together from the CF app patch page, because there
doesn't seem to be a culture of being particular about that, having
good "annotations", etc.

I agree that you had to be pretty deeply involved in that thread to
follow everything that was going on. But it's not entirely fair to
say that it was impossible for anyone else to get involved. Both
Amit and I, mostly Amit, posted directions at various times saying:
here is the sequence of patches that you currently need to apply as of
this time. There was not a heck of a lot of evidence that anyone was
doing that, though, though I think a few people did, and towards the
end things changed very quickly as I committed patches in the series.
We certainly knew what each other were doing and not because of some
hidden off-list collaboration that we kept secret from the community -
we do talk every week, but almost all of our correspondence on those
patches was on-list.

I think it's an inherent peril of complicated patch sets that people
who are not intimately involved in what is going on will have trouble
following just because it takes a lot of work. Is anybody here
following what is going on on the postgres_fdw join pushdown thread?
There's only one patch to apply there right now (though there have
been as many as four at times in the past) and the people who are
actually working on it can follow along, but I'm not a bit surprised
if other people feel lost. It's hard to think that the cause of that
is anything other than "it's hard to find the time to get invested in
a patch that other people are already working hard and apparently
diligently on, especially if you're not personally interested in
seeing that patch get committed, but sometimes even if you are". For
example, I really want the work Fabien and Andres are doing on the
checkpointer to get committed this release. I am reading the emails,
but I haven't tried the patches and I probably won't. I don't have
time to be that involved in every patch. I'm trusting that whatever
Andres commits - which will probably be a whole lot more complex than
what Fabien initially did - will be the right thing to commit.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#29Peter Geoghegan
pg@heroku.com
In reply to: Robert Haas (#28)
Re: a raft of parallelism-related bug fixes

On Mon, Feb 8, 2016 at 12:18 PM, Robert Haas <robertmhaas@gmail.com> wrote:

So, there may be a person who knows how to do all of that
work and get it done in a reasonable time frame and also knows how to
make sure that everybody has the opportunity to be as involved in the
process as they want to be and that there are no bugs or controversial
design decisions, but I am not that person. I am doing my best.

To be more specific, I thought it was really hard to test parallel
sequential scan a few months ago, because there were so many threads
and so many dependencies. I appreciate that we now use git
format-patch patch series for complicated stuff these days, but it's
important to make it clear how everything fits together. That's
actually what I was thinking about when I said we need to be clear on
how things fit together from the CF app patch page, because there
doesn't seem to be a culture of being particular about that, having
good "annotations", etc.

I agree that you had to be pretty deeply involved in that thread to
follow everything that was going on. But it's not entirely fair to
say that it was impossible for anyone else to get involved.

All that I wanted to do was look at EXPLAIN ANALYZE output that showed
a parallel seq scan on my laptop, simply because I wanted to see a
cool thing happen. I had to complain about it [1] to get clarification
from you [2].

I accept that this might have been a somewhat isolated incident (that
I couldn't easily get *at least* a little instant gratification), but
it still should be avoided. You've accused me of burying the lead
plenty of times. Don't tell me that it was too hard to prominently
place those details somewhere where I or any other contributor could
reasonably expect to find them, like the CF app page, or a wiki page
that is maintained on an ongoing basis (and linked to at the start of
each thread). If I said that that was too much to you, you'd probably
shout at me. If I persisted, you wouldn't commit my patch, and for me
that probably means it's DOA.

I don't think I'm asking for much here.

[1] /messages/by-id/CAM3SWZSefE4uQk3r_3gwpfDWWtT3P51SceVsL4=g8v_mE2Abtg@mail.gmail.com
[2] /messages/by-id/CA+TgmoartTF8eptBhiNwxUkfkctsFc7WtZFhGEGQywE8e2vCmg@mail.gmail.com
--
Peter Geoghegan


#30Joshua D. Drake
jd@commandprompt.com
In reply to: Peter Geoghegan (#29)
Re: a raft of parallelism-related bug fixes

On 02/08/2016 01:11 PM, Peter Geoghegan wrote:

On Mon, Feb 8, 2016 at 12:18 PM, Robert Haas <robertmhaas@gmail.com> wrote:

I accept that this might have been a somewhat isolated incident (that
I couldn't easily get *at least* a little instant gratification), but
it still should be avoided. You've accused me of burying the lead
plenty of times. Don't tell me that it was too hard to prominently
place those details somewhere where I or any other contributor could
reasonably expect to find them, like the CF app page, or a wiki page
that is maintained on an ongoing basis (and linked to at the start of
each thread). If I said that that was too much to you, you'd probably
shout at me. If I persisted, you wouldn't commit my patch, and for me
that probably means it's DOA.

I don't think I'm asking for much here.

[1] /messages/by-id/CAM3SWZSefE4uQk3r_3gwpfDWWtT3P51SceVsL4=g8v_mE2Abtg@mail.gmail.com
[2] /messages/by-id/CA+TgmoartTF8eptBhiNwxUkfkctsFc7WtZFhGEGQywE8e2vCmg@mail.gmail.com

This part of the thread seems like something that should be a new thread
about how to write patches. I agree that having large-feature patches
discussed in depth on a maintained wiki page would be awesome.
Creating that knowledge base without having to troll through code would
be priceless in value.

Sincerely,

JD

--
Command Prompt, Inc. http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.


#31Robert Haas
robertmhaas@gmail.com
In reply to: Peter Geoghegan (#29)
Re: a raft of parallelism-related bug fixes

On Mon, Feb 8, 2016 at 4:11 PM, Peter Geoghegan <pg@heroku.com> wrote:

All that I wanted to do was look at EXPLAIN ANALYZE output that showed
a parallel seq scan on my laptop, simply because I wanted to see a
cool thing happen. I had to complain about it [1] to get clarification
from you [2].

I accept that this might have been a somewhat isolated incident (that
I couldn't easily get *at least* a little instant gratification), but
it still should be avoided. You've accused me of burying the lead
plenty of times. Don't tell me that it was too hard to prominently
place those details somewhere where I or any other contributor could
reasonably expect to find them, like the CF app page, or a wiki page
that is maintained on an ongoing basis (and linked to at the start of
each thread). If I said that that was too much to you, you'd probably
shout at me. If I persisted, you wouldn't commit my patch, and for me
that probably means it's DOA.

I don't think I'm asking for much here.

I don't think you are asking for too much; what I think is that Amit
and I were trying to do exactly the thing you asked for, and mostly
did. On March 20th, Amit posted version 11 of the sequential scan
patch, and included directions about the order in which to apply the
patches:

/messages/by-id/CAA4eK1JSSonzKSN=L-DWuCEWdLqkbMUjvfpE3fGW2tn2zPo2RQ@mail.gmail.com

On March 25th, Amit posted version 12 of the sequential scan patch,
and again included directions about which patches to apply:

/messages/by-id/CAA4eK1L50Y0Y1OGt_DH2eOUyQ-rQCnPvJBOon2PcGjq+1byi4w@mail.gmail.com

On March 27th, Amit posted version 13 of the sequential scan patch,
which did not include those directions:

/messages/by-id/CAA4eK1LFR8sR9viUpLPMKRqUVcRhEFDjSz1019rpwgjYftrXeQ@mail.gmail.com

While perhaps Amit might have included directions again, I think it's
pretty reasonable that he felt that it might not be entirely necessary
to do so given that he had already done it twice in the last week.
This was still the state of affairs when you asked your question on
April 20th. Two days after you asked that question, Amit posted
version 14 of the patch, and again included directions about what
patches to apply:

/messages/by-id/CAA4eK1JLv+2y1AwjhsQPFisKhBF7jWF_Nzirmzyno9uPBRCpGw@mail.gmail.com

Far from the negligence that you seem to be implying, I think Amit was
remarkably diligent about providing these kinds of updates. I
admittedly didn't duplicate those same updates on the parallel
mode/contexts thread to which you replied, but that's partly because I
would often whack around that patch first and then Amit would adjust
his patch to cope with my changes after the fact. That doesn't seem
to have been the case in this particular example, but if this is the
closest thing you can come up with to a process failure during the
development of parallel query, I'm not going to be sad about it: I'm
going to have a beer. Seriously: we worked really hard at this.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#32Peter Geoghegan
pg@heroku.com
In reply to: Robert Haas (#31)
Re: a raft of parallelism-related bug fixes

On Mon, Feb 8, 2016 at 1:45 PM, Robert Haas <robertmhaas@gmail.com> wrote:

Far from the negligence that you seem to be implying, I think Amit was
remarkably diligent about providing these kinds of updates.

I don't think I remotely implied negligence. That word has very severe
connotations (think "criminal negligence") that are far from what I
intended.

I admittedly didn't duplicate those same updates on the parallel
mode/contexts thread to which you replied, but that's partly because I
would often whack around that patch first and then Amit would adjust
his patch to cope with my changes after the fact. That doesn't seem
to have been the case in this particular example, but if this is the
closest thing you can come up with to a process failure during the
development of parallel query, I'm not going to be sad about it: I'm
going to have a beer. Seriously: we worked really hard at this.

I don't want to get stuck on that one example, which I acknowledged
might not be representative when I raised it. I'm not really talking
about parallel query in particular anyway. I'm mostly arguing for a
consistent way to get instructions on how to at least build the patch,
where that might be warranted.

The CF app is one way. Another good way is: As long as we're using a
patch series, be explicit about what goes where in the commit message.
Have message-id references. That sort of thing. I already try to do
that. That's all.

Thank you (and Amit) for working really hard on parallelism.

--
Peter Geoghegan


#33Robert Haas
robertmhaas@gmail.com
In reply to: Peter Geoghegan (#32)
Re: a raft of parallelism-related bug fixes

On Mon, Feb 8, 2016 at 4:54 PM, Peter Geoghegan <pg@heroku.com> wrote:

On Mon, Feb 8, 2016 at 1:45 PM, Robert Haas <robertmhaas@gmail.com> wrote:

Far from the negligence that you seem to be implying, I think Amit was
remarkably diligent about providing these kinds of updates.

I don't think I remotely implied negligence. That word has very severe
connotations (think "criminal negligence") that are far from what I
intended.

OK, sorry, I think I misread your tone.

I don't want to get stuck on that one example, which I acknowledged
might not be representative when I raised it. I'm not really talking
about parallel query in particular anyway. I'm mostly arguing for a
consistent way to get instructions on how to at least build the patch,
where that might be warranted.

The CF app is one way. Another good way is: As long as we're using a
patch series, be explicit about what goes where in the commit message.
Have message-id references. That sort of thing. I already try to do
that. That's all.

Yeah, me too. Generally, although with some exceptions, my practice
is to keep reposting the whole patch stack, so that everything is in
one email. In this particular case, though, there were patches from
me and patches from Amit, so that was harder to do. I wasn't using
his patches to test my patches; I had other test code for that. He
was using my patches as a base for his patches, but linked to them
instead of reposting them. That's an unusually complicated scenario,
though: it's pretty rare around here to have two developers working
together on something as closely as Amit and I did on those patches.

Thank you (and Amit) for working really hard on parallelism.

Thanks.

By the way, it bears saying, or repeating if I've said it before, that
although most of the parallelism code that has been committed was
written by me, Amit has made an absolutely invaluable contribution to
parallel query, and it wouldn't be committed today or maybe ever
without that contribution. In addition to those parts of the code
that were committed as he wrote them, he prototyped quite a number of
things that I ended up rewriting, reviewed a ton of code that I wrote
and found bugs in it, wrote numerous bits and pieces of test code, and
generally put up with an absolutely insane level of me nitpicking his
work, breaking it by committing pieces of it or committing different
pieces that replaced pieces he had, demanding repeated rebases on
short time scales, and generally beating him up in just about every
conceivable way. I am deeply appreciative of him being willing to
jump into this project, do a ton of work, and put up with me.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#34Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#20)
Re: a raft of parallelism-related bug fixes

Hi Robert,

On 2016-02-08 13:45:37 -0500, Robert Haas wrote:

I realize that this stuff has all been brewing long, and that there's
still a lot to do. So you gotta keep moving. And I'm not sure that
there's anything wrong or if there's any actually better approach. But
pushing an unreviewed, complex patch, that originated in a thread
originally about different relatively small/mundane items, for a
contentious issue, a few days after the initial post. Hm. Not sure how
you'd react if you weren't the author.

Probably not very well. Do you want me to revert it?

No. I want(ed) to express that I am not comfortable with how this got
in. My aim wasn't to generate a flurry of responses with everybody
piling on, or anything like that. But it's unfortunately hard to
avoid. I wish I knew a way, besides only sending private mails. Which I
don't think is a great approach either.

I do agree that we need something to tackle this problem, and that this
quite possibly is the least bad way to do this. And certainly the only
one that's been implemented and posted with any degree of completeness.

But even given the last paragraph, posting a complex new patch in a
somewhat related thread, and then pushing it 5 days later is pretty darn
quick.

I mean, look. [explanation why we need the infrastructure]. Do I really
think anybody was going to spend the time to understand deadlock.c
well enough to verify my changes? No, I don't. What I think would
have happened is that the patch would have sat around like an
albatross around my neck - totally ignored by everyone - until the end
of the last CF, and then the discussion would have gone one of three
ways:

Yes, believe me, I really get that. It's awfully hard to get substantial
review for pieces of code that require a lot of context.

But I think posting this patch in a new thread, posting a message that
you're intending to commit unless somebody protests with substantial
arguments and/or a timeline for review, and then waiting a few days, is
something that should be done for a major piece of new infrastructure,
especially when it's somewhat controversial.

This doesn't just affect parallel execution; it affects one of the
least understood parts of the postgres code, and one where hard-to-find
bugs, likely to trigger only in production, are to be expected.

And, by the way, the patch, aside from the deadlock.c portion, was
posted back in October, admittedly without much fanfare, but nobody
reviewed that or any other patch on this thread.

I think it's unrealistic to expect random patches without a commitfest
entry, posted somewhere deep in a thread, to get a review when there
are so many open commitfest entries that haven't gotten feedback, and
which we are supposed to look at.

If I'd waited for those reviews to come in, parallel query would not
be committed now, nor probably in 9.6, nor probably in 9.7 or 9.8
either. The whole project would just be impossible on its face.

Yes, that's a problem. But you're not the only one facing it, and you've
argued hard against such an approach in some other cases.

I think it's myopic to say "well, but this patch might have bugs".
Very true. But also, all the other parallelism patches that are
already committed or that are still under review but which can't be
properly tested without this patch might have bugs, too, so you've got
to weigh the risk that this patch might get better if I wait longer to
commit it against the possibility that not having committed it reduces
the chances of finding bugs elsewhere. I don't want it to seem like
I'm forcing this down the community's throat - I don't have a right to
do that, and I will certainly revert this patch if that is the
consensus. But that is not what I think best myself. What I think
would be better is to (1) make an effort to get the buildfarm testing
which this patch enables up and running as soon as possible and (2)
for somebody to read over the committed code and raise any issues that
they find. Or, for that matter, to read over the committed code for
any of the *other* parallelism patches and raise any issues that they
find with *that* code. There's certainly scads of code here and this
is far from the only bit that might have bugs.

I think you are walking, and *have to* walk, a very thin line here. I agree
that realistically there's just nobody with the bandwidth to keep up
with a fully loaded Robert. Not without ignoring their own stuff at
least. And I think the importance of what you're building means we need
to be flexible. But I think that thin line in turn means that you have
to be *doubly* careful about communication. I.e. post new infrastructure
to new threads, "warn" that you're intending to commit something
potentially needing debate/review, etc.

Oh: another thing that I would like to do is commit the isolation
tests I wrote for the deadlock detector a while back, which nobody has
reviewed either, though Tom and Alvaro seemed reasonably positive
about the concept.

I think adding new regression tests should have a barrier to commit
that's about two orders of magnitude lower than something like group
locks. I mean, the worst that they could do is flap around for some
reason, or take a bit too long. So please, please go ahead.

Greetings,

Andres Freund

#35Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#28)
Re: a raft of parallelism-related bug fixes

On 2016-02-08 15:18:13 -0500, Robert Haas wrote:

I agree that you had to be pretty deeply involved in that thread to
follow everything that was going on. But it's not entirely fair to
say that it was impossible for anyone else to get involved. Both
Amit and I, mostly Amit, posted directions at various times saying:
here is the sequence of patches that you currently need to apply as of
this time. There was not a heck of a lot of evidence that anyone was
doing that, though I think a few people did, and towards the
end things changed very quickly as I committed patches in the series.
We certainly knew what each other were doing and not because of some
hidden off-list collaboration that we kept secret from the community -
we do talk every week, but almost all of our correspondence on those
patches was on-list.

I think having a public git tree, that contains the current state, is
greatly helpful for that. Just announce that you're going to screw
wildly with history, and that you're not going to be terribly careful
about commit messages. That means observers can just do a fetch and a
reset --hard to see the absolutely latest and greatest. By all means
post a series to the list every now and then, but I think for minor
changes it's perfectly sane to say 'pull to see the fixups for the
issues you noticed'.
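
A concrete sketch of what that looks like from an observer's side; the
remote name "wip" and the branch are just placeholders for whatever
tree the author announces:

$ git remote add wip <clone URL of the announced tree>
$ git fetch wip
# reset --hard discards local changes and follows the rewritten history
$ git reset --hard wip/<branch>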

I think it's an inherent peril of complicated patch sets that people
who are not intimately involved in what is going on will have trouble
following just because it takes a lot of work.

True. But it becomes doubly hard if there's no up-to-date high level
design overview somewhere outside $sizeable_brain. I know it sucks to
write these, believe me. Especially because one definitely feels that
nobody is reading those.

Greetings,

Andres Freund

#36Peter Geoghegan
pg@heroku.com
In reply to: Andres Freund (#35)
Re: a raft of parallelism-related bug fixes

On Mon, Feb 8, 2016 at 2:35 PM, Andres Freund <andres@anarazel.de> wrote:

I think having a public git tree, that contains the current state, is
greatly helpful for that. Just announce that you're going to screw
wildly with history, and that you're not going to be terribly careful
about commit messages. That means observers can just do a fetch and a
reset --hard to see the absolutely latest and greatest. By all means
post a series to the list every now and then, but I think for minor
changes it's perfectly sane to say 'pull to see the fixups for the
issues you noticed'.

I would really like for there to be a way to do that more often. It
would be a significant time saver, because it removes problems with
minor bitrot.

--
Peter Geoghegan

#37Robert Haas
robertmhaas@gmail.com
In reply to: Andres Freund (#34)
Re: a raft of parallelism-related bug fixes

On Mon, Feb 8, 2016 at 5:27 PM, Andres Freund <andres@anarazel.de> wrote:

contentious issue, a few days after the initial post. Hm. Not sure how
you'd react if you weren't the author.

Probably not very well. Do you want me to revert it?

No. I want(ed) to express that I am not comfortable with how this got
in. My aim wasn't to generate a flurry of responses with everybody
piling on, or anything like that. But it's unfortunately hard to
avoid. I wish I knew a way, besides only sending private mails. Which I
don't think is a great approach either.

I do agree that we need something to tackle this problem, and that this
quite possibly is the least bad way to do this. And certainly the only
one that's been implemented and posted with any degree of completeness.

But even given the last paragraph, posting a complex new patch in a
somewhat related thread, and then pushing it 5 days later is pretty darn
quick.

Sorry. I understand your discomfort, and you're probably right. I'll
try to handle it better next time. I think my frustration with the
process got the better of me a little bit here. This patch may very
well not be perfect, but it's sure as heck better than doing nothing,
and if I'd gone out of my way to say "hey, everybody, here's a patch
that you might want to object to" I'm sure I could have found some
volunteers to do just that. But, you know, that's not really what I
want. What I want is somebody to do a detailed review and help me fix
whatever problems the patch may have. And ideally, I'd like that
person to understand that you can't have parallel query without doing
something in this area - which I think you do, but certainly not
everybody probably did - and that a lot of simplistic, non-invasive
ideas for how to handle this are going to be utterly inadequate in
complex cases. Unless you or Noah want to take a hand, I don't expect
to get that sort of review. Now, that having been said, I think your
frustration with the way I handled it is somewhat justified, and since
you are not arguing for a revert I'm not sure what I can do except try
not to let my frustration get in the way next time. Which I will try
to do.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#38Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#37)
Re: a raft of parallelism-related bug fixes

Hi!

Thanks for the answer. Sounds good.

On 2016-02-08 18:47:18 -0500, Robert Haas wrote:

and if I'd gone out of my way to say "hey, everybody, here's a patch
that you might want to object to" I'm sure I could have found some
volunteers to do just that. But, you know, that's not really what I
want.

Sometimes I wonder if three shooting-from-the-hip answers shouldn't cost
a jog around the block or some such (of which I'm sometimes guilty as
well!). That wouldn't just help the on-list volume, but also our
collective health ;)

Unless you or Noah want to take a hand, I don't expect to get that
sort of review. Now, that having been said, I think your frustration
with the way I handled it is somewhat justified, and since you are not
arguing for a revert I'm not sure what I can do except try not to let
my frustration get in the way next time. Which I will try to do.

FWIW, I do hope to put more time into reviewing parallelism stuff in the
coming weeks. It's hard to balance all that one likes to do.

- Andres

#39Noah Misch
noah@leadboat.com
In reply to: Robert Haas (#26)
1 attachment(s)
postgres_fdw vs. force_parallel_mode on ppc

On Mon, Feb 08, 2016 at 02:49:27PM -0500, Robert Haas wrote:

Well, what I've done is push code into the buildfarm that will allow
us to do *the most exhaustive* testing that I know how to do in an
automated fashion, which is to create a file that says this:

force_parallel_mode=regress
max_parallel_degree=2

And then run this: make check-world TEMP_CONFIG=/path/to/aforementioned/file

Now, that is not going to find bugs in the deadlock.c portion of the
group locking patch, but it's been wildly successful in finding bugs
in other parts of the parallelism code, and there might well be a few
more that we haven't found yet, which is why I'm hoping that we'll get
this procedure running regularly either on all buildfarm machines, or
on some subset of them, or on new animals that just do this.
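
Spelled out as commands, that recipe amounts to something like the
following (the path of the extra config file is only illustrative):

$ cat > /tmp/parallel.conf <<EOF
force_parallel_mode=regress
max_parallel_degree=2
EOF
$ make check-world TEMP_CONFIG=/tmp/parallel.conf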

I configured a copy of animal "mandrill" that way and launched a test run.
The postgres_fdw suite failed as attached. A manual "make -C contrib
installcheck" fails the same way on a ppc64 GNU/Linux box, but it passes on
x86_64 and aarch64. Since contrib test suites don't recognize TEMP_CONFIG,
check-world passes everywhere.

Attachments:

contrib-install-check-C.log.gz (application/x-gunzip)
��c���30���k��Bro���o7�y����y!�j��^�($��#{���]�z��={�� �)@	
R�
����{�7AB8���t�Agty�q��h�4~|�c�J��Q����@�����A�����'w��Ie&-3�@P9���d����xj�1�G;��L�9�))wP0D�^������x���Z����o���p�SPhR��8�#;�G%��EUW\�5Wl�[se��5B�}�t[j��i�U��oR���Z�";N�P��]������9��r��y��a�t
B0�!n��2E�k7%�Q�����&��nJhjO]3�Is'A�CF��Zw��6=*a���	�j$oi��cr���t�E��%y��y�T���{�����_����'b`������VV�.r��VQ$;�e�biN���d~��F��ttt��!��L+��v����"��Og�����A�j���J��\�c���94�f������������N�h6v�����I���=���8@�f���:N$2<[Y�����r`zXB`���/������YxT�Z�h7.����rh���`p��k�+�A
�KQ
r��@<��d5��O�x����n
�P�S��.j<���E�Y����`"��=X�^��#���J5�����s�&�%A��l�NvkI�<���R�~����`��B�}�p4�/�3�~�u���
X�d�j�z����GA-e	�������B��Ga���|j��@�^)h<��#������kB����h+
�0N��U�;���Wk�oh��c����������}M���p&(�N��p�x�6�4�7\^1h1m��C��K��l8ZO����5	�p�
e��f�LxC��!�SA*N�T�`(�~����A�h����c�$(X���F���W�������%1��p���3����Lb�9p����?1���'f���T���&����~����k��V��i %L�U��i	�W���&�@���3�z��ETY}b�V�\����N�4��:��*M�T�����.1[���B*O�V�W,���'f���V��	�C*VCb&|���:3�lJT��	p
��^w�7�&��������r[�M�jyu�����#��a+��#����ppy:���������n� ��=�lh3Pfnk��re������a��V~W��	m�!~����������Q���e����
q�F0���Jw Ll��`����A��9��v���*����e6���������r�s�M$�&�tpL��6z�J�L�����c%������`Pp��Dvz����eAx�F���u9�������ldzN<���Vz�v�8	�Q|�a���8�?��2�	��w����x�y���(��$���0��&*���~�C��7o^���g���w�I��H��
�G����{��u:y�V����/��G����u��]5e9�xn�� I��61������^���,<F�����]����T�>#-<����� �~�TY�@!�I�v��U�#��*����.�X �L5V�z(�y��z�]������u��
�75�rE~��5s�}{k���f��ha��0��Xq�V����M���?v6jFq�������K����������/�$da���?�����<}����W���}���?��LOC��F�T�DUa�W�^5�
SW��,�/�)��x�4Z��#e��M�����sv>rx����X�N�
��K����I7�����M:J7�_�S��C���U��\�J��r�r
��7�*�c�)�|W9N����u�[s�����5�cx����
��N��Tz����$n�96.��G��e�.�d����9�q'9���3!V���V�����9�6��I��I����4�x�#��Q�y�����]C/�b��=�1�>��a��XUg$Q�#����3:FogI8��/
�0��:��������g��(h�$����'��SCO�Kt}g�A����������~TgUU�U�$Z���$^��#=y��

N����	0��R�CR��CR`���H*���ac$F���&�_b7+}� �i�	�'�p1�Y�fl���&�X_�an����)��3�k4S~KF`��������\y���i�������i�O�I��1����+�4�Y���=/�/����+�=��2�Wi��>S{Y��_����J�9q�%g�'�&��R�����\P���c��a�m��c\�~�e�>��8"���S\
!�bm�!�������N�4�/���"����)*9h������h��k���8�ka�c"�)�g2�[*�Q�y��jc�H+b��
����������1oA<��O<A���5��G��Y/)Y��a+t����mE��Jkx��}���gL����q.��5V��!iG~������J��u��q{����O�a���\4Oca�]��)�CN�����t_|�L$c'��bc���wlI��5;�i4L,T�
"�s-a�g�N�P�R��0�"�0��������Y<�����OL�`�j�Y����|��R�����-7������5����hWN�Z��B�b`PE��^�P�<�I�s���
�H�`�Y`^��xh5	*���5Qo�~^�����k.�<��C�#(\Wk�����8�����y�����J��V�Y�R����(���*V�J4�&q����
�BL�h�:e3����i��r7cp�$����-��EWJjF\��6�`���������'�p3��)�X�a��B��vs^=!i����Lr������`	�LnA��2g`�����
�����c\��'N��j���d.7��f���sa�]��f���a�`�]f#���QK���^8���=V;��h���������tY'D��>�D�|�N�D�����:��;�Bx�Y���!n������8�}7��7��7��7��7��7��7��7��7��7��7��7��W�z��]�������he����Q�g=Z��p����H65Z�h��}���G��h-����b��#�)�k�d|��;��\�7�?�g1g�~m*K:���"���>��W���*�Q^�'�[`.��&'l�T�����TO2,8�BVT�,d
���*Y2O�L���~n�������Y�Wu}��HW^�L�H"�d��&��>t����uf���75�h<�A3,���"=���I�K���lomY�fin�m''L$`����_`�c������
��p�_���FE:��~�d�g�8���h�t�F�mQ���L���	WA�T������>b���&�+O�,c��Ux�����+���T2�

H/��!�#��N��B���#���]�>�	�)b	����7������B��d_Q�����T�6-D��fL�<�t
o���������H�uV����E@v|I%<�X�Ij���G[k�5mJp5^]��z����<���$��%���p���zM*xS�Y/ALd\3Y��/B�J$�\��m|����,W�i0^��%��|~��Nrt;��<
K)b����2���(ow>�G�9�
�6�����dN_����a^B��`�f��_��y��t�9A�a��������/u������GF�	�	}�
=X8�##�!�����1��i+�N�i�q]uB�s����'qtc{�����'� d��`�����	����H5��1I�i\��3�) xS~�O_��HA?~�<����w��46|��z!��:���8��qk0�(���7���$%+wj�����T�>��P�L�&��4���Pl��jZ�@��3���V����O���i$����o^w^>i���:��O����g��{	��!�-�\)`i���������������2�*����2��3M�����u�
�I5V��Z�J����2��z�������2Zf����8��D����FTy�*iP�Iby3O���4MW�y$���H�Q0(i>Q��G'5�
�xTQD ��)��������i�����1��4K�Ef��$�U5��9}�� cf��d�����AG���g�l��J���>?J �U��Z��4k�j��7�CR�3�ar���A�B��"���l����a�ER�����vTE������o@*�0uLb�X���" O�V'���|��0������CSU,'��g�E���G7__<��re�Nh/7�^%�W���������'A	�k���5y�����lt�V�vi������������=�x����/��o_��������������}��������|%�����?0�z���o�$��x�&�^��e�SE��.:G�Ax��.���D���V/|0&~�����o�7�t��Gs����M��^k�������O�3��I���)'S�\��S
20J�2}�yy��~T�q�b���N������1?[i+����l������~����1���8���0��N��WG��Y�v���v��654_��e����m�zu��[ti����5�����'��}�����Y���
�w��������:�0H��ZTe��:��u054ndN���J��L%X�r�Nu�/��c�V� ����� ���`8^��Z:�������4�N�����C�q5C��Q�1D������Hg����=D��6`,~m�5�����{�����6��_�x��p�tt9v���\�lq������2j7����W5�(�f���hw��Ms@c���!9��_�]�����p��^������y?�Iy/��	�����{�����[?�����������c�����3<?�����{1~���S�i��<�z���v�?��g{O�_���q{l�'��Tl?�L�8�O�A�f�7~M&O�,������&���Ap�~������4��{Tb������<\����{�8���Y��|�U8:�����A���������@At$��V�4N��6��
r�����Ze���^�ZM`o�m�#�&�zArY����^g�F���k,EM�sh+5�@Q�&�&\�Z�c,]LJU�LX�&�+n����g<<�������Z���S�
�Z�p�9����B��D����������J���Z��VK��]�VZX�c�Mc�����:4�#������tta+_2]%���n�n��<�M2K\���T�1^��w�������
�c%<�
�B�,"��F�H���I�o����QdY?K��q^f&�=b_&�X$�@H�PL�	$@���&���Br���]��>���bY_}�Y���T9����'������MT���Vs�J��on��?���E��3���w/��?�����A�����m�F<#��������������K�OS��u���6�����:�����w�'����^]��>��B���6?E����S���������)�S�H�������q�m}�� �XC��W�|��S����mQ9Y�R1���p�>�S.<p���Y��\@q��B��&5BC�"��?��6�������iM����O��f��d��	��R���"i�����<|��XR
��������*I�����R�t��T#v;�p6?0��"}�
LJ�9l�tH��@���}t��tYL�vJ3n]��R��<�CoF)�`J�	��}�kJ���,�{����l>A����0c�}9g��}����n'�:W�7��H���8���#��O=.qw�A��n�H�,�F��J����)���f�q�����,4�m�;�!cw*q����.����vDnEO�K�/���P���������y����JQ�2F@��z�@����������*�^4���h�Uid1���Ww��5����F��b���A�C
j���_��iC����j:&Sw��N'��*���"����S/%��z>Q��G��8��03�bj8E�]��!)�O�S�km��S����?���t]d�4�i�#��l����-��a��d*�������3�E�.��~5���$�_O'
��_��L���l�%����f4�����n�(e�u�lg�,+R��x{�b
u;���
���W��t5�C���#SL�=m��BK}F��Py���B��F}=�8�P��	����	e��Din��4@�&�{��N�M� c7����
1y|�S��7��]�������:<_2r3������b���8n��IP�$��~��$"M"�������W����Et��5f4��F$�5��!3���A^I�j�Y�L������	��n�����!r��q�ZM�u����G��P3���)��{
����7F�z2	��/wAL/��A\MS<`���!�.F=�j"��V��j��y���c��4=zm1}��k|���)�fDF�&��A��EUi��5g�K��r
��ci���b� V�k�s�r����~���9������?1c���B?g++���n&��w�
�>�u.C}dB�w�54��Se��8L��HKu���������F�4�����5t�pY9uO
?������D0"��ki�
�	��o
�H���Su���Gw;�G�k��K��V�����-��}L�<��}TDTdG�p��|�F��|�s�`����T�����G3��"23\�����J�\�Q�	�iB��jRu;�N��q���P��=jaN?5�]XG���c��FRL9!NW� �
&<�4L��I�w��^m�ag��b"�!�6}C�hy�V����,$m����Ls��2~Tv;�����-�&]��t��"C�
�{}�
�C(�p���&��y�7`��������F^���z*�S_��Y���6`����<ZW�A0���I�?:�c����������g"����s�[��y��.G���5��v�S_d�w���Hn[*<�Kj�\�E�A�������x�P�#�z�.���S4��y��c=Ekdr;Y!�����?|��&��qJY��Y�u^�YZlp��[g�k9%!P���s���Y�<d�r�
e*��<�g�u�b��GP��H<U������>��e
G���\.ym�-J#�i����l(�� ����3k[��!IO ���@�����rG���e��,u������
/$|�s���kBSE�#��J��]O��O9���I=�,��1l���dI�"��Qw\
��[b��YNgs����q�n�{/�	��������A=�!���A���`)��Q���p06C��z���H�	N��0���@[
n4�SW'�xq<3�����9~:{6��������f��q�F�#<�]a�[!1�S���.U���mBjZO��K�s	�E=�B���v~�����������k��A����.��)��)9d���(
���4�����E���C�-�vu�0������V�)[�V
2nj�d3J���W�^���K������c{��(u���@o�>�(�����o+���A������\��~�s����-�V�$�`D��c��������!l�N�k��D������m�����iP:�?�i�q7�V�x�
���M�;�1��E�K�	f��Q70�dw�z���c�7��mW������~!�h�[�0��>�5wQhj����"Xd�Z�=l�ZA�E�U�����ge'���������Y.~���N.���e������V��5�+�%SlND��~+\��k��7(�N����>���Q�7%l�Bz��dF9��$i&���Yh�Oin,\�^�,#��,�N��tm�H���J�xc�
�c;��������v�����Ka��.GY$q�d���K_�g�4�F���+�?C���gL1�~�zFJ����n��'�Q��O��H�7_n~�
��2��;������.5�g*��
�,qB�d@
�,iYd%�k*)qE�����p9���ClS5�`��,��>���Qd��t���V���\���������M�kE�\��;��KEu[��d����w������yI�!���E^l7���"��o��n���}?�d���}����)�C���=I��%�=�����=���-,ZMF�#�D�F27G�Ck|��$D�M�(d�y���}j���u;������u�J^�S�.?G���5%`P���n�|'����:���5V��a�H�V^�wP4a��o9FW�������{�����
v;T!�)������q�CV��f3�w%Y�3U��7�1i�H.���y���|�Z����c�wZ�lL�v���&�zQh����|���y)�a�����������"��4M��E�m����7V�K2���g��Ao���uR%+�g���S�M�'�^}p	8���#	��-�����|���~�������cxq�9���c�v:_����s��~�9��vX����"T{S���Y��M�|������V�[w��_����<�����������v�v�����zj��i���8�6.�"�^��{)��(�q0C��Q0�!���;��Vvaec5��|���F
�YA;��51�u�5��O�I����?������<fm��;��MA�@���'h�r��S<0r����:�!�i�I����O���g��=
�?c����^~6X�������`A#�s���	},
��������>|��G=�K�������F@`�j�o.�oA����\�F������� �@�����	�����r�r��Q}z��<��G<��������v�O@�P�7�����f����;@�I����n���7���;F/��=�����,�Q�B^���n5�3�0�w�?��������)���V���\�m��G|��m\�
"
�����3���IXQ�����	+
�N���VE�����.��[�`�Q<�C�R	���h2�Al&)�%r�3�@L���6�ly�����u�����m���n�r���UB�R����LV���xZ��*�b��O�����:
���_.��������C�?��x���?<�y�%�!��'�G��1x�]��>��c>}[��:��N�h�$h���O��}r��WJ�����i� �o �^v&�)���]�.6�M�L�}����@lE���V��M�l��k�l�~�v@�@�@�c���|9��D���<?+x�}����/���o�������n�]�ts����e#���w���pC���O>o�`e����
+O9���������3�������o~�E_��������*��@��Nc.� X&#�Y���&��t:��zs���W��<��>�V��)�k��Ii�#�)L.���A{������{n��ha'���o��,�W������U��:��E��,��6�8��bQ����2Dd�:|8<{z��.��i��]\^���Z4�(��
zZ��W����������^�7��I<����x3'���'3b��a�]���v������������o3{N7	�	$!���n$q !��?Kv������Uq����%�Y����O&��P"oo�L������D���j�D.���FS��zX��P�o�Y@-a;���j�����f"O��,8��=�_L]�=�j�g�R���
<`X\0���A�	�p�,�����������������Z�r����,���$��`Cu]`Q��������>���c\?����3��B�23�r�u���CIJ#��o0%�v�O�p�9����x��<��6��RK�t,5'>f
E8eW�� ���:P��v����p�V�C�O���is�F_���7�� �xD���L���������|R$�k&s?U��0P����V.�X[������X���_�q�*�my�g�����<UX�\8r��t��@�I����SY�O���KI)`�iQ������K�7�85�c�$n�%�����q�
�S?��.+j�p����COO+V29���qzr�2��}s;���8e[����h�����A#���`���������2���}�
���
��bz�F�p;I
�m��x	Y�	4;�9*�7P��n�����F\|4Gj�:"�NkL�P��J��bM2S� ^���,@"�I��1�jpeY�����6�E����e�9*uO*�$<��h=�Zxw*�A�k�O�0L�#�[acB	�������HX0�:��sL]7��jb������������~�������!��$�6��?o��������m9!�Le!6/l�T��
'E�[}!�@�4�����>�K4�rt�*�)��8!G�;��|<���o���o��!����8?��U�t;��q��q��v����k"%�����9{4S�`�*'��v�V()DF��
�ncm��t�c���%~
M0<n�{r;
��e<�IS����0�������0�e�U��T��#pS%_���F��#���|����b���W��$�.�55��+���`���Z��pR~rr,��x��F������N8�S����N3n,�*i�
w�N��(�D�aS�HV�a�iE'pK@T�b��s1TvZYN��9m9��3G\*V�5��`����i�����]w��$X��Q�����tg�����k�������a�5)B��0�v�4V��j(�k���|>�i�'xcG�]V��K�:8��|���.pT�'V���o6�g&��������l6�7d�fv2��WO:�'T�&Kq�a��w'�h�P���*"��>���2���xC�������i������j�-�}��@6�fLls�#J,%���c��X�|9� I3�.�z�����h"1�n3��8����98���g���{{C{��A���[BEu��S<���X���`c��>�1��
�2��sm���g6?�����s�z�7�����������y?��`�9_GN,�X�����	���x��J����v��&�$#r ���]���T�ln@i����YN4X6fd����S]2K9�|��F����q6��
�-�d0���U������T�`gJ��TZFMy��dC���./'G.#4no��)qX
&���p%�Z��������a��t�VSyX�(~<f�=��w����y���a���&1�8'������)��X�j�$���]n��3g)?G1�1�d����f�������N�.���b�XN����R'���V	��a��7!��T���M�)C�cBs��Uu���+����-�O~
����L����ri��736�"C27��k�$ie�n	��t��4�FnP��2%`]A��m�\Y��� h"'���i����!ci��aS���`��`GL;G1qt���=����W��t��S����b<�	�	�Q�^���wn���2�1"���Y���"���������V�&��	����y�qa�\�T�Fr1r��p
�*������������&������[�6QT��/�!blx��3]�:::]q��:�w;�N�%����O-�H�F���EM��%t\�'�FY
�|y�>"�
�$$3��F�����v�"�����Y�!��?=M��������{��T�L�T,������fq�)��B���H3���
I+~��#�T���(H�"��%�[J�T��1����
����]nT�tH^T��b����;��F,�P���[�+��z��*n�_�3���A%���GQ@��i����2�^�	"�6W��Y��"YUs�"R5s��4�U�_4�Q
1}/����8Q��J�!#q=�s�.�����|�p�e(��"p%��0�1�*3����3�ka�\AkC����4_����8�}�Y�8��~
��q\�c����OP���r��@��08�y�������yE���I�>/�]s{���xh���n�����Y
25"b�!l�/�#�K��8XQ!m��n���6���6l��7>N:m�EJ�Q$X�����n-�������pO:LD��I�mO��S�lY����`Y�G���'�!]H��U���%���KP�
���L��+�tr�7zT�l)�;���>���9|2S���s�����������8��,mW�f��v��ReU��,N���u�se�rQiu1��e��K�v-�p~@{�D������C�!>���n��32�s�k5��%A_��/>XF���2c�0k[������[���i8,��E"��X��Y�6�q&��0�y��O6�����P����Sa������TX�����TX!����TX3�D`{*����D`{*���<��
k�*�lO������Sa
�YO�����'�Sa��&��[LG���7��rUk.�Zg��n
C��MZIl�.����_��p��N+q�eX���.��?F���+y����/�U2V�'M���$`��3���-���0e���R�r3��s`��x�ny%�vy%��WK�_^I��=�{�XM����o���=�v��jX�u=��)#
iS����e�x7j&�"������N:��7xJ"09�Xa����Y����@�J*i�<��+��t�;5�g��n7'd��:�Cs������=��J@E3h�����N4�z�2�����I
f��5a����f�4AB�E>��c��F���c��mm�b�M�I�Z�t�Z5�if�-�@��>�H?���2�$�	�.H(
�?��*�"n��q(0�HPG�)����6L�!�6��&Hb��4��Q�h��%�(h^����K4 @�����_'aF�j-�cg������h�\s�S�����XLs�f�������0[";��B|���	I�'��Y�u������l�G��J���g��y�v�J>y-f�#con!�T5$��Z�N�>,i��U��`�+�s^��>���������r}���W�!y�M�)8I�x�@������G�����(@����-�vR`����M�$H::,q`�<w~7a�M9u����8P��Z�y��k���(Z5��u�tDv���<P��<��R��P`�p^��ph9�1�����7�(����b	�k��b'���mtk���-9�D,;{�B���j���n�,��-=���;�0���	3��m��ixc�L|�`���y����-��R
9���X,!��-�y(�|�8�����;w�^
�r3'��([,bT����q��s�5�5W$m��F�U��X���.(;�.�63!*
�h�CdA0)v����a|�,H���@�f��R���%����sZ�Q����3_�nt��?N,�/�R?���t���j���!@��M��F�'�VI@�7�a�S��Kxl�y��Sy��ja��Y�2tu��\��no��FNMi�:�~rr��;R���wo���y���~90�HNd�?������L]@K�_��x��p\�a�)y��*S�p}*=,��7!�b�E��h��y��
���
2Nr���T�G�I��r ���y����B$���lN��f�*V�TSQ��N�5�x)�F0�|�5��M�!��Y/��2�i2y�K*�&WY:�F���=�bk8��'��
N���)���,���&�����F;DOFy4&P&�,�`�f6�QQdbny�
"V��/
8����=�0�e�%"'hMM,y��7I<�&.:�V`�6p0O��z�o]> l'��XF0�w{�h�;bQZ��v�6\��i��'���l��wH6���B ����*VH���QI��P����cJ�,`����u���E	��'��$�sy�i���x�j&��-�+#��5�����#�E��Y�1��b-;>���m�x�uQD����T�e�e��N^�
��
��gYF5(x��,�{��ckL�1��s��Gc���,�0r�N���i�g"�����CC
+��t3���;m&K��Hxs�|�^xN��x�� �a����b>�����raVmx��j�v���x��$g#��&��G�s	�X%6������x��?t�x�B8[ON�Q!*G�pc��������v+�2M�2����X���Ps8�`t��a�)�:�8�(jZ�D�-2���D-�u�|{[��>�VN�p��.n�([s9�.S� �UZ9���Ow�,�� d9�Ii�	�'cy)�Q����K��	o�8���"��]��]��]��$��~��v�iA�[�c��%�� �
����"�T,�T����}�.B��.B����8pL�]�y8�x�"]2#�����7G���u5WX��.���/S~ .H������d4}>p�D�SS;��*;S���J:��"���0�A0���i����l^�it<Z� ��qd��\2a�"�����n�����}<s����E��F1�*�?�'NU��Uq<��B���I�#��Q��t`����R���
�j
w��?	)c�Lg)=�Z���*������
]�������P�����y�b]��x���`~��#�0���Z��wNb�L�����l�C��
���i��m�R�����K>i�r����sGC���6�\�Q>��M�1>1�`?��������0gH��B~��5�I�%�`�7-��S��gHlS��,�Y����hSK�p�R���j
����;���F�����9�v	I��9:��;tVRP'5*��cA���P�IlL��������m��	���0Y�@�F�K��8���01�F��=������"�j����3���j�O`K2��
5��I�I9d�5�:}*���Gt�n�x4��u����E��O�z-�&����[��S���=_��&�.�c+��+K��$I��@Bj�N��:vI���N���C�v�:[ �G�����t�1'������~���/��d->�����`S��v���y�"C�d^V�1�k�e����R��|)�n�:��q�N�}.L��<�����9�\�+?`#9�5i�Q�UF�����Xh�r�P���e���K��F�xr5oyhg ���`�P��7�N��	�6k�h�@���T]������3����:��)�����0�)4���^�.�I�>�'J47G����E3�*�tc���z�Q�Di�����f�x���N�
�6���������b�s�H@U�HR\/�|��N����&�N��=�@*��O3lv�d%���i�i,m����M��������Rg����������n;�@��� |
����F��NF�l����U���Cj-��u!�����h:%h�6���7�fJv��&���b�1�"N'�0��i�	/�pn ���U�E�:'���U7�c�)�eFq����9�1�i��4�.+�6�����<*��f"�>]�y���������S�/��i�+	7d��(���e(���n��w�Nf���T���3�������g�US1][WHK?t2��RtM��o���@��N��jk�v0;���"D�V�6��j��d__�{��!nE�j�v[��0��IvZ�
�:VQ1U'u���<^����;n1�7���
�;��j�W�c���G�Y��;u�H����K����6A1���j�4��s/9Gfg���x%� ���"�*������8>B'�n]U����nR�{t��HT�%��'vB�����z��E��e�LG+$Y���q���c��Cl����(��_�+�&B�{��b�M}7���	wZ�%�����:����D}<x��Nv��P8I��xP�'��K�P���v��w&��-�-�no4��YK!���Qj�L��%�Bp4�0v%:�p�Vey������m��w�v���K�$0k�_*��\�:�y���}nr��h�s��'�9"}�-�zr��^��P��T��/����~�������L>Mwr�b���s���W�M�G��������#r���W��b;����;���jdN��H�-���SkmC�6���1���Z�����8!q�����I#�>��%x e-[�7+��s��2��LsV�j)v�����{g&/C��`E�x���M-X����n��'���n��
w�-f>1���1r�����|ob��\a��vQ����iwO:C
���.�j�����e?������N�)(�q�������B�Q������l����a�Wb
��l�&����h2�-��}{[�������c����9nO���:$���n�;M)[TV��H@+��8E+��z%�p��s��
�d�1�;����99�#cnf$��� ���Y"k����R�����\�8�xN��f����'���f&PC���!�z,���'4X��1�S7��<�P����g#��A�,u8B[h�p��y�p+�����,�f���1W��BWk�UN���4L��i��g;]�x~�qN1@Q2���n������c�t�]��*��Q���N�j�TI�M��x��`;�b�J���p:�������uc�����TjT�iP������X�O�.34�O��.;����������z|Bt�	M�!����&)
m�0i���8-/y��-~V6p2���73{V�� u�%�5l�3{g���].Q]���d��N�D<�g������|�a�z
�I�����,o73����h8pu����n3���=����h������-�Kw��W����v�O!��w����`!��5�����n��6�"U�.�*�Z6�K��G�m.�v�I��dr<U1y����t��O�����FhVC"��P���T����s(�����&,�
�������	����mouC��UH;["�7�����$T��������\{:f��r�h��m�J}�1��Etxs�7F����f�Be���t�� ���8�[K��i_(�Hoo�J/R`����)X�������sB��@
2�g��K����9��R ~�?��#�p5;��?��;'A /���g��6�g���X�g����R.��N��N_�N'�y�c6r�6mg�
b��9�2��QM�����`5���D����%���G�����1J��X�,�)G����$���Y�%C�F��g�0L|{P+��j�i<��-p5�C����������g��/&=+�z9����N�I���LrHsyX&-�����W�Le�i�Gm���4�V�7VUL�U�[m�M+�9u���3�1�ny4�s}��*�L)he�+��q�<��� 4��[����O���J�B5���O�N��,�O��,O%jgQ5�@�g����+�@�54�-�9��3��lyD�-�l����W�}�zV
'D!��#N���6�y�s�-V�<���q�9Tpmi������I�0����	� ���b�w�t����`��YU6�7�
ZX�~�O�N7+�FI�h�N?��=��jk�G�L���
�����<*[s��2[[�a�1�B���Bt��
�#aeA��������v�4��f�I-����!����%��/��aGX�S���[�`���Y��r��i���u����G�� �wi�(]/J=;���=}A���e����������(�0$����p���W!�F�o[{���>|~p�~����?z��+�?�P����=���!���Q��|�/D�L���gq!��]a	�D�n��*�$�p�`#�Z���G�[��f�d�pj����f��D�^IH��>���n��C�o�%����b����u���S^���;�&oF����L%N��[S�Qt������i7�w�~��=p�D�Di?#���Y"�6�L�Ix�<��p�^�~+v����!������`��keZ
�	��n��N���y��JO�TR7�p�D�u�p;}�: ���cI��]i����9G��v:�?�9<�������:/,�*)�?���U;0g�[G�~���	O%�f�k���y&k�R�*ui�X���Q8���������B�-�'�a;�0�7��eG[��v�f�iPP�X�\�1�Aa!��e/�9
�S�M�x��Cg|2�M��I�D�I�>Ny}
�7�QgI0yi����N�`aH�5�
�[1��6���n�+�H�xU����SH+g��A�2���������l5L��f�njD�E���Q������7�J��L�f5PD5F;��
�H�vKjhpkv�����W�G�]�W��Ln��c�y@b�X8��?N�V"M�*�L���Y�HP�s(�1������)��r>���������������{Au�Mg����A�yE��V/�-������[��/�1�������C��f{�4%=��b������&�j��hb�q��iD����2o�Q���j�(�T�������<�9L���*.�Kc����j�s|��1C;�`��=L�,�W1p���������a� Al\�ET]�*!���y��^X�$���
�5;����m��9z�t�7��U>?����a"u��$j��
�����UEB�Ev�G&����q<�Y���j!�(��%���a�A'���L�LFr�G�U_���I�wd�.�=��\��ck=�$-�
�����rg2��NE���E�=���v2
{�au��#=�(5J����:�e��;�^�����*�����������@�(7v|{��3c~�������hM���M�m`�i��v�DXj� ��'u}8b�BC7Xa9��H�XL3��������v�`D��H��7R�]Fm�B���a����aUB��C�65X��jD��8#�0�����
�Z�QZ���![A6uB������x����Q9�
`9��-��L����h������c��������8n��\�e�QtdMZ��,�
U���h��$r�-�����"UQ��i"��8< W�*_/���j�+b����������X�e���f3���RF�x��S����b�fCl9�,L�R'6djX��0����B,�������q���5w4�VK�Kk�����KixB���9m��*��+I�d2����n%7�*���e8�$t�k[>8X��P��.����j"���C�T�����4��hSF�67��(,$�Q9*p�T�C0�.7�$�Y����d�k�i<�
�)��n�)���S�6MV�f!Dg�q�
�QcCl4b$c���S`N����C2�eL��C�6{I'kO#���+6(�����fO�9���:za��@P�w��j���6�� 8S{i�������v)���1���p�V������c|RK�D�-%C�a�:�A��M�8����`���N
=o�A��Cf���q���vk8�	p_<r�|ZOT������x����nm���05����I���j%���X�n�fQ�FU�{�	����J{�\��t�{�>\�!����s0�9�����qw���qht�=�����ug��$����X�:VY���T�+��:�;T�'�&�4P+���;�
u�L��,�U
y�
�'��%�-c����VsJ���75*f��ZaKN�u��2g�zC��b�%<(�jn�h�%Ry{�Zy�T�z��D*oo�C+/�J`Y����H���sh�%R	������}�V��H�H����iZy�T��Zy�T��<�V^"��7����H���sh�%R	l��Vn8.����^�P���>�[w�������"]t�?��Wk_�qy>tx�IH�bA��L��9��Rr1����Z\��D{�Njz���;��%n��1)u��V�s��m-��ij�9KPE����U���f&���{R��
�8�^G:��u�����=���*a����H\�	����D��
�p�A-KP��9q���%K1���#��1���|iL�����3AE�>��nx����V�^7�[���hDcbW{.�1�r�K�����N/A��b�.����7�x96@ ��Z��e	L��U��$(r����a$�]<LS9q�e��B�����������)��,���Y[��Q>:��Ee()D�7D���u�l��joXz,fU��c�lW-�.
D F�c��m �B�F��d�`U3�+M��	�t	�0�5���k����V
F6Rnq���y�@+���J
U��4-��nCqcr�|1��l��E����S3��������2��-4���v���B��[2�DQ�D�;[��O��*+�:u��j��U���2����$NC}��X�2�L���Nf5�T�����bi��x�,�1�L�M7���T+$�Bi�q�6��1�h2S�hQ�#��������*&�i�u�1��sX��\h2j��l	0R�g8��`Y����������� ~�U���q��M����D��i�k����Oi�k����Oi�k�:��?�u�M�@��1�{mR�����I�|��W�M���Yx���������n���|�����6m����������AIB'�������q 
z^H�vP0�����	(�9�������H8[��� �������OY�^�����J�36iq�I�1�\���6�p���N6��0��D�)���i��CD>�+e9&�+u�l��ZC
�������%_Y����;l)�{d�)��b����BKR�C�Cv�
Ud�!�a� �Q��y���"�k����@��`�N2y����|J0;�H��@Z���HiT��/- �2,����[L&�/�f4]b�u��n\�%���*�?8���t��(�o�IA�+��b\
��Y�B�w�h�O�R��A] �?5�j6���J�W���Nq������I�� �7
3�����;^!�bX	�8�1�A�I��q6�N�4W��J�Le�xHx��v�6E�*���P*���a8����Q���#��
�-�$�������XY��Cf����jno������[�V�Jg�z"�)4Bg^#n/���H�U�}����(��
c
,��9f��ia����K`d�~���vG���C)��
��9N�h �>�e9��9;��v=�7��8�Dd��X��m�r7>
��Ml���0h2X��� ��$fX����&F����8����mv��Tm0�tq�q���>�y�R����1��������-w��j�'�S��n�F@�EQ.f��$�,�z���F�P9*�����8�k�E%�T���>m�cl-[u3,����W�2���&Ghe���R:��`z Wn
H����V���#������"O&��'G���?wmx�����t�H���s��,��L~����@�+�fr�V�z25�jdhA�����0�jk��g��.��f=�G6��q���[�p�?�PB����������X{�f]U�$U�0����sn��
X��n��M��!f�V*�L_�V�zN[��S������o�b�����!2Y��������������@����_C�|'��f��&����(F�1W���N������H��P"Nl�1�q�O�3�6s[5���-teB
l���.��%����pG��-�
y{s�'%��-����N]��E�s�O�*���S{�j�fj� ��u�<?�H��`jN'[�IU�F��n
����Z�P���Q'�����nBa��FM�t�V����H�6���U)����tp�5����r:8i��k���C6�p��Sx��PDS��
t2�����z��D��Z�4����`
eG/���t���'4m�]LO�F��*z�xDh��}e{��x�IO�2��5�@o,&'Qm5��|,L��bUw�p��&��X^�8�����oK�����k��-�Q�C��e�7��}\Oj����\},��cH����k�T&�Y����1K�zdx���
$s�������i�7��/��Po�!��c�P����,iV0�����x�
�U
d��|���~����Nn�]��#�����������ln�5����bK����M���\<=�\������$��B�OaF�J�d�������^<H@~����\�]�[��!�@���:GA�4���`\D�t��m�9���v�b����2��Vw�G��A�X�]�6�0�Q&��S��fU"c��;c�U��@,b�����"+���N
�{�\��)q�H12 ��L9�^Q�s�U1�\��d;�h3;�M�APtl�u��!�9���
�u��[J�S2�	<�I�p�8��z�i�:G��Y%��K�`O�a��]�&�|�q�/��f����9����fO�r+47��$oC!�yn��3x����4���O&#��x��|����{��r��x�@��)\(�27�]Y��j�tj�1�3�^�?���Q��l��S�5\�!���Y�����#��E��x^�
�p��(�Lp�L.d����tI�^���<�UMEx�$a�����H*uv$e.D$A��9�6t�����Q�����<�h�
�N[���T�
~��6L�N��[�-~��g>7�J� ��4D�F�SA��QS�v$o�����i`��:��&���bT|o`c�,o�n�o���������J��{`.���x�#5W��C�vSZV6��N��f�����fU�'��`�E/A�u3� K��oV�e����M5Z����u�}��1��S�t���S�Z/��5�IC��g�<�+x7��	>���$����pt�����0i����"���@�%'`t&5� ��NH�c��x>�zD�{�j����bhR?(��.9�@�2#C�%����3F��!6n�-c�!�����(.��
��bm����Ge��hQ�����&f��C7rG�iT�]���`;����1+k�i����L!��\	%<VF�=^nH��LQ|�H�O7����a����V@�	q����� #�N����s��N�??��������0=���g�:M������,:����/l�q�Bh����x�6cV�������*,Ud.�����C�������MV@��:xc8,���R����.���F��x�G��+�6�1�U�7�(��k�����=(m��b���i5��e��#v�N�l�O�����>c�����q�����Fq����J�����E�����a���5^�KZF'���67��)I�1�����(��ye���	�
����DwL���W~����X�X���Mp�~t��S]i�D�13����(�*Z������u�������:!��&��4�%U
���a�������@�R��)���#��7H�r��v���%>lr�p����<�x���_����R��Nm�'0��Vs�[A��'|KuKx�G�0�h<�v|k�Si��>���uX�����b�_��(C=��edySO���@��-���,����[�e��Q�T�2�E9��������^1�]��b���6>N����7����z�A���H���N�Nno�LK)uZ~��HC�!2X.���n����5�M@�p���H���'�Cx��#��#�s������ +	�H3S0�8�����d5r*c�� U[D�S���i��8�����t8�u���P�Z��H�d-I���[;��f��0���P<-���P��&y�����.�6�
Ou�1G������g���[V)���������o�����t*X���RP!G|���f�N����Nf�P����d���g��4��wh�����`�{2O#�7N�������xo�iq2�Fb��V�l���o���J;�I�|d�d��S<:�`N1)Be��U��~N*�2���q��.�1k&��;�B����3w����*�_��"zK%�X�d����q��K�Cm�H���l�����i8$�t<�v(���E�B��q�h��������a�e��R��/Ne��H��S'@�����m�-�ex��7C��$�������-�x���of�S���?A�u�C��A�(n�������p�65��Y��lS��H����XB�t��m��x���dDN�-]�{]��X�m�����@[m�	I����7�Mz���lp	���-����TDku��6N�j6B�y�����
���Z�!w��v�5$���`�;�����e����,YG��`�pJ?���y�/
q�k
���j���1���;��O]�����6��Z!y�3�=����R^[�jQB�h2���7.?��.�
����j���r�/�az�D��c]G$�J+�������	�����oo���&��/s-����"{�t����l�P�������{�:A����d��4�b�B�pT��)�c0i|�phc;A}���.�������2�����0���*1`3
b.�M�[Z���[g@��a%��b�e�19��D��%��X
����c>A���.	lplfH����jP�U���Sc����1�n7Ga�C�����������'>�Z5nq{#";����5J��j[Gw!	s���A���C+�����N��2w�n����2 �k4�����n������|�j����n��1V�����&�D�)��H/��%�y��A1e,�-�O�qa�����p%uu����x���0S%T�V
6���<�x�lh����5W���5+�'{x1�7��2�d����<=dQ�kK	qT���k����i.9�������Qy��>J.����tb������2�i�H����=9�1�<��TY�3\�*��#�M�����K13��d�!.�i�;O��:�'a����S'O�dj��w�jcRd5�,�J5��Z[����#�r�e	U��r�J�r��b���$��d}���SW#�$h�#�X��4f�6��#���P����8L��������A8��
 ���Y���W�&��;����(�������G�X�v��������W��J&���wd�e-3���:�f�������E����8Sa��|t�����i���Kd
���c��m�-54���Z�=�j�";�z����-F�xA
�9Md(�����V.@~��&��X,j�p�*%� �N�������2�D�o�����'���e������B�q���5{q([������	,�IS��aog����RtV���� ���<|81s(�+�����G1���,�����j�$(7]A�������5FMP����udWvi��d����.�p
E��R:�v5;�X�-}@A*��q��0��@����#\�����+�b�:f����hgvZ%v���[q"���R�Quxn��*�"�M�]��@�#�y��F��K]����7'/�w&�d�%��P6>�^|`3Y��V&����a��x?p1�?���f�a���dM�xc�E���)�Fb����K�6�ws� ���g+����5G�� ��\K����
��l���H$�p8����2�n�v�
�fC#t�`��*�-&����)m$�+C�<2����
+��`6�' p����z9"�p��v��5��!%~P�Eg6���?���VA��Z�^�V���*H1���s��e��1�\F���+�!�������6������p�.K���X��(��F��=<�}�$6�z��e�9Gm'D��kVCId�v#$��[�no��z {-������f������/�0iC0^��$7H^q�#�	%J�k��&�0�n�a7���&�U`{�/���N�=�Je���x�ez�9)T�ki~d��8��z����:���q�������qC,W�t����Xt��P_`vp;�������������m�J��%'��A��.�j+��C$
���v���1Y�~����b�_TB8ii11K|"�b�47�~8�����l�.�-`��'�����x���/�;��J)�AJj����$��H�T�����������@	�q�E�J��{i-�AI5 "������h��hr4:.���x^-�R^$��V&2k���g)����/�ahs��X��r�p�X ���
�0���*��lH���3�-�(�bK2���c
�P�;�|�e>���/�f�FL�X�������f�}]��Z��In��Bd�%��1�� g�.:��b�����R��@������9�j��a������a�����b5;:�u<m�M�T�f�;��JxMi����rtX��������,kdo�'!`���������[�G����*%��r��tK���1�&h�8�.�3���{N�N�.�q�}�F>s��u(�=$���Q���q�{����:��/a�UYQ�r�V�w����^�_���)��M9�����$��������F��<����=2�}A����Vf�Gf�7Uf�����2�=2�}kC�����2�����
����b�����L�0��vU������Y3nv�?8;_�~���Vq���5��L�,������b��/E�����`�,V&�b����N?�-��_�y�����.(�j_6�T����h��f�����*��������l����{'�k6l�:��r�z����������U��{�~�����`�7\w���������j���I�����3A�u�����}�39�%�#�>�����~L�����) >��0��"�����:�����&���!�D��~�@���Q�o�n��~�|���
>�T��{����/���-���n���E�m��q	��J�o>��H����}����(=D�?��K>��=��������������}\��;,iv�������7����wQxe����� ���|z�#�����7�:��em>l�����O\\��J<.=���KO�H�W��B�{
8�<.?����S� ���0����'����`�/��}].���>/&��s�����O�����^�W���u
xF�Y�{#�HT���7��{LL��A��~���g��]�7O�r�E�����.�'f�o~~q:�]/��~����ODu�r�y�'FL��7���8q-%&i��{qb�Az$�/����i����������x���x����N���x��d���N�tp��l���uA�{!��"���Q�
G�#�L�KB_4�'��B��{����������G'�p��%�1P�����H=�z��R�7�H���3o8:���_�������y�=���G��TYTaT��' O�4l
A^���g����
�,���w-��d�q_��J0���zFc�1�%���N�f�Y��2����5�����D�?���m������G&^�_�x����t���}!��03��.�d�g���)��y��g�>��;�w'c���rp�G�`����E����
'��u�&�hP�o(N"(A]�[La=�o�*}��G�����{L��x&�/P����6�=�y1���q�A�����y�x��J�|�EU\�r^j�@�?������*:��Y�3$F+o�m>��GcL��^��H������}S��$#s��`8��=m����|�����3I�C��
�G������|.h�������������l-����������3�7d���*���~u9�|8���w�g������"����_5����������W
�a������]d�_�)���O��~?�Gj_E��e����A����>�F�'��q���}���y&��XH���
{�PO,����xq��j��>��{%�ZF�=w?�b)��I���|Q�/�]����-���	�����������x��&�;��}=#]�����W���Q���=o�71
��}`���x;�����7�{"�{��ix�;S�3/z�;^����UAq������~�d�7��p�Q/c��nl����&��8�\��	�~���y8�g���U��}�{����^�a���������F���$��z���|V�
��O7H��+�+�{���E���A��j��@��q�2��j�����������+�U����`��+*V����d�'�������>��:?
}�bz����odD=
AP�j��c��W���ph�G�������1C�@�`I���3S�z�*�n`��2�����mr�	������q��������������?i)	s ����G�EQ�^�yZ�/X����L�u�K������w��* ���T��������������+������)h����[>���������W��|�%�"�	��gA�1{�jAP��q��N�������K����������������i-��e���W~�����~�|��4%}��������2o4.������g�D����G 
�qP�\���~��?�>�^u�m���s�_G.8&A�4��o�����������o������U�&�:�ucv����s/j���%��w��G�>:�!����_!��r�;uw�� ��_P���5����+�ej��`5�LN�A�G�3����g:�G"�r/���7=3�M���3�={F��	������-(���zAJ�����%)/(�k��
N�}%�-8��/�����b��2����P��4���]�%������C��^i������
9�����������BP��\h�bDw��/�|D~������H�����?�	�8~o�4�Q�|~���������30
�.����f�F���p0&�����*�^UP�U����}8_��5p�+|I�K���G+��J�_X<����������@P`��C}�plY2�;`���v�?�}�����&��N�WOX_ y������0����}��"?�sc��C��������b��}�����p�����%zq���5��q�S:��V
���s���k"��|�.|�p_���eO �=�}�H�b�z+X�����`���~ /�~��c/����y�y�����lc�=�����7T_�����w��<�������Ae���:0Y�5��}|�����S��s�y�����z�����@8������z��� �������H���(@d`a�WQV��
�"��wV?��m�����/�l��M�xm���a�f_�us>�oI}~��~�������g��9����y�����b�|���r��W�{�?���a��k��/��^Yn����������&�Jo����7���{n���>��o �zY��d�������o^
�?

^:�]|�*���?���?�������5���X�Q^��#8g0��;Wk^.�eP�U@U`v����ow�,?����)��X}�����K�$��O��^.�V���W
���������2���K���8Y
��n����L8�zp�USw���]����y��u��{�����SE����/����D���K���e��$�����O���9
��>�8�Kr��#^0#�)��YQ���[^�����{m���nN���F��;��R��������/��w��}f����������b�s�������1����������N�7`���g����G���v�3;IC �/��?�����}�����W��o������w%��s��G������_>����p����P��������������gW�FuP��T�"~��o���sG`O�"�
��*�(���a���a]�V
E���:��;����4��7��o�(��I��'��`����?���g=��w�����>��_�������4�=�����_��X�o�W�I���8����+���w��b���_Fno�=��]�����N�z�����7�)����;�w=7�h���P
"�o��=��]�`@c������������;����9����$@}_��2t��S����|���P�T0C b��gX�2^J��~�>�w�]B�{�����W��k~�8mj0�2��*$�������'���yB���Oci!��Li"q!�����d���
����k��_Ue?���_�����;��AY�qq��B�;c���,�M�������W���_8�"�	�(�E�/��z5������vi&����x��`��!���d�z�������h&y&��I]��c���z.&���\�I�`�W`�X��w���cG����;�
K����B��=	��)p�=�}5o�~M6�hM�����t�������1(�/�qn���6������U�'�����c��HwV�X9O�wG���������c���Z{�������-��|��{��+�d�)��q2h�u���*����6���-Xg/�;�N�����Q����sz��=|������kwr�Y�?0�?�Pj�����z�+"#��!����~�kbc?��$���'�Az������t���4�\�������q�Dx�:N6�ogc8C��}��<;t2���I�YA���������c��V��P��x���N�H�&����P�{M(�G}�P������p�M���{���vE�f������uZ��}���C}���d(uh��^Qb�>���0�������W�{��/%���+[?*�B�J�#Ye��v�����Ob�&��>�_@{��I���������"/�<4�kI)�`=������#����~��Hi?�5���MHi?���~��4
�g����"L�A�>��(��^��z^\_��uE}�E�|��T_b��;���Tx���n&�gl�x�C�����{��=�!��L�Ow������v�I���F;����m�]/�u����h���6���-�]/�;F;���/����	 ���e����+.���x�oT�p�I��A�E�*��Sg�C?|�����_���l_/>I=�8�v�������z��O���G��}���w��6}�~Q�,�#�D���*��_.��>*\B}��p	���p	�y?���{����5z�.����y���~y�\����~���Sm<���>�������+|u�D���z@_X�����j�E�V#��"3��������m&�7���C���x>�����z0��|���f�x?g��b���3����2a/��z���k��;�^(�
C!{),���=���@�=���*
y���b�O��������e��_c&�G3�����^a%}�������pB�9o�Bk�`���/����������I�w�����wuFD�=��5#��)�&#���.#���*#�E}����-2�^�w���9a��������O?�����d+��W�m����P�=@������T(���{����!�����������P_�������}�����{�t�������s�����o�
��>���L���V~���~����1/UC�����������hg�p�o;(zD�[i�u���im����c%����9�7����##���]�g�z�z��=�p_�/].���^�^�$z��^�I\1sL&yfO�	�z����yB�W`���"�w�d���)(R��}��\a�X����+0{��&�R��E�Ca/��^O`P�����!0���_4n�I_��g��qa�n���+����Y�~����o;���(Y�����@1���D��W���A�W3������kR&���;`^���Gi�0�w�d��}{L�����+������&�j�~M��9������/20SQ�q���#���s�9?���!���	xe�s�^��v�������A?���<����3����4�Om�_J0�6nag�o��7��������"������q�W+�����u��=I����'C{f<�}�1����}/����D��z��fXU�~�	,��G�������������G��W��^L����s�o�(���'gf�w�d����NY��1���=����TN{�+j�8�J��54��TR�zP�����y9���b��?D�����X��(����z	ix��3G�|<-�~T��S7�yk�\�b��d��l1BI�G�7i1���������~�7i1����~��k1B)�g�7n1�s�g�������/n���uE������P_�$A�_�������W\;����Ym��?�!��d]�������}��C��<@<���k��yq���G�~:v���j��+A�����O^�_Ur��
��U|������0(��l���W��[����g
���A�����~�2�w�;z����'GS��S�������w���?(��+�w ��W|_n�!���zvaG�&��������MB�UB�{MB���&	A?�$����`(�3����_�I0��3��'@�+@�>����u����������bWd����F���y������F��Q���
�����;Q�06y�����I�������zi��+�wP��WV�@���,��`N_]��(���rN����m8g/�u���*����6���-8g/�;r������_�9{%��s>�!�F�����/���X�p�~T���lcL�-j����w������Hm�_/E��B�T�~12�>�	���������TC������	�/��z��j>>������!���}������7�*�y���������C~�_A8�K���P��������A�T���?^,��R�������Q�f�>p����E��}�I`����;�p��9������?��C�����o/�'�����������'�'������������������/��m�x����Y�>A��C���s��,_�L�����(�������0�C��@�y�����~]R}Mw��6w�h�c�����	���w���nu]�x=����h�������D�-�����������G����.�/��W������r{�`���|���`�}�p�/����9���w!��:�^���Y�
���S��}��B��zU��:#�����o��7|�>�a�X��M���x�F>|����>��/_OF���=���@�p{`f�%��>MZ�1��$�����z..A��r!�G��V�q����Umc{�x��6�#=�WU��a��j���IU��
�����W��q�g��W���bz���Z<��}�/n��uEC8�}�?���x�a��5�����3���_' �w?� �5J}��<<]���K}�����}YF�/�]��PD��Wx�3�q�/�'�w%`3�g��RQ�/M���.S�l�/(��}X�`��K�U�Ga����rq��^��fT�r��+�r�B�e�$�c�y��_���=@�������$�/�%�1 ��W�~��c@��Ht�#���+�O������R�\e@�k��~���<�tw}�az��k�Ez�x�*Z/�uU�^���h��oSEc��������c��z��k�
y�Y��zM��%�@���������S^�_r��y��������#�%�@���s���U�+9uQUh��z������bV]�wV_��b]a�;�
�^��>z��?��z%�|�+���^@�
]��Ro�+���sQW��u����m
|��p�g�~�Z�/�����^�-�>��[�J~�[P�����[��2o�+�g�q�[�/�������tE�?��.s�����[�U��Wp�^��[�J~�[����d�AW$�v�"��UW�en�"/V�����7T��*���E���C��]����]�b����Or�2�@�+�B�AY�v����VY����\Y�{(����@z�����������W����z��J#o�,�Y�E?A0�+���CY�*��i������+��BY��}��+������{��	�A�*�}e1��)��Ze=A0��+�|eQo�,��U���UV_�s
���y�����	F���	��XY,��"�PY�O���C_�,�=��
���a�����z����V?�5�{;}}����Oa�Y��Ba��(�zK��?��.3
�x����P�
��������~��`���~�k�F?*��
#^�=�8�W(�~�1o�0��U�S����]C:z{j�W��~��t�����/m�C���0�-F��
{�t��P�
�_A:zw��?<���2���K�����~T�-F��
�L:p�
��Ea�[*��i�D+(��\a�
#^A:�^��t����^��JG?�5���zK��?��.��x����P����H��U���8�r����������'��?A:�+����(�yK��?�������+�B�Ca��
���WaO���J�y
��G}����IG��O��
c�Ca���CZ�=�-J /W�U����/'(������_&Do�����*��
cZ�=�5J`/W�5m�P�7T��8zA�����_��+H�����t\��I���8Z���q�}������*��R�|���i!��J�����WaO��
��t����>�`?<��z�z�t�V:.@���0������WaO��
#�Ea� x/�O:�E�t�V:.@^C:zQ_�Hz�i�D#)��\a�4�^@%�Ra�������D_�0�]�
�A������W�'I��JG?�5��{C�����D�L:H�
#�Ea�[*��yv�t��+v
��=�����~x��/������������T��*�	�A�Ba��(�yK��?���h$%�+��j_C:�Q_A:��~x��/���y���wQ��
�^�=A:�W(�}�����{�~x��/�OF��t�C^C:�Q��T��*��3��W(�~�1o�0��U����r�]�H��>�����~x��/�e�A�W:�!�wQ��
�^�]&�
�����W4�^CZ�=�HJ�/W�5��(���
������~��zpJ���!�wQ��
cZ�=�HJ�/W�U����/'������?A:z+ � P��T��*�	���Ba�{(������VaO4����vM#�������I��/����q�
�q�yK��?���'Ii��
��i$�������������I��Ba��������'��_&to��$�
{E#�0��U��������v��[����"GKg��iM�
Hn����m������z�r����c����n��3��"��*�Fd&2�P�)�)��;��@��,f#i�E�! X�Iv�H:�JG2&���������I�i��b6�Py�,�������tH�t`�IK>���I�k��\��F���Fl$�y���4��"�P,���N:,^���1IF�]>0����Hjy���4��"��,���NT��+2&�����f�����+[<0��k��4����I�j��b6�rQ[$,���N$x�C�^������}`�FRk�?����T��K~#i�u"��+B�$�E����O:��D��Z<0��k��4����I]����l$
��H:K>������W:0dL��Qm��m�#6�:����l$
��X�I��������C���#��<����t\'N�x`���b#i�m�#6�:����l$
�6O:����F���D�W:�IG�v����>0Il$u���1I���&��Hp�H:l��tp��"��`�'�u"��+H����H����2�����T���W��������������7�GVB�y*pquw~{s3<_������ww��l|�i�"�_������d_����?|�����>e�����n���������h�q4�����w��g�?e?�|����������T�����a����O�^�T6�y���y���j��.�l�����p��l�����/����_�� ����fjo�eMo�1V%e�NN�B���;~{P~�����o����N�y{�?=x��h������o��EI��3�v20n�q]G&��^|2�eE3j71LH�6������C��\pX��l���}�v������AYc��X�����bTLs�b�����*����FJmzT1l���u��vg��|��^�S��k�i��B�S�
�5Z#��B�kc����B��B!V�
��< �<<��;����;}�]���{_=y�P,�O^�d������s���7��v�Zf�r���^���:�;��EO��[d�������<j���M���{��	����u�"����0�������	��������=O�����w��Gv�.{���'�~����'��d�O>c�7?9<z���g���G��?��0�|����?��7������v���
�}�����������l��'�����[�H��C�ww���7�����?9~u��������?��(������WO�N�D��*�����/7��s�w�E��1{^&�����E������]�@�M���[���a��/���jL6��~^����b5���/�nGW�*�[��/�BT>���n8��p�y0��?���������
�%�/���u{3,24��;
a��$
��?_��4D��HNG?�^6E������?�������`|vq��j|�m���������Z����N�
��"������h�h��h��#�g�*���=�+�����#5cWI�����^6u����A�1�{S%��g;H*��u�J
�E'�$��$1l� �Q;$1|�$�o�TR���������QI�@�����1������zG��)��@�������?4�y�����~�������Q��a��To)��n���u�P��������u���n����5��*�]��P�@U��P�x��X6b����,�=�;_�gI�D�8����g����`Ed��������%�)�Z�fNU;�������j���v3/C�]L�v��2g�M��&Z������H	���L���3"p`�0M���k?�L�=&�{�l����"�30h����r�fh.Q}�t�����[UG?x{zx������w�{�����]
����9tB�qby�A�� kg]C��O|f�]��~�08��7YD�.�-��{xS<��lg���.���4g�S��E��~�{�-��B!�����_>\_�?�~sp�}N�%Z$]��KE����mt��%�8,�q�����g�_GQ�g��c��Zi��y�Vx��DY+��f�6f��Q;�+c�&����������,!���$^4��kM�c�d:�|��b��#��^kjg��������S��98:x�����H�eM���=v.l4�K�^���OG��A�p������3�S����?������n/���e��\������NWv�y*����x��v���h��2����#^�?D�o�<�����n����p����m��UKV9�;k��d�&3���\�6n�+������	Z���g����<[(�
�YyhiV�?��/���Bf�C������C����t�q���E�U���M�_��f�����A�(k{	0�DB���
��|������E�M�G��������s#�|�.h4it#�|���E�G,{st�	+���w&��������<��`�
c@b,TL��)�����9�=v_h�t�}�����w_h�%�}�s2���k��*G�F#�:W�����NV��1>j5
���Fa�NV�0|�Q��	�1�l��H��9������Sj�(�TD�r*�U}��P8��>@m(�"v�`P���xU8��*l������C�oU��h�+#�����NV ��K<�-���L�-�oN�-�Or�b��0�9�����9��H����zs��1c��|'`_E��j�pP'� ���v���n$M�Rc�����LN"�����o���;�2~G��`h�W���!�����i4�����n�,�T�E���k-��7�04���6������$T���1��LC���*u���1p��2[4�� �/i��j�~�
b��m�S�u��Za���d!�
[���Qc�q!h'����E\�������d�K��)���E`+9�l�b��"�0hB��,�0���3Q��
��3��D����
��{�9J����1�l���<7�r��>k+e7��4k]5�Z����:�1
���5*-@��������j�����qiAy/<����2	����=����K�)`��y��
1��N��A�&t���G��x��������A�M��r wWo�W�eaX�����x����{p��<J��y2Q��n���1'���cGw'^���Lg^t2�c��������!�w0�c��y��v0�����M����|�:�/O��Lh�4`q�?�2�!-h�@g��� ��R~'�1�,�s�~~���g��'3����${:�~�.;z����c��/��*�8���u���;w�����;�W��5����v�Q�U�q#��C��4�v�e �.���c�Q=u>��c���2M�=����D*�x��D f�D�zE!���L����l������:u��i\U�e9��}H�R=���u�����c����\�?���!z�+%Y���G�?x���~�����pBT;�ARP��j��� ��>)�v��������K�����H
�<)�|��Q��](<7�3��C�
���� h�a��8l���D���2�z���O"�^�KE
���t�-k���=*��E�G�+)�w����EWs'��z���P�m����\5z�6���K��U��sK������<b����m1h���������]D���in64h����Z�l������sE��]��il_S`h1�������Q1���bs�`�>^g�L�f^t�`��4�����D�w�	`�S������r��E��oM�����=�� 	��0h��c���"�����{��K���������>8=���?��fxs�xr��69�[\�����?����?���uN��:F��D���A�T/~us�7�x�3�'��]�"�?@�������/=��L��{�GlK�������hr�����'��_v����+t]�-����|U'�
^Z[��k;�c����5��v���-�K\����Q�Z�z�bE����u���k��:�(�9�E	�U�H@��EYX�������h$�.���[���`s,5)�#@Q�r"5)'�Ux��K��	9������������2�e�����������5%��������Tr���c:I�8�_��e�X�1��	����eS:D�'�Q�����d���G�`�Q��8�X������~���}��EL�x�@#&jho�`5T�Q���w4b�&�`�T(jRLG��&�~��K��l(lR���P�3j����KxI�������I/aF7���7���{spb�o��T����|s��y���D#.71��_�M��kH���(Yh��n�����f%���,���{���e�����Fu��I��o1uB�����:)�{#�NM�D��	mj"b��6�	��6"�aP���5)�aPn!e�����e�[c�)	�s��M����16��K�����k}g��C��+�V!<�1�����}��U��b�7��$��/k�=g���������awT"��=��"m%o��]qW�f��@=�)��S3��f:bjfQ��#�f	����9�"NGL��E����C9�n��!r�����M�D����n��!lR�P��=q+�=�`�~��~�?�9�}6�v���O4��"�g>�<x7��{�o�<����S}����Lgj��@U���&s��{�0Z��r��0C����oZ�$Z�XqoS��,�����L�6��*W��������S�������>����m�@�G���N�n���D�es7�k�/eMAU(J�k�/mj"@Q�\�|icP�f�.����#@�$������=
n�/m����!�}&y������f�Tf�.������l���0`��mS�@�u���)m��L� iS�������������m�Q�c�
�y�.�%��|��7���Z��GJ[,�
nTo"^�.��xA���9:�$s�������sa�b�*8<�R�,�^ =�<`��<@C���l;����s���=�/�R����:q���Yg�7�X@'We�S��)�����X���aU���;.�CC�w�=�6;x��ae�����f:��I��hee��H�*�Ym2^�����-�5,��y����o%�����.�>�b�l����h��������$�R�2�����m:#��"G��P���4"�h���������Z����>@y(Zv�����9T5�`,�xz��b��^\�=!{ur����K�p1*2h5M��|��v�|
�9c��.!z�BS�b�d
���Z�������5��FIl������L��=����P�ul%8!^g%8�8���Jp>�����Q;���;���{��6]�S���#
�	%(�x�����l�Fd�J!P���4b~�45}��>@]�<b~��X�Y�=�"������f���R����o��:G����I�T����1�~���I��~�4�D��LY�@���j�����H�+�@�k,�tM%��.H3����<kUY-.M����Fw(J���I���Z���l	n>9���3�V���j��y>�Nf�b������i���N��=j&Qc&���y,D�`����j���{��mu�����r����8���������6�3B�nz#���;B����N�l|��x�]�/:�>�k ����P;��������&g�;�_p�����I��g��=+����W�@�k�)�u���~�+���L+�fO�5�1��8Jer�<�u��h4w�s���9{C/=j^��A*b��*�<�w1��-��a����oO+O��d�FO�
<0�zC��yk�6����c���M������G�V�4��QC���G�+G#����FK����h��$m��Qi;��I�1j'i;�� m��=J�v�f�>�&�������>@y��P���T�j"@9�}��@yL�B�(��}�@*���K@E�2��P���U���z�3F#P�h���3AE��� �l���j�g ��l�D�i�����Bgf*o4������|�+���s�������������'{J��FU��
Ch�E�~/f	�6��%����3p�H�.����p����'�Q�w�'�!���������l]>����#Gwe�6�]�JZ���v����z�:Z��2Z��U�z�"Z��'J+9�������s���_	�`�0y�"�L����D��{�4=`���+���1]��<����0������!L�����S���{�4=`�L0]���O�0Y�1���������W� T�B�#@�u������x�V]L�:��!��_�|y9��F��0�9o�-����w���cu�x�L������
J���k��x���d\��{h���`w/��RYD�����:;�f���tvh_$����rK;;������M�]5�:W6���Nt��	��V6���1+����M����&��oe�Um�u_Mp�=_h����?�0����#0���L�xL���~��`�0M&�6H6s�@�x�&k�>��xD2A�9J����o���
����.�-�9�?��e��S�b3"�B���0�,a=�v'~N���������T1���]5�UYt����
����tG�h��}���v�Q��sk���M�d�r�c7h���|�m�@�A��NnA����L������x����s��F�x���[��m�0��T�
t���� Ckx����6<9��X�J,�����a�}�F,i8P�I��
bP�h���cT��>�8U}��>@M�
���S����	�z�����D��P���T��"@Q��u�&eW<�G�!������r�fp�a��_9^����d�c?-����������o���9*~�O&�by���'~1����S����^O��_ae��,m��x2������y��M�FE�3��!��LM�(�j�3��s:��Z�����
��y�&FM`yH�L{�%u������X-�!���(	�,������j����r�	�����-�4^?J�><=x����y0����������v����x��\I��i���l��w�W�z==���������Og�?�G/cW)�w��������)^�evz\�,�Sy���
k�J<���N�Q����������x�x'W��P�����+��� o�~j��y����B��&���S�h�%I
��~$h�[��d���6�O������G��=��w�O���p�&�L��%�w���7Wi���D�J;�����w���
���7Z����6`�~t<<99>�V�-������W7���qvy����eO��z����|:=�����7�wD.F� lq�����������x4�U=������t��%��c��}�r�����d�S�?�9�S��Y���[���U��-�4Qy4����o8���TLC�$_KU9�����'�������w�0U�X'����������]��~�08�;��4Tb�h�����}�*��� ��/�8y=Ua�_�X��2x������]H����i#�E��������������N�b����6��!F���!����!�q��s`�E��t�\H�E��;,�MfF��5#b����6>	��8
%8kD�A�>	�9P��KV�7����1.��g��50��z�����y���
^��*r�E7+��X!l��
Q�X!|+�q`�G����Ag"�=��b����s�$D��@Y��V��O@7�%0V�nmL�"��jd(-0�m���F������1��b�_(��J.!&��X<��?;�v�+_D�������KxT/x�uw���gI�}�V�=�{���eL�<�������f�k��I0���L��e
6*S���d
��L���)@�5�_���j[�l�
G�f�P��wqz������6�����{lf_d�����,��
ZN�� C*�
.��y��q���elTt�p���&�k�
"�xe��E�M�����a���inXr�������)	�E���?�yGB���m��JQ^�a��S������LD5��l.IK�H�
Ry}��l������=,���[�y(�wV�=W:�������LP������+wO;��&g�af����{�KI@p�C�t���{E���|�������}0�����g�����F��6o�5���^BT����Y9+1�s���_��������
�Y}
�f�Vp���v���6�il�dp�<X�]�k�����H���o2���TqU�||Z�N����O��L�F����-:oc�#&^J"���qt���W�O�@](Z����Z��P���R�*#@�H�>@u(�,H��m<��X���X��N�@i<spdU,�p�X�y���H���1��F;M��
�/:�_/��h^�6�Da����������4J�\�#���B�"�(
_TG�Q%g:�4"��(�����<J��O�>@#�(�2>��5e|�y��Z�Q�D�Q&;,��,�����\��cV���Qm����Q������$s?����u;X/K3(k42��V�\1��Hb^P<W
�3��O�xc�%�dP�jz|���4;h���6�@3	���Z�������i�:�;bza����$0�j�
c,3�	8�(b,�~,x,�����X�f���N������
�%��n�����._�XM=Zl��N���L���c��e�=��U-�V��j>�A,�|,�X1���C�	.���#N���ec�6f�E���v�n�&Y��I�����M���}�t��v�[�l<�|��_���a���?�������������N��T�������>�l����e��e�,�|c�������3��d���\��\�Q�������s�������C����j�{|`1�Z;���y7F�]SR��9��0EsLMaJ�	Wr*�p,a����G�yWi�x��zx{uQt������������Og�^5w��
15\��k���}E�����^v���7j8��y������50|��6�F
������50x���9���L�&P�mQ��%m|Qe+����;_\������K�(���,���������'%Nz"�$��.�R"o?�����ajed�ZR�����Z/V)�o��|x�C(�s���R><�1^�X�p���n�J7VB����v3VB�.�J��X�pk�����J�Z�X�p����X��b����g�M�J��Xi��G�'�V�Q�w>�������|���T�X*��wB4�c���HmB�Xe�3�E���3�u���9�E'c$��#1l��Q;#1|c$��kplnd�zYC^���+����T=��0m<��"F�jB#��x��E/�d��ht�fz!?7��,^��'	��gEW>P�����F/�-!|�h	a�FK���hi4��b���=����=����p��WbcFE�h�Qc���5�QQ��O���:�k��!���7����o�M���Y����|S-we���?9zW�������WO����'O~������|s������d��}��7GE���jA�~?�W��������O|��C��b�k����e����w�W�^����eo��{��a������\W���=���a}J��E{�p�{�Nk/�e"N��\�v_����%*��x�qL����+s��2
����C`y�fp����NVE�������x��������I7,�j��W�o������������>VZ�p/b�eg������x�}-Vg<�����6-^�B��d���&�A��l��v2���L 0x������F7����d�'��N<�$�e��Y����F�������qv���x���N�t?����������L�:>:=��i)�_>�dO������^}�����cV}Q�{
`�D�!x
�M����G��V�n5U���o�Oj�6��Ay:�l���	k��7������O���G�����7���Z}}=��}�h#��y�q���m�[�+�*wN�y}�/����`R|���0cu�Y�#�����8s)ZOY��8�����v��&5&��rv��rRH��������I���Y�5.���w�Jk�����3u�(Ba'E(�k�h�W�����k��^t�?B����F��N�n�G�E��{��6�j����Q�1Y'8Sp�Y�	
t��aN"r<�k��`e�����E|S�����8�O�e�Y�������L�Y����e������u7xE����F+{��/�mx���b��Jc��{��z� �39��!��s{��d�r��GH�_�[��s�eN�d�C�����?���+o�w����� �SO@+�+����c:VG#/�q�?��\6{Z����K�?xU9�9��%�,�%hv�����Usr���7��3;=nC�n�T`���d�a`��v}G������ey=k�-����M+�-V'����4#�\D�+��}ZS�u�+����S/:�Wb��y%���Wb�N����y%�o^i,����{��Rr&#�o����0[��G������������������P|y�j�Z9�����)����d���?�w���W�O�7/��>����E��U�������u^���q��^�y�@e:/l��}��`��4X6���vY����u1�y��������Y�1��K�����n�,���d�,va��oo���L�6�������8��k ����8m���d�>*���1'F�$���d�������t	.�`o#�L�[�x_�����W@Mh_��p�P��M^��Z�D��0k|I_yv��7n��9q��*����=
����?���E�{�����E�u�����f_n���e��]��{���X�0��.�����~-���������7S��5��^qP���}�x}{������������e�%�Y�v�N��~�o|�(&#g~��TX���w``8����??_�n�]����;����rw�����w��wm�N]�n?�����p�/o�,����bK��PA�j�J���?^>�R�?�p}u�|�gS�������Wp2��Zx����]�������1���R�y�[e�~������,B��A��W��p��`)�z������������e�=\A=\����5]=\Q������u��>\����6d��{�?M���Tb����!_wd�O��&����z�����~c{���|�x��������J����>�����c�u��[��NT�E�������I�3������3��8��Y�w�q����;��L��IU�n��p,��/k9�g����U�����bfN+���M<��0���g�������~���~Q�o�y��qJ�*{y��^�����a�U�;N-W�������H��<#`�������������O��'o��zn��Y�]���l�������{[~��<�_-�.^�d���RY��+T���
�w��y�j�����j.6��b7��^Y�z�^cx�[==PlQv�J2���e��k���6�%I�����t�[ng���\��?�}�.�}�����g�����{�����g������^�p}*�iv
������B�/���P|I�/���p��S}��������O�;v�������Q<�������n�m��<[F�,xK<����y�[���x�<�%�e��'��J��	���V�j�g���*?��g��<���^�Y/���k��n%��8hx6K<����yv�6���q�.�l�x��������grt<�%�]�a�<o�8��q��<��<O�)��y��q�Q� ��A������
�8�q0�5528d���f�y�o!�<��A��L���3�J��q��� �����,��gr��A��k�Yo%��8��L�����J��q��)?��g��� #�A8d���v��rd�8����z��A�S���8��|p�:%_;�������o�_�AVYo�o�S�E��p����|/������]�}ong0����_��+|��������s~~�3�������W/��2��6��UN
.|�q�8�q��@�^8�������c����������~��p�~����P���~����u?�fco�ob��U��f{������sqq1i������	�UO�=�.��#���`��.�)���/����g�O�����$�S����������*	�h�/c��*�U[<���p� ��i�����u&�L���2�9����<G���������M4/G�zTlU7�m��������35��<��W^�<�x���f��h{
����.7�J���/F8o�����?������v�}�7�/���o���e�0�)��EP ��@RpL��@�C�&��@�Xh������(��A�K�E�����(��jL���@�C)�S`"(P�P@����
��CE����
��CM�!���F'��]�bW8���
D:H��)��t(�S "(0�P`)
$�@FP�����j�0�=&94�jL��� 94�L�2�7��[��rh16���:L����%C�%���;���3`Y:p��)`�t(�S�#(HG
��(�AA:jh-E���
�QCG�����!�{
����B����
D:�rh0&���ZL�����C!�b�C\.D�S0H��[S r��l�A�H�IQ 0tr�����!vESHL�����C��
S #(p�P�H9���������C�)��t( ��b
L��!#��a
l��!#�P�+��
hv�j]�9����=��C�(��A�H�IQ 0<����@b
D&
h5��
��CA������t�P�rh0tr(����!v��C�)0�t( ��a
l&
9T�2�"p�0@�GQ�R�����@J�NQ�1,��t��:�����GP��RR�������v����R
�@FP���@�r�1�=k?�p��C�)��t( ��b
L*
9��2`#HG
��(�L��� 5����})�tr�����!v�SL��@�C��(��A�J�R
�@DP`����C�)��d(0�L�jO�IG
)�S@'��]�bWH9t�A�J�B����L:X��)p�d(����})0y{
� ��)
��EP��RR�`
���B�kWC�
��
S "(0�P@����
\28R
�@��`�R�+�ZL���@�C)�S`"(HG�)v�@l��!u��3L�:hw�j�\�����!6oM�\���
�(�A�H�IQ 1<���*L�����C)�S #(HG)�S��S���CF����
��CF�����!���u�a�B�>�!6�����c
\.
�)n_@
\�����H	��)
$��EP ��@R(L�� 9��jL��� 9��L�r��]�+��C�)P�)X����+�:L���@�Cu�a�4�3�t(�$�6���t8�$�8�s���W0���aV2��-0+W������9���MvE�DHLu����t�P$��'�r��4+H(�7/��)��quC����E���"c�F!].`�=��VU���c�F�..��,��c����QOCT*�T�5.��k�m����`��G3������d����0],�.��t��]8��G����.M
�t��t�go��,�%1]���@�I+=v�Tz��Rt�����z���t�����z��2t�����Ro1]�=]�m']��;L�#���n�R��&�,�r_���t����,��1],�.��t��y��G����.K�4��.��.���.����t���S�-)���"��N����L���}�6i��n�Ro1]&�.��t9R�����������f$ [.�-��l������t����,��1],���TzG���.2�p/�KY���*'k�IL�hM���v�E*��t���v�EJ��t���v�EJ��t���v�EJ��t���S�]��E&��A�IK=v��B�![.�-��l���K�t����,n�1],�.��t�u��G����.N���.��.��J�I�W�.:���n�R��&�^c�T]j;�"��`�t]f;�"��b�L]n+���;L�mO�`�IY=�A�\[���t�1��y]���tQ2����z�&���m����t���V�%����%��%�v�E*��t���v�EJ��t���v�EJ��t���S�%)��E'��MZ���t%7����Rl;�"��1L���Kl']d�8�Ry]j;�"kk
L���l']dN���tm��kR��K��Ko��kR�5��N�����c�I�7�.A��N�H���.A��N�H�w�.A��J��
|
�e����A���d�>��rtm�����8�K�I��v�Vz�6YU`�X]f;�"��JL���m%]�VzL�hOW�en�R�1]2�.��t�Ro0]*����zKJ��t�I��v��z�6)��e"�r[I]9�B�l{��>'p�,��0].�.��t�9���t�����e+0]tRo�����M����.A��F�tN*��t��t���b�I���.A��N�H�7�.A��N�H���.A��N�����-:�w�l�J��^2L�mOW��d��dqL��rt����,�) ]6��Km']d
b��bt����Tz���tm��sR�5��L�e��,��I�7�.A��N�H���.A��N�H�w�.A��N�����e"�r[�]��a�l{��v*=]��c���^2h7i��n������t����,-1],�.��t�J�0]<�.��tIR�5�K��+����I�7�.A�vJ�$��b����C�IK=v��z���t������
�9A��J����,T����|%}T6�6��c�\_�}������@��^���V����X+��S&
�m����4f���
X��������q�s�����R���2P�{��������?����]v{y������������j^���Rqt�S�������{���l"�c6mu����������k�r��P�2��+����J����Ti��K�'��O^�d���h�^��({rq^�}w�r�����������p�w��ob��Fwi`��a��O������_r�����'��_>�����O�#>�>�>�����e�&��.�3b���#����wq�;<�b������dFb��'j���V�Q=��������-Q���B�N�c��B��zN��|�y��q�V���
��R7����E�|�`����7��_������'��+��Wo�d\d����������<����:�UQ��%D�&����k:|<����>)�����-�C��p�]�_-���������1;������+vc�`�E���N��yz�����L���� =��Gbzd����!�YazT����!�Ycztz�:���4c�Hi6��)=.O�GJ����n��	�#������eG&���f���ezX�-=:Az�x0=�[zTf�(z��wI���Sf�3��������Tf��i�4�����DJ����n��	�CJ����n��	�CJ����N�ayz�0R���vKO�AV����n�IP����a��wKO���E���J�-tes��}r=��;������E����n��	�C+3�GvK�L�R�5�GuK�N�R�
�GwKO���Ii���)="Ai�4;LO���AW6'��'A{���n��	��(v�G������P�pL��� =��G`zx���<=z$�����-=	*�$�Yazd��$(���f��i�4���1i�DJ����n��	�CJ����N�Qyz�(FUf��-;<Av���uK�L�E��!=2�����:F+��]��uKO���E����N��	*�&�Yaz�$��AW6'��'R�5�GvK�L�R�
�GuK�N�R�-�GwK�M�R���tJ�����0��:d�v�N��l����n�IP��s�j���ezT���CW6���'C�#0=�[zl��8Rz0=�Sz6w0���
�#���'H)��#��G&H)�����'Ai��4[L���������0=��ft+���A��O��������dGP�0L���� =���Czt�-=:AzE����n��	��(z$��wI���Sf����0=�[zx�����1=�[zd����l0=��f	]��4c�Hi���-=6AzHiv��)=�;��Q�wY������GP�pL���� =��G@zL�-=	J3u��KL�����:	h���U������;	���f��������f�����#���f��Q������f�����c�����!;�SvD��L���c��'Ae��}��q����2S'}����VI���lN��O��GbzX����!�Yazx��l�$`�'R�5�GtKO�R�
�GvK�L�R�-�GuKO��,Iiv��-=	J3u��k�N��]Ae7x��(v��vKO�A��1=�[zd��(��qy����1=�����&H��
��;�G'(���f������4kR�
�GvKO���Ii���VI���lN��O�4;L���� =�,
�w8�)=<	�}b$=�n������G����e�u��l_j��Um���O�.z���^-6���`�!� I�.r�Ib�e���C�Q�{�����=j������+����u���������gZ�����|��NO���9�������q���z��C�5*����+��E����g��37����g��Qy�z�����N�cVU�x�^�C7�7��Q�,���a����8+.��S�#�P��4ei���2�m�i)�b�m�Ti����8H����M���d��6A���t9�;��t9K+HRxt� EZA��c���
��&H�V����6A�%<,'��5����;��lq��nDw�;��C�&H�V���&�&H�V���	�&H�T����"H��
��&������c����pRxl� �n���-bLKwDN���4u#����dy�6A����d��6A����d��6A���$uG�	�%�$�G�R�%<��&���G��c����HRx�B��v��5�Z����Q�dQ�A*�V��,��&H�V��,p�&H�V����m�LKw�;�M�i	�&�G�R�%<���tu#���!���	R�$)<�M�&� -Y��E�.�MN�l�ai���m�LKw�$k��	2-������^���i��!uG�	2-����h��!�&�'�)<�M�"� I��m�TiI
�k�I+HK�A�[��
�����x��%<��eU���5-��v-��6���i	�kY�w�6FMKx)<� �F�����T�<J����*L�X����Va����d��6Q����d1�Va&�@��e]^��61��0��B����)��Y��X�a7�@HR��4��IJ�n�K+LNJ�i�F�2��!%��
S$&)A�U��I���q6GML�����P�9lb
Dmj6����aS j[�)/���nD��H�$R�0Uba�
�Z�i�� �*L�V��� �&���o�CJ�mfb$I	r��LL��M���W�9jb
D�r�����aS j��-o9��nD��p�4N�0EbaJ��M�0Uba�
�Z�i�� �*L�V��� �&L��iR�l�0� j����m�LL��}�n��0�������l�K+Lj���m������?����*L�X���Va���$%H�
31	2��Va&&A�� �&L��YR�\�0� j+4���4�����o�g��4��I_��[���_�uyZ�9���hC����I��������huk�nTF�47`p	��F�E��El�B�%��+���M����y�l�7��������k���e-7��w�!�fY���BP]������?d��>>y}x�}���*���WQ��G�Uv�?z�����������p�w��o��F����5���|3���g'����SY
u����%'��_z����e�&��.F:b���#���8�����3�wOO��g��sfN�o�35���&�2/����XsvZ%1��<���j%�H!:�`_eJ��7� ��)��MG!
O�����`n�b,�S�FT�����Z���L`Xj�ak'��m�,Ke�M��
F��j:�B����UHv�!9�RN��x���8#���
i���IuP8$�a{���mRt���C"���
�n0$Rl��D���)�UH|�!	�fD��6�B��iZ��Au��C�*�
��pd��6!�
��dd������C�6��B����*$���Hy0�B����&$�o.$E��k��A	���6mP�"K��
i���Y��UHT��25mB�T���4�B��:hRIc���!`���*$���Hy0�B����&$�o.$���#m"��H��
i��`Y1�UHTc��(�B��:GVCi���:XRT��6(���C2�wy�&���
Io0$Rl���C"���	�����a-AI6�o0"A�#i����6-Q�5v���Yk�UHT����"$�oNxN��j�`H�<h�����C�6)�UHz�!��`[�d7)�MH,�\H�������o0"A�/i����0�P�2v��@m�T�P�Au�6L*T�,�7��T�*�
�'�A����]�Iy0�B����*$���HypmB��d�6#f�mB�I��1Z��Ay�vLjT����<P;&5*�E�nP����
��
��$�A�
i�� Iy00$�c���!d���*$���Hyp�B��������
�!��,4�*$���YT�UHTj��AU��
��c���Y��A������7(���*�
��&���
i���Iy�8$�a{���mR\���Cr�]�-"�}�d�6#4�
�o0$Ach��`H�,��*�
��c��JS��������d7(���*�
��%���
i��`Iy�8$�a{���mR\���Cr���-"�}�d�6#�7�
�o0$A�jh��`H����*�
��c��
M������Z�$�����Iy��B����*$���Hy�8$�a{���mC�4�&$���]$�ML�o���o���b�����
����
��Ka�KC�j�1�u/d���c�k\�V1�F����x�/�o�U8����kt)��rrv3�?���b�\�_�}4{Y��R����Cfk\�>�:Q�>-��Zrs�����O�$>x��������3�sU1x1o9��77C�oo���|x����E�������^�����9���/�nGW�*���
G_�ru}q9}�.������k����������hxw�������G���
x!*[�iv7��8�<�������so���cCx	��W�����"�s�!���������������7��~\��gtx
�
��s<�]^��gW��w��hn��i�MW�sVu�y������,�T�:������>@E(C����P�@u�D��P���2?�D
aU�b5�'�:�IWw�C�?���Z9D���y�����#b�A�RG�e0ud��A�9zw������q�L��R����o���v���_��sy{���P�Su=���x���t~=��{���*�Z�Sx����F��YjR���Y�v�~�?����?�����=c��3�s���=�?����3�s���=�{����G�SX����\�O���~�������O�����a�&�����6����56 b��
�OD��=-�RWd��%����@�t�������)�z�<�����A��Mr�	e5J�\1mD�6R�H��
$�Q��
fUb�'�D���l2���5K����}����o?��Z��@��`�����Vt�j����y�����-��{�G��
��l��*�H�+��O�Bfk,Vv��C��LX^��5
Pv�p
���T?d����|��������4�>� O�1�-d�z3��=�!X�lTi��h>��bj������}_-�E�����+�T�M:���~{�������vg+
��������(;x�={���o�{s���b"zt���e�1�����v��?���xi��r�s�����^����eo��{�� ��#A��`�GX����I6fgFgW7w��8�������
�}g,�<>zP����w���?y]:|���~r�������lU;�p�����5�}�+��3����������_��b���Ww��Z��<���:Vz�#A����T<�y��������}�
���m�����^�X�rgQY�v��s�����'}��������������$A�=���YF�(�%=*#'z��Q���Q���O�)����hnA��)
����������3�K�3��:���-�V��6+�s���Q�9D��[e�-EMX�/��\���3����6�m(H1��/zY�0���T�%8^���4b���D}���:�3����1�����(a8y6"AJB�J��z4%j�(�d��hJ��Qb�������������Q�g�6Ey
�K����Dl%���hJ��QB�����l%������.F\N2K��2uu��q_��O�%����qe��8J��QB��+���Qb��R]U4%n�(�9)�:���-���WM��2JHy��(�D�����WM��2J,Y�=��]���b��R���Q�)Jx4%b�(�%�%�HR]�jRJ�)1[F	��*��]�pR^u,%�m%���hJ��QB���G��&��������hJ��Qb�������bD�#,����NQ��)[F��(��l��
#40Tom Lane
tgl@sss.pgh.pa.us
In reply to: Noah Misch (#39)
Re: postgres_fdw vs. force_parallel_mode on ppc

Noah Misch <noah@leadboat.com> writes:

I configured a copy of animal "mandrill" that way and launched a test run.
The postgres_fdw suite failed as attached. A manual "make -C contrib
installcheck" fails the same way on a ppc64 GNU/Linux box, but it passes on
x86_64 and aarch64. Since contrib test suites don't recognize TEMP_CONFIG,
check-world passes everywhere.

Hm, is this with or without the ppc-related atomics fix you just found?

regards, tom lane


#41Noah Misch
noah@leadboat.com
In reply to: Tom Lane (#40)
Re: postgres_fdw vs. force_parallel_mode on ppc

On Mon, Feb 15, 2016 at 06:07:48PM -0500, Tom Lane wrote:

Noah Misch <noah@leadboat.com> writes:

I configured a copy of animal "mandrill" that way and launched a test run.
The postgres_fdw suite failed as attached. A manual "make -C contrib
installcheck" fails the same way on a ppc64 GNU/Linux box, but it passes on
x86_64 and aarch64. Since contrib test suites don't recognize TEMP_CONFIG,
check-world passes everywhere.

Hm, is this with or without the ppc-related atomics fix you just found?

Without those. The ppc64 GNU/Linux configuration used gcc, though, and the
atomics change affects xlC only. Also, the postgres_fdw behavior does not
appear probabilistic; it failed twenty times in a row.


#42Robert Haas
robertmhaas@gmail.com
In reply to: Noah Misch (#39)
Re: postgres_fdw vs. force_parallel_mode on ppc

On Mon, Feb 15, 2016 at 5:52 PM, Noah Misch <noah@leadboat.com> wrote:

On Mon, Feb 08, 2016 at 02:49:27PM -0500, Robert Haas wrote:

Well, what I've done is push into the buildfarm code that will allow
us to do *the most exhaustive* testing that I know how to do in an
automated fashion. Which is to create a file that says this:

force_parallel_mode=regress
max_parallel_degree=2

And then run this: make check-world TEMP_CONFIG=/path/to/aforementioned/file

Now, that is not going to find bugs in the deadlock.c portion of the
group locking patch, but it's been wildly successful in finding bugs
in other parts of the parallelism code, and there might well be a few
more that we haven't found yet, which is why I'm hoping that we'll get
this procedure running regularly either on all buildfarm machines, or
on some subset of them, or on new animals that just do this.

I configured a copy of animal "mandrill" that way and launched a test run.
The postgres_fdw suite failed as attached. A manual "make -C contrib
installcheck" fails the same way on a ppc64 GNU/Linux box, but it passes on
x86_64 and aarch64. Since contrib test suites don't recognize TEMP_CONFIG,
check-world passes everywhere.

Oh, crap. I didn't realize that TEMP_CONFIG didn't affect the contrib
test suites. Is there any reason for that, or is it just kinda where
we ended up?

Retrying it the way you did it, I see the same errors here, so I think
this isn't a PPC-specific problem, but just a problem in general.
I've actually seen these kinds of errors before in earlier versions of
the testing code that eventually became force_parallel_mode. I got
fooled into believing I'd fixed the problem because of my confusion
about how TEMP_CONFIG worked. I think this is more likely to be a bug
in force_parallel_mode than a bug in the code that checks whether a
normal parallel query is safe, but I'll have to track it down before I
can say for sure.

Thanks for testing this. It's not delightful to discover that I
muffed this, but better to find it now than in 6 months.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#43Noah Misch
noah@leadboat.com
In reply to: Robert Haas (#42)
Re: postgres_fdw vs. force_parallel_mode on ppc

On Mon, Feb 15, 2016 at 07:31:40PM -0500, Robert Haas wrote:

On Mon, Feb 15, 2016 at 5:52 PM, Noah Misch <noah@leadboat.com> wrote:

On Mon, Feb 08, 2016 at 02:49:27PM -0500, Robert Haas wrote:

force_parallel_mode=regress
max_parallel_degree=2

And then run this: make check-world TEMP_CONFIG=/path/to/aforementioned/file

I configured a copy of animal "mandrill" that way and launched a test run.
The postgres_fdw suite failed as attached. A manual "make -C contrib
installcheck" fails the same way on a ppc64 GNU/Linux box, but it passes on
x86_64 and aarch64. Since contrib test suites don't recognize TEMP_CONFIG,
check-world passes everywhere.

Oh, crap. I didn't realize that TEMP_CONFIG didn't affect the contrib
test suites. Is there any reason for that, or is it just kinda where
we ended up?

To my knowledge, it's just the undesirable place we ended up.


#44Tom Lane
tgl@sss.pgh.pa.us
In reply to: Noah Misch (#43)
Re: postgres_fdw vs. force_parallel_mode on ppc

Noah Misch <noah@leadboat.com> writes:

On Mon, Feb 15, 2016 at 07:31:40PM -0500, Robert Haas wrote:

Oh, crap. I didn't realize that TEMP_CONFIG didn't affect the contrib
test suites. Is there any reason for that, or is it just kinda where
we ended up?

To my knowledge, it's just the undesirable place we ended up.

Yeah. +1 for fixing that, if it's not unreasonably painful.

regards, tom lane


#45Andrew Dunstan
andrew@dunslane.net
In reply to: Tom Lane (#44)
Re: postgres_fdw vs. force_parallel_mode on ppc

On 02/15/2016 07:57 PM, Tom Lane wrote:

Noah Misch <noah@leadboat.com> writes:

On Mon, Feb 15, 2016 at 07:31:40PM -0500, Robert Haas wrote:

Oh, crap. I didn't realize that TEMP_CONFIG didn't affect the contrib
test suites. Is there any reason for that, or is it just kinda where
we ended up?

To my knowledge, it's just the undesirable place we ended up.

Yeah. +1 for fixing that, if it's not unreasonably painful.

+1 for fixing it everywhere.

Historical note: back when TEMP_CONFIG was implemented, the main
regression set was just about the only set of tests the buildfarm ran
using a temp install. That wasn't even available for contrib and the
PLs, IIRC.

cheers

andrew


#46Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Peter Geoghegan (#36)
Re: a raft of parallelism-related bug fixes

On 2/8/16 4:39 PM, Peter Geoghegan wrote:

On Mon, Feb 8, 2016 at 2:35 PM, Andres Freund <andres@anarazel.de> wrote:

I think having a public git tree, that contains the current state, is
greatly helpful for that. Just announce that you're going to screw
wildly with history, and that you're not going to be terribly careful
about commit messages. That means observers can just do a fetch and a
reset --hard to see the absolutely latest and greatest. By all means
post a series to the list every now and then, but I think for minor
changes it's perfectly sane to say 'pull to see the fixups for the
issues you noticed'.

I would really like for there to be a way to do that more often. It
would be a significant time saver, because it removes problems with
minor bitrot.

Yeah, I think it's rather silly that we limit ourselves to only pushing
patches through a mailing list. That's OK (maybe even better) for simple
stuff, but once there's more than 1 patch it's a PITA.

There's an official github mirror of the code, ISTM it'd be good for
major features to get posted to github forks in their own branches. I
think that would also make it easy for buildfarm owners to run tests
against trusted forks/branches.
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com


#47Craig Ringer
craig@2ndquadrant.com
In reply to: Joshua D. Drake (#21)
Re: a raft of parallelism-related bug fixes

On 9 February 2016 at 03:00, Joshua D. Drake <jd@commandprompt.com> wrote:

I think this further points to the need for more reviewers and less
feature pushes. There are fundamental features that we could use, this is
one of them. It is certainly more important than say pgLogical or BDR (not
that those aren't useful but that we do have external solutions for that
problem).

Well, with the pglogical and BDR work most of the work has been along
similar lines - getting the infrastructure in place. Commit timestamps,
logical decoding, and other features that are useful way beyond
pglogical/BDR. Logical decoding in particular is rapidly becoming a really
significant feature as people start to see the potential for it in
integration and ETL processes.

I'm not sure anyone takes the pglogical downstream submission as a serious
attempt at inclusion in 9.6, and even submitting the upstream was
significantly a RFC at least as far as 9.6 is concerned. I don't think the
downstream submission took any significant time or attention away from
other work.

The main result has been useful discussions on remaining pieces needed for
DDL replication etc and some greater awareness among others in the
community about what's going on in the area. I think that's a generally
useful thing.

Oh: another thing that I would like to do is commit the isolation
tests I wrote for the deadlock detector a while back, which nobody has
reviewed either, though Tom and Alvaro seemed reasonably positive
about the concept. Right now, the deadlock.c part of this patch isn't
tested at all by any of our regression test suites, because NOTHING in
deadlock.c is tested by any of our regression test suites. You can
blow it up with dynamite and the regression tests are perfectly happy,
and that's pretty scary.

Test test test. Please commit.

Yeah. Enhancing the isolation tests would be useful. Please commit those
changes. Even if they broke something in the isolation tester - which isn't
likely - forward movement in test infrastructure is important and we should
IMO have a lower bar for committing changes there. They won't directly
affect code end users are running.

I should resurrect Abhijit's patch to allow the isolationtester to talk to
multiple servers. We'll want that when we're doing tests like "assert that
this change isn't visible on the replica before it becomes visible on the
master". (Well, except we violate that one with our funky
synchronous_commit implementation...)

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#48Amit Langote
Langote_Amit_f8@lab.ntt.co.jp
In reply to: Craig Ringer (#47)
Re: a raft of parallelism-related bug fixes

On 2016/02/18 16:38, Craig Ringer wrote:

I should resurrect Abhijit's patch to allow the isolationtester to talk to
multiple servers. We'll want that when we're doing tests like "assert that
this change isn't visible on the replica before it becomes visible on the
master". (Well, except we violate that one with our funky
synchronous_commit implementation...)

How much does (or does not) that overlap with the recovery test suite work
undertaken by Michael et al? I saw some talk of testing for patches in the
works on the N synchronous standbys thread.

Thanks,
Amit


#49Michael Paquier
michael.paquier@gmail.com
In reply to: Amit Langote (#48)
Re: a raft of parallelism-related bug fixes

On Thu, Feb 18, 2016 at 5:35 PM, Amit Langote
<Langote_Amit_f8@lab.ntt.co.jp> wrote:

On 2016/02/18 16:38, Craig Ringer wrote:

I should resurrect Abhijit's patch to allow the isolationtester to talk to
multiple servers. We'll want that when we're doing tests like "assert that
this change isn't visible on the replica before it becomes visible on the
master". (Well, except we violate that one with our funky
synchronous_commit implementation...)

How much does (or does not) that overlap with the recovery test suite work
undertaken by Michael et al? I saw some talk of testing for patches in
works on the N synchronous standbys thread.

This sounds like poll_query_until in PostgresNode.pm (already on HEAD)
where the query used is something on pg_stat_replication for a given
LSN to see if a standby has reached a given replay position.
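For example (the LSN and application_name below are only illustrative), the
polled query can simply check replay progress against a known position:

    SELECT pg_xlog_location_diff(replay_location, '0/3000000') >= 0
      FROM pg_stat_replication
     WHERE application_name = 'standby_1';
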
--
Michael


#50Craig Ringer
craig@2ndquadrant.com
In reply to: Michael Paquier (#49)
Re: a raft of parallelism-related bug fixes

On 18 February 2016 at 20:35, Michael Paquier <michael.paquier@gmail.com>
wrote:

On Thu, Feb 18, 2016 at 5:35 PM, Amit Langote
<Langote_Amit_f8@lab.ntt.co.jp> wrote:

On 2016/02/18 16:38, Craig Ringer wrote:

I should resurrect Abhijit's patch to allow the isolationtester to talk to
multiple servers. We'll want that when we're doing tests like "assert that
this change isn't visible on the replica before it becomes visible on the
master". (Well, except we violate that one with our funky
synchronous_commit implementation...)

How much does (or does not) that overlap with the recovery test suite work
undertaken by Michael et al? I saw some talk of testing for patches in the
works on the N synchronous standbys thread.

This sounds like poll_query_until in PostgresNode.pm (already on HEAD)
where the query used is something on pg_stat_replication for a given
LSN to see if a standby has reached a given replay position.

No, it's quite different, though that's something handy to have that I've
emulated in the isolationtester using a plpgsql function.

The isolationtester changes in question allow isolationtester specs to run
different blocks against different hosts/ports/DBs.

That lets you make assertions about replication behaviour. It was built
for BDR and I think we'll need something along those lines in core if/when
any kind of logical replication facilities land, for things like testing
failover slots, etc.

The patch is at:

http://git.postgresql.org/gitweb/?p=2ndquadrant_bdr.git;a=commit;h=d859de3b13d39d4eddd91f3e6f316a48d31ee0fe

and might be something it's worth having in core as we expand testing of
replication, failover, etc.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#51Michael Paquier
michael.paquier@gmail.com
In reply to: Craig Ringer (#50)
Re: a raft of parallelism-related bug fixes

On Thu, Feb 18, 2016 at 9:45 PM, Craig Ringer <craig@2ndquadrant.com> wrote:

That lets you make assertions about replication behaviour. It was built for
BDR and I think we'll need something along those lines in core if/when any
kind of logical replication facilities land, for things like testing
failover slots, etc.

The patch is at:

http://git.postgresql.org/gitweb/?p=2ndquadrant_bdr.git;a=commit;h=d859de3b13d39d4eddd91f3e6f316a48d31ee0fe

and might be something it's worth having in core as we expand testing of
replication, failover, etc.

Maybe there is an advantage to having it, but it's hard to form an
opinion without a complicated test case. Both of those things could
clearly work with each other at first sight. PostgresNode can set up a
set of nodes and this patch would be in charge of more complex
scenarios where the same connection or transaction block is needed.
--
Michael


#52Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#26)
1 attachment(s)
Re: a raft of parallelism-related bug fixes

Robert Haas <robertmhaas@gmail.com> writes:

On Mon, Feb 8, 2016 at 2:36 PM, Joshua D. Drake <jd@commandprompt.com> wrote:

I have no problem running any test cases you wish on a branch in a loop for
the next week and reporting back any errors.

Well, what I've done is push into the buildfarm code that will allow
us to do *the most exhaustive* testing that I know how to do in an
automated fashion. Which is to create a file that says this:

force_parallel_mode=regress
max_parallel_degree=2

I did a few dozen runs of the core regression tests with those settings
(using current HEAD plus my lockGroupLeaderIdentifier-ectomy patch).
Roughly one time in ten, it fails in the stats test, with diffs as
attached. I interpret this as meaning that parallel workers don't
reliably transmit stats to the stats collector, though maybe there
is something else happening.

regards, tom lane

Attachments:

regression.diffs (text/x-diff)
*** /home/postgres/pgsql/src/test/regress/expected/stats.out	Wed Mar  4 00:55:25 2015
--- /home/postgres/pgsql/src/test/regress/results/stats.out	Sun Feb 21 12:59:27 2016
***************
*** 148,158 ****
   WHERE relname like 'trunc_stats_test%' order by relname;
        relname      | n_tup_ins | n_tup_upd | n_tup_del | n_live_tup | n_dead_tup 
  -------------------+-----------+-----------+-----------+------------+------------
!  trunc_stats_test  |         3 |         0 |         0 |          0 |          0
!  trunc_stats_test1 |         4 |         2 |         1 |          1 |          0
!  trunc_stats_test2 |         1 |         0 |         0 |          1 |          0
!  trunc_stats_test3 |         4 |         0 |         0 |          2 |          2
!  trunc_stats_test4 |         2 |         0 |         0 |          0 |          2
  (5 rows)
  
  SELECT st.seq_scan >= pr.seq_scan + 1,
--- 148,158 ----
   WHERE relname like 'trunc_stats_test%' order by relname;
        relname      | n_tup_ins | n_tup_upd | n_tup_del | n_live_tup | n_dead_tup 
  -------------------+-----------+-----------+-----------+------------+------------
!  trunc_stats_test  |         0 |         0 |         0 |          0 |          0
!  trunc_stats_test1 |         0 |         0 |         0 |          0 |          0
!  trunc_stats_test2 |         0 |         0 |         0 |          0 |          0
!  trunc_stats_test3 |         0 |         0 |         0 |          0 |          0
!  trunc_stats_test4 |         0 |         0 |         0 |          0 |          0
  (5 rows)
  
  SELECT st.seq_scan >= pr.seq_scan + 1,

======================================================================

#53Thomas Munro
thomas.munro@enterprisedb.com
In reply to: Noah Misch (#41)
Re: postgres_fdw vs. force_parallel_mode on ppc

On Tue, Feb 16, 2016 at 12:12 PM, Noah Misch <noah@leadboat.com> wrote:

On Mon, Feb 15, 2016 at 06:07:48PM -0500, Tom Lane wrote:

Noah Misch <noah@leadboat.com> writes:

I configured a copy of animal "mandrill" that way and launched a test run.
The postgres_fdw suite failed as attached. A manual "make -C contrib
installcheck" fails the same way on a ppc64 GNU/Linux box, but it passes on
x86_64 and aarch64. Since contrib test suites don't recognize TEMP_CONFIG,
check-world passes everywhere.

Hm, is this with or without the ppc-related atomics fix you just found?

Without those. The ppc64 GNU/Linux configuration used gcc, though, and the
atomics change affects xlC only. Also, the postgres_fdw behavior does not
appear probabilistic; it failed twenty times in a row.

The postgres_fdw failure is a visibility-of-my-own-uncommitted-work
problem. The first command in a transaction updates a row via an FDW,
and then the SELECT expects to see the effects, but when run in a
background worker it creates a new FDW connection that can't see the
uncommitted UPDATE.
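
For illustration, a minimal SQL sketch of the failing shape (the foreign
table and column names here are just placeholders):

    BEGIN;
    -- the UPDATE goes through the leader's FDW connection
    UPDATE ft SET val = val + 1 WHERE id = 1;
    -- under force_parallel_mode this SELECT runs in a worker, whose own
    -- FDW connection cannot see the uncommitted UPDATE above
    SELECT val FROM ft WHERE id = 1;
    COMMIT;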

I wonder if parallelism of queries involving an FDW should not be
allowed if your transaction has written through the FDW.

--
Thomas Munro
http://www.enterprisedb.com


#54Robert Haas
robertmhaas@gmail.com
In reply to: Thomas Munro (#53)
Re: postgres_fdw vs. force_parallel_mode on ppc

On Mon, Feb 22, 2016 at 11:02 AM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:

On Tue, Feb 16, 2016 at 12:12 PM, Noah Misch <noah@leadboat.com> wrote:

On Mon, Feb 15, 2016 at 06:07:48PM -0500, Tom Lane wrote:

Noah Misch <noah@leadboat.com> writes:

I configured a copy of animal "mandrill" that way and launched a test run.
The postgres_fdw suite failed as attached. A manual "make -C contrib
installcheck" fails the same way on a ppc64 GNU/Linux box, but it passes on
x86_64 and aarch64. Since contrib test suites don't recognize TEMP_CONFIG,
check-world passes everywhere.

Hm, is this with or without the ppc-related atomics fix you just found?

Without those. The ppc64 GNU/Linux configuration used gcc, though, and the
atomics change affects xlC only. Also, the postgres_fdw behavior does not
appear probabilistic; it failed twenty times in a row.

The postgres_fdw failure is a visibility-of-my-own-uncommitted-work
problem. The first command in a transaction updates a row via an FDW,
and then the SELECT expects to see the effects, but when run in a
background worker it creates a new FDW connection that can't see the
uncommitted UPDATE.

I wonder if parallelism of queries involving an FDW should not be
allowed if your transaction has written through the FDW.

Foreign tables are supposed to be categorically excluded from
parallelism. Not sure why that's not working in this instance.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#55Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#54)
Re: postgres_fdw vs. force_parallel_mode on ppc

Robert Haas <robertmhaas@gmail.com> writes:

On Mon, Feb 22, 2016 at 11:02 AM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:

The postgres_fdw failure is a visibility-of-my-own-uncommitted-work
problem. The first command in a transaction updates a row via an FDW,
and then the SELECT expects to see the effects, but when run in a
background worker it creates a new FDW connection that can't see the
uncommitted UPDATE.

Foreign tables are supposed to be categorically excluded from
parallelism. Not sure why that's not working in this instance.

I've not looked at the test case to see if this is exactly what's
going wrong, but it's pretty easy to see how there might be a problem:
consider a STABLE user-defined function that does a SELECT from a foreign
table. If that function call gets pushed down into a parallel worker
then it would fail as described.

regards, tom lane


#56Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#55)
Re: postgres_fdw vs. force_parallel_mode on ppc

Robert Haas <robertmhaas@gmail.com> writes:

Foreign tables are supposed to be categorically excluded from
parallelism. Not sure why that's not working in this instance.

BTW, I wonder where you think that's supposed to be enforced, because
I sure can't find any such logic.

I suppose that has_parallel_hazard() would be the logical place to
notice foreign tables, but it currently doesn't even visit RTEs,
much less contain any code to check if their tables are foreign.
Or did you have another place in mind to do that?

regards, tom lane


#57Thomas Munro
thomas.munro@enterprisedb.com
In reply to: Tom Lane (#55)
Re: postgres_fdw vs. force_parallel_mode on ppc

On Tue, Feb 23, 2016 at 4:03 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Robert Haas <robertmhaas@gmail.com> writes:

On Mon, Feb 22, 2016 at 11:02 AM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:

The postgres_fdw failure is a visibility-of-my-own-uncommitted-work
problem. The first command in a transaction updates a row via an FDW,
and then the SELECT expects to see the effects, but when run in a
background worker it creates a new FDW connection that can't see the
uncommitted UPDATE.

Foreign tables are supposed to be categorically excluded from
parallelism. Not sure why that's not working in this instance.

I've not looked at the test case to see if this is exactly what's
going wrong, but it's pretty easy to see how there might be a problem:
consider a STABLE user-defined function that does a SELECT from a foreign
table. If that function call gets pushed down into a parallel worker
then it would fail as described.

I thought user defined functions were not a problem since it's the
user's responsibility to declare functions' parallel safety correctly.
The manual says: "In general, if a function is labeled as being safe
when it is restricted or unsafe, or if it is labeled as being
restricted when it is in fact unsafe, it may throw errors or produce
wrong answers when used in a parallel query"[1]. Uncommitted changes
on foreign tables are indeed invisible to functions declared as
PARALLEL SAFE, when run with force_parallel_mode = on,
max_parallel_degree = 2, but the default is UNSAFE and in that case
the containing query is never parallelised. Perhaps the documentation
could use a specific mention of this subtlety with FDWs in the
PARALLEL section?
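
For example, a wrapper mislabeled like this (names are illustrative) could
be pushed into a worker and silently miss the leader's uncommitted FDW
writes, whereas with the default PARALLEL UNSAFE label the containing query
is never parallelised at all:

    CREATE FUNCTION remote_val(int) RETURNS int
        LANGUAGE sql STABLE PARALLEL SAFE
        AS 'SELECT val FROM ft WHERE id = $1';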

The case of a plain old SELECT (as seen in the failing regression
test) is definitely a problem though and FDW access there needs to be
detected automatically. I also thought that
has_parallel_hazard_walker might be the right place for that logic, as
you suggested in your later message.

[1]: http://www.postgresql.org/docs/devel/static/sql-createfunction.html

--
Thomas Munro
http://www.enterprisedb.com


#58Tom Lane
tgl@sss.pgh.pa.us
In reply to: Thomas Munro (#57)
Re: postgres_fdw vs. force_parallel_mode on ppc

Thomas Munro <thomas.munro@enterprisedb.com> writes:

On Tue, Feb 23, 2016 at 4:03 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

I've not looked at the test case to see if this is exactly what's
going wrong, but it's pretty easy to see how there might be a problem:
consider a STABLE user-defined function that does a SELECT from a foreign
table. If that function call gets pushed down into a parallel worker
then it would fail as described.

I thought user defined functions were not a problem since it's the
user's responsibility to declare functions' parallel safety correctly.
The manual says: "In general, if a function is labeled as being safe
when it is restricted or unsafe, or if it is labeled as being
restricted when it is in fact unsafe, it may throw errors or produce
wrong answers when used in a parallel query"[1].

Hm. I'm not terribly happy with this its-the-users-problem approach to
things, mainly because I have little confidence that somebody couldn't
figure out a security exploit based on it.

The case of a plain old SELECT (as seen in the failing regression
test) is definitely a problem though and FDW access there needs to be
detected automatically.

Yes, the problem we're actually seeing in that regression test is not
dependent on a function wrapper.

regards, tom lane


#59Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#56)
Re: postgres_fdw vs. force_parallel_mode on ppc

On Tue, Feb 23, 2016 at 2:06 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Robert Haas <robertmhaas@gmail.com> writes:

Foreign tables are supposed to be categorically excluded from
parallelism. Not sure why that's not working in this instance.

BTW, I wonder where you think that's supposed to be enforced, because
I sure can't find any such logic.

I suppose that has_parallel_hazard() would be the logical place to
notice foreign tables, but it currently doesn't even visit RTEs,
much less contain any code to check if their tables are foreign.
Or did you have another place in mind to do that?

RTEs are checked in set_rel_consider_parallel(), and I thought there
was a check there related to foreign tables, but there isn't. Oops.
In view of 69d34408e5e7adcef8ef2f4e9c4f2919637e9a06, we shouldn't
blindly assume that foreign scans are not parallel-safe, but we can't
blindly assume the opposite either. Maybe we should assume that the
foreign scan is parallel-safe only if one or more of the new methods
introduced by the aforementioned commit are set, but actually that
doesn't seem quite right. That would tell us whether the scan itself
can be parallelized, not whether it's safe to run serially but within
a parallel worker. I think maybe we need a new FDW API that gets
called from set_rel_consider_parallel() with the root, rel, and rte as
arguments and which can return a Boolean. If the callback is not set,
assume false.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#60Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#59)
Re: postgres_fdw vs. force_parallel_mode on ppc

Robert Haas <robertmhaas@gmail.com> writes:

On Tue, Feb 23, 2016 at 2:06 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Robert Haas <robertmhaas@gmail.com> writes:

Foreign tables are supposed to be categorically excluded from
parallelism. Not sure why that's not working in this instance.

BTW, I wonder where you think that's supposed to be enforced, because
I sure can't find any such logic.

RTEs are checked in set_rel_consider_parallel(), and I thought there
was a check there related to foreign tables, but there isn't. Oops.

Even if there were, it would not fix this bug, because AFAICS the only
thing that set_rel_consider_parallel is chartered to do is set the
per-relation consider_parallel flag. The failure that is happening in
that regression test with force_parallel_mode turned on happens because
standard_planner plasters a Gather node at the top of the plan, causing
the whole plan including the FDW access to happen inside a parallel
worker. The only way to prevent that is to clear the
wholePlanParallelSafe flag, which as far as I can tell (not that any of
this is documented worth a damn) isn't something that
set_rel_consider_parallel is supposed to do.

It looks to me like there is a good deal of fuzzy thinking here about the
difference between locally parallelizable and globally parallelizable
plans, ie Gather at the top vs Gather somewhere else. I also note with
dismay that turning force_parallel_mode on seems to pretty much disable
any testing of local parallelism.

In view of 69d34408e5e7adcef8ef2f4e9c4f2919637e9a06, we shouldn't
blindly assume that foreign scans are not parallel-safe, but we can't
blindly assume the opposite either. Maybe we should assume that the
foreign scan is parallel-safe only if one or more of the new methods
introduced by the aforementioned commit are set, but actually that
doesn't seem quite right. That would tell us whether the scan itself
can be parallelized, not whether it's safe to run serially but within
a parallel worker. I think maybe we need a new FDW API that gets
called from set_rel_consider_parallel() with the root, rel, and rte as
arguments and which can return a Boolean. If the callback is not set,
assume false.

Meh. As things stand, postgres_fdw would have to aver that it can't ever
be safely parallelized, which doesn't seem like a very satisfactory answer
even if there are other FDWs that work differently (and which would those
be? None that use a socket-style connection to an external server.)

The commit you mention above seems to me to highlight the dangers of
accepting hook patches with no working use-case to back them up.
AFAICT it's basically useless for typical FDWs because of this
multiple-connection problem.

regards, tom lane


#61Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#60)
Re: postgres_fdw vs. force_parallel_mode on ppc

On Tue, Feb 23, 2016 at 11:47 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Even if there were, it would not fix this bug, because AFAICS the only
thing that set_rel_consider_parallel is chartered to do is set the
per-relation consider_parallel flag. The failure that is happening in
that regression test with force_parallel_mode turned on happens because
standard_planner plasters a Gather node at the top of the plan, causing
the whole plan including the FDW access to happen inside a parallel
worker. The only way to prevent that is to clear the
wholePlanParallelSafe flag, which as far as I can tell (not that any of
this is documented worth a damn) isn't something that
set_rel_consider_parallel is supposed to do.

Hmm. Well, if you tested it, or looked at the places where
wholePlanParallelSafe is cleared, you would find that it DOES fix the
bug. create_plan() clears wholePlanParallelSafe if the plan is not
parallel-safe, and the plan won't be parallel-safe unless
consider_parallel was set for the underlying relation. In case you'd
like to test it for yourself, here's the PoC patch I wrote:

diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index bcb668f..8a4179e 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -527,6 +527,11 @@ set_rel_consider_parallel(PlannerInfo *root, RelOptInfo *rel,
                                        return;
                                return;
                        }
+
+                       /* Not for foreign tables. */
+                       if (rte->relkind == RELKIND_FOREIGN_TABLE)
+                               return;
+
                        break;

case RTE_SUBQUERY:

Adding that makes the postgres_fdw case pass.

It looks to me like there is a good deal of fuzzy thinking here about the
difference between locally parallelizable and globally parallelizable
plans, ie Gather at the top vs Gather somewhere else.

If you have a specific complaint, I'm happy to try to improve things,
or you can. I think however that it is also possible that you haven't
fully understood the code I've spent the last year or so developing
yet, possibly because I haven't documented it well enough, but
possibly also because you haven't spent much time looking at it yet.
I'm glad you are, by the way, because I'm sure there are a bunch of
things here that you can improve over what I was able to do,
especially on the planner side of things, and that would be great.
However, a bit of forbearance would be appreciated.

I also note with
dismay that turning force_parallel_mode on seems to pretty much disable
any testing of local parallelism.

No, I don't think so. It doesn't push a Gather node on top of a plan
that already contains a Gather, because such a plan isn't
parallel_safe. Nor does it suppress generation of parallel paths
otherwise.

In view of 69d34408e5e7adcef8ef2f4e9c4f2919637e9a06, we shouldn't
blindly assume that foreign scans are not parallel-safe, but we can't
blindly assume the opposite either. Maybe we should assume that the
foreign scan is parallel-safe only if one or more of the new methods
introduced by the aforementioned commit are set, but actually that
doesn't seem quite right. That would tell us whether the scan itself
can be parallelized, not whether it's safe to run serially but within
a parallel worker. I think maybe we need a new FDW API that gets
called from set_rel_consider_parallel() with the root, rel, and rte as
arguments and which can return a Boolean. If the callback is not set,
assume false.

Meh. As things stand, postgres_fdw would have to aver that it can't ever
be safely parallelized, which doesn't seem like a very satisfactory answer
even if there are other FDWs that work differently (and which would those
be? None that use a socket-style connection to an external server.)

file_fdw is parallel-safe, and KaiGai posted a patch that makes it
parallel-aware, though that would have needed more work than I'm
willing to put in right now to make it committable. So in other
words...

The commit you mention above seems to me to highlight the dangers of
accepting hook patches with no working use-case to back them up.
AFAICT it's basically useless for typical FDWs because of this
multiple-connection problem.

...I didn't ignore this principle.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#62Thomas Munro
thomas.munro@enterprisedb.com
In reply to: Robert Haas (#59)
2 attachment(s)
Re: postgres_fdw vs. force_parallel_mode on ppc

On Tue, Feb 23, 2016 at 6:45 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Tue, Feb 23, 2016 at 2:06 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Robert Haas <robertmhaas@gmail.com> writes:

Foreign tables are supposed to be categorically excluded from
parallelism. Not sure why that's not working in this instance.

BTW, I wonder where you think that's supposed to be enforced, because
I sure can't find any such logic.

I suppose that has_parallel_hazard() would be the logical place to
notice foreign tables, but it currently doesn't even visit RTEs,
much less contain any code to check if their tables are foreign.
Or did you have another place in mind to do that?

RTEs are checked in set_rel_consider_parallel(), and I thought there
was a check there related to foreign tables, but there isn't. Oops.
In view of 69d34408e5e7adcef8ef2f4e9c4f2919637e9a06, we shouldn't
blindly assume that foreign scans are not parallel-safe, but we can't
blindly assume the opposite either. Maybe we should assume that the
foreign scan is parallel-safe only if one or more of the new methods
introduced by the aforementioned commit are set, but actually that
doesn't seem quite right. That would tell us whether the scan itself
can be parallelized, not whether it's safe to run serially but within
a parallel worker. I think maybe we need a new FDW API that gets
called from set_rel_consider_parallel() with the root, rel, and rte as
arguments and which can return a Boolean. If the callback is not set,
assume false.

Here is a first pass at that. The patch adds
IsForeignScanParallelSafe to the FDW API. postgres_fdw returns false
(unnecessary but useful to verify that the regression test breaks if
you change it to true), and others don't provide the function so fall
back to false.

I suspect there may be opportunities to return true even if snapshots
and uncommitted reads aren't magically coordinated among workers. For
example: users of MongoDB-type systems and text files have no
expectation of either snapshot semantics or transaction isolation in
the first place, so doing stuff in parallel won't be any less safe
than usual as far as visibility is concerned; postgres_fdw could in
theory export/import snapshots and allow parallelism in limited cases
if it can somehow prove there have been no uncommitted writes; and
non-MVCC/snapshot RDBMSs might be OK in lower isolation levels if you
haven't written anything or have explicitly opted in to uncommitted
reads (otherwise you'd risk invisible deadlock against the leader when
trying to read what it has written).
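
For reference, the existing SQL-level snapshot machinery that such
coordination could build on looks like this (the snapshot identifier is
illustrative; postgres_fdw does none of this today):

    -- in the session whose view of the data should be shared
    SELECT pg_export_snapshot();   -- returns an identifier, e.g. '000003A1-1'

    -- in another session that should see exactly the same snapshot
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SET TRANSACTION SNAPSHOT '000003A1-1';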

Please also find attached a tiny patch to respect TEMP_CONFIG for contribs.

--
Thomas Munro
http://www.enterprisedb.com

Attachments:

temp-config-for-contribs.patch (application/octet-stream)
diff --git a/contrib/contrib-global.mk b/contrib/contrib-global.mk
index 6ac8e9b..ba49610 100644
--- a/contrib/contrib-global.mk
+++ b/contrib/contrib-global.mk
@@ -1,4 +1,9 @@
 # contrib/contrib-global.mk
 
+# file with extra config for temp build
+ifdef TEMP_CONFIG
+REGRESS_OPTS += --temp-config=$(TEMP_CONFIG)
+endif
+
 NO_PGXS = 1
 include $(top_srcdir)/src/makefiles/pgxs.mk

fdw-parallel-safe-api.patch (application/octet-stream)
diff --git a/contrib/postgres_fdw/postgres_fdw.c b/contrib/postgres_fdw/postgres_fdw.c
index d79e4cc..9de1129 100644
--- a/contrib/postgres_fdw/postgres_fdw.c
+++ b/contrib/postgres_fdw/postgres_fdw.c
@@ -283,6 +283,8 @@ static void postgresGetForeignJoinPaths(PlannerInfo *root,
 							JoinPathExtraData *extra);
 static bool postgresRecheckForeignScan(ForeignScanState *node,
 						   TupleTableSlot *slot);
+static bool postgresIsForeignScanParallelSafe(PlannerInfo *root, RelOptInfo *rel,
+											  RangeTblEntry *rte);
 static List *get_useful_pathkeys_for_relation(PlannerInfo *root,
 								 RelOptInfo *rel);
 static List *get_useful_ecs_for_relation(PlannerInfo *root, RelOptInfo *rel);
@@ -350,6 +352,7 @@ postgres_fdw_handler(PG_FUNCTION_ARGS)
 	routine->IterateForeignScan = postgresIterateForeignScan;
 	routine->ReScanForeignScan = postgresReScanForeignScan;
 	routine->EndForeignScan = postgresEndForeignScan;
+	routine->IsForeignScanParallelSafe = postgresIsForeignScanParallelSafe;
 
 	/* Functions for updating foreign tables */
 	routine->AddForeignUpdateTargets = postgresAddForeignUpdateTargets;
@@ -2055,6 +2058,22 @@ postgresExplainForeignModify(ModifyTableState *mtstate,
 	}
 }
 
+/*
+ * postgresIsForeignScanParallelSafe
+ * 		Check if parallel scans are safe.
+ *
+ * In theory we could answer yes if we could somehow arrange to use an
+ * exported snapshot shared by backends, and we somehow knew that there could
+ * be no uncommitted changes in the lead process's transaction.  Only then
+ * could we be sure that a worker with its own FDW connection has the same
+ * view of the remote database as the leader.
+ */
+static bool
+postgresIsForeignScanParallelSafe(PlannerInfo *root, RelOptInfo *rel,
+								  RangeTblEntry *rte)
+{
+	return false;
+}
 
 /*
  * estimate_path_cost_size
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index bcb668f..99b315c 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -527,6 +527,21 @@ set_rel_consider_parallel(PlannerInfo *root, RelOptInfo *rel,
 					return;
 				return;
 			}
+
+			/*
+			 * Ask the FDW whether its scans are safe to run inside a parallel
+			 * worker.  Parallel workers create separate FDW connections which
+			 * may not be appropriately coordinated between workers and the
+			 * leader, so we assume they are not safe unless the FDW
+			 * explicitly tells us otherwise.
+			 */
+			if (rte->relkind == RELKIND_FOREIGN_TABLE)
+			{
+				Assert(rel->fdwroutine);
+				if (!rel->fdwroutine->IsForeignScanParallelSafe)
+					return;
+				if (!rel->fdwroutine->IsForeignScanParallelSafe(root, rel, rte))
+					return;
+			}
 			break;
 
 		case RTE_SUBQUERY:
diff --git a/src/include/foreign/fdwapi.h b/src/include/foreign/fdwapi.h
index 9fafab0..0c8251e 100644
--- a/src/include/foreign/fdwapi.h
+++ b/src/include/foreign/fdwapi.h
@@ -131,6 +131,10 @@ typedef void (*InitializeDSMForeignScan_function) (ForeignScanState *node,
 typedef void (*InitializeWorkerForeignScan_function) (ForeignScanState *node,
 													  shm_toc *toc,
 													  void *coordinate);
+typedef bool (*IsForeignScanParallelSafe_function) (PlannerInfo *root,
+													RelOptInfo *rel,
+													RangeTblEntry *rte);
+
 /*
  * FdwRoutine is the struct returned by a foreign-data wrapper's handler
  * function.  It provides pointers to the callback functions needed by the
@@ -188,6 +192,7 @@ typedef struct FdwRoutine
 	ImportForeignSchema_function ImportForeignSchema;
 
 	/* Support functions for parallelism under Gather node */
+	IsForeignScanParallelSafe_function IsForeignScanParallelSafe;
 	EstimateDSMForeignScan_function EstimateDSMForeignScan;
 	InitializeDSMForeignScan_function InitializeDSMForeignScan;
 	InitializeWorkerForeignScan_function InitializeWorkerForeignScan;
#63Thomas Munro
thomas.munro@enterprisedb.com
In reply to: Thomas Munro (#62)
1 attachment(s)
Re: postgres_fdw vs. force_parallel_mode on ppc

On Wed, Feb 24, 2016 at 5:48 PM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:

Here is a first pass at that. [...]

On Wed, Feb 24, 2016 at 1:23 AM, Robert Haas <robertmhaas@gmail.com> wrote:

file_fdw is parallel-safe, ...

And here is a patch to apply on top of the last one, to make file_fdw
return true. But does it really work correctly under parallelism?

--
Thomas Munro
http://www.enterprisedb.com

Attachments:

file-fdw-parallel-safe.patch (application/octet-stream)
diff --git a/contrib/file_fdw/file_fdw.c b/contrib/file_fdw/file_fdw.c
index cf12710..6dbb4d0 100644
--- a/contrib/file_fdw/file_fdw.c
+++ b/contrib/file_fdw/file_fdw.c
@@ -131,6 +131,8 @@ static void fileEndForeignScan(ForeignScanState *node);
 static bool fileAnalyzeForeignTable(Relation relation,
 						AcquireSampleRowsFunc *func,
 						BlockNumber *totalpages);
+static bool fileIsForeignScanParallelSafe(PlannerInfo *root, RelOptInfo *rel,
+										  RangeTblEntry *rte);
 
 /*
  * Helper functions
@@ -170,6 +172,7 @@ file_fdw_handler(PG_FUNCTION_ARGS)
 	fdwroutine->ReScanForeignScan = fileReScanForeignScan;
 	fdwroutine->EndForeignScan = fileEndForeignScan;
 	fdwroutine->AnalyzeForeignTable = fileAnalyzeForeignTable;
+	fdwroutine->IsForeignScanParallelSafe = fileIsForeignScanParallelSafe;
 
 	PG_RETURN_POINTER(fdwroutine);
 }
@@ -762,6 +765,17 @@ fileAnalyzeForeignTable(Relation relation,
 }
 
 /*
+ * fileIsForeignScanParallelSafe
+ * 		Check if parallel scans are safe.
+ */
+static bool
+fileIsForeignScanParallelSafe(PlannerInfo *root, RelOptInfo *rel,
+								  RangeTblEntry *rte)
+{
+	return true;
+}
+
+/*
  * check_selective_binary_conversion
  *
  * Check to see if it's useful to convert only a subset of the file's columns
#64Robert Haas
robertmhaas@gmail.com
In reply to: Thomas Munro (#63)
Re: postgres_fdw vs. force_parallel_mode on ppc

On Wed, Feb 24, 2016 at 12:59 PM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:

On Wed, Feb 24, 2016 at 5:48 PM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:

Here is a first pass at that. [...]

On Wed, Feb 24, 2016 at 1:23 AM, Robert Haas <robertmhaas@gmail.com> wrote:

file_fdw is parallel-safe, ...

And here is a patch to apply on top of the last one, to make file_fdw
return true. But does it really work correctly under parallelism?

Seems like it. Running the regression tests for file_fdw with
force_parallel_mode=regress and max_parallel_degree>0 passes; you can
verify that parallelism is actually being exercised by using
force_parallel_mode=on instead, which results in some predictable
failures. From a theoretical point of view, there's no reason I can
see why reading a file shouldn't work just as well from a parallel
worker as from the leader: both have the same view of the filesystem,
and in neither case are we trying to write any data; we're just trying
to read it.
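
For anyone who wants to see the effect by hand, something like the
following should do it (just a sketch; "pglist_csv" is a placeholder
for whatever file_fdw foreign table you have lying around):

-- "pglist_csv" stands in for any existing file_fdw foreign table
SET max_parallel_degree = 2;
SET force_parallel_mode = on;   -- 'regress' would hide the Gather in EXPLAIN
EXPLAIN (COSTS OFF) SELECT count(*) FROM pglist_csv;

With the file_fdw patch applied, the plan can now include a Gather node
with the Foreign Scan underneath it; without the patch, no Gather
appears because the foreign scan is not considered parallel-safe.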

Committed these patches after revising the comment you wrote and
adding documentation.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#65Noah Misch
noah@leadboat.com
In reply to: Robert Haas (#64)
Re: postgres_fdw vs. force_parallel_mode on ppc

On Fri, Feb 26, 2016 at 04:16:58PM +0530, Robert Haas wrote:

Committed these patches after revising the comment you wrote and
adding documentation.

I've modified buildfarm member mandrill to use force_parallel_mode=regress and
max_parallel_degree=5; a full run passes. We'll now see if it intermittently
fails the stats test, like Tom witnessed:
/messages/by-id/30385.1456077923@sss.pgh.pa.us


#66Robert Haas
robertmhaas@gmail.com
In reply to: Noah Misch (#65)
Re: postgres_fdw vs. force_parallel_mode on ppc

On Sat, Feb 27, 2016 at 7:05 PM, Noah Misch <noah@leadboat.com> wrote:

On Fri, Feb 26, 2016 at 04:16:58PM +0530, Robert Haas wrote:

Committed these patches after revising the comment you wrote and
adding documentation.

I've modified buildfarm member mandrill to use force_parallel_mode=regress and
max_parallel_degree=5; a full run passes. We'll now see if it intermittently
fails the stats test, like Tom witnessed:
/messages/by-id/30385.1456077923@sss.pgh.pa.us

Thank you.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#67Tom Lane
tgl@sss.pgh.pa.us
In reply to: Noah Misch (#65)
Re: postgres_fdw vs. force_parallel_mode on ppc

Noah Misch <noah@leadboat.com> writes:

I've modified buildfarm member mandrill to use force_parallel_mode=regress and
max_parallel_degree=5; a full run passes. We'll now see if it intermittently
fails the stats test, like Tom witnessed:
/messages/by-id/30385.1456077923@sss.pgh.pa.us

http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2016-03-02%2023%3A34%3A10

regards, tom lane


#68Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#67)
1 attachment(s)
Re: postgres_fdw vs. force_parallel_mode on ppc

On Thu, Mar 3, 2016 at 1:10 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Noah Misch <noah@leadboat.com> writes:

I've modified buildfarm member mandrill to use force_parallel_mode=regress and
max_parallel_degree=5; a full run passes. We'll now see if it intermittently
fails the stats test, like Tom witnessed:
/messages/by-id/30385.1456077923@sss.pgh.pa.us

http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2016-03-02%2023%3A34%3A10

A couple of my colleagues have been looking into this. It's not
entirely clear to me what's going on here yet, but it looks like the
stats get there if you wait long enough. Rahila Syed was able to
reproduce the problem and says that the attached patch fixes it. But
I don't quite understand why this should fix it.

BTW, this comment is obsolete:

-- force the rate-limiting logic in pgstat_report_tabstat() to time out
-- and send a message
SELECT pg_sleep(1.0);
pg_sleep
----------

(1 row)

That function was renamed in commit
93c701edc6c6f065cd25f77f63ab31aff085d6ac, but apparently Tom forgot to
grep for other uses of that identifier name.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachments:

wait-for-trunc-stats.patch (application/x-download)
diff --git a/src/test/regress/expected/stats.out b/src/test/regress/expected/stats.out
index f5be70f..9969d91 100644
--- a/src/test/regress/expected/stats.out
+++ b/src/test/regress/expected/stats.out
@@ -68,6 +68,32 @@ CREATE TABLE trunc_stats_test1(id serial);
 CREATE TABLE trunc_stats_test2(id serial);
 CREATE TABLE trunc_stats_test3(id serial);
 CREATE TABLE trunc_stats_test4(id serial);
+create function wait_for_trunc_stats() returns void as $$
+declare
+  start_time timestamptz := clock_timestamp();
+  updated bool;
+begin
+  -- we don't want to wait forever; loop will exit after 30 seconds
+  for i in 1 .. 300 loop
+
+    SELECT (n_tup_ins > 0) INTO updated
+      FROM pg_stat_user_tables WHERE relname ='trunc_stats_test';
+
+    exit when updated;
+
+    -- wait a little
+    perform pg_sleep(0.1);
+
+    -- reset stats snapshot so we can test again
+    perform pg_stat_clear_snapshot();
+
+  end loop;
+
+  -- report time waited in postmaster log (where it won't change test output)
+  raise log 'wait_for_trunc_stats delayed % seconds',
+    extract(epoch from clock_timestamp() - start_time);
+end
+$$ language plpgsql;
 -- check that n_live_tup is reset to 0 after truncate
 INSERT INTO trunc_stats_test DEFAULT VALUES;
 INSERT INTO trunc_stats_test DEFAULT VALUES;
@@ -142,6 +168,12 @@ SELECT wait_for_stats();
  
 (1 row)
 
+SELECT wait_for_trunc_stats();
+ wait_for_trunc_stats 
+----------------------
+ 
+(1 row)
+
 -- check effects
 SELECT relname, n_tup_ins, n_tup_upd, n_tup_del, n_live_tup, n_dead_tup
   FROM pg_stat_user_tables
@@ -183,4 +215,5 @@ FROM prevstats AS pr;
 (1 row)
 
 DROP TABLE trunc_stats_test, trunc_stats_test1, trunc_stats_test2, trunc_stats_test3, trunc_stats_test4;
+DROP FUNCTION wait_for_trunc_stats();
 -- End of Stats Test
diff --git a/src/test/regress/sql/stats.sql b/src/test/regress/sql/stats.sql
index cd2d592..fc045ed 100644
--- a/src/test/regress/sql/stats.sql
+++ b/src/test/regress/sql/stats.sql
@@ -65,6 +65,33 @@ CREATE TABLE trunc_stats_test2(id serial);
 CREATE TABLE trunc_stats_test3(id serial);
 CREATE TABLE trunc_stats_test4(id serial);
 
+create function wait_for_trunc_stats() returns void as $$
+declare
+  start_time timestamptz := clock_timestamp();
+  updated bool;
+begin
+  -- we don't want to wait forever; loop will exit after 30 seconds
+  for i in 1 .. 300 loop
+
+    SELECT (n_tup_ins > 0) INTO updated
+      FROM pg_stat_user_tables WHERE relname ='trunc_stats_test';
+
+    exit when updated;
+
+    -- wait a little
+    perform pg_sleep(0.1);
+
+    -- reset stats snapshot so we can test again
+    perform pg_stat_clear_snapshot();
+
+  end loop;
+
+  -- report time waited in postmaster log (where it won't change test output)
+  raise log 'wait_for_trunc_stats delayed % seconds',
+    extract(epoch from clock_timestamp() - start_time);
+end
+$$ language plpgsql;
+
 -- check that n_live_tup is reset to 0 after truncate
 INSERT INTO trunc_stats_test DEFAULT VALUES;
 INSERT INTO trunc_stats_test DEFAULT VALUES;
@@ -128,6 +155,7 @@ SELECT pg_sleep(1.0);
 -- wait for stats collector to update
 SELECT wait_for_stats();
 
+SELECT wait_for_trunc_stats();
 -- check effects
 SELECT relname, n_tup_ins, n_tup_upd, n_tup_del, n_live_tup, n_dead_tup
   FROM pg_stat_user_tables
@@ -149,4 +177,5 @@ SELECT pr.snap_ts < pg_stat_get_snapshot_timestamp() as snapshot_newer
 FROM prevstats AS pr;
 
 DROP TABLE trunc_stats_test, trunc_stats_test1, trunc_stats_test2, trunc_stats_test3, trunc_stats_test4;
+DROP FUNCTION wait_for_trunc_stats();
 -- End of Stats Test
#69Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#68)
Re: postgres_fdw vs. force_parallel_mode on ppc

Robert Haas <robertmhaas@gmail.com> writes:

A couple of my colleagues have been looking into this. It's not
entirely clear to me what's going on here yet, but it looks like the
stats get there if you wait long enough. Rahila Syed was able to
reproduce the problem and says that the attached patch fixes it. But
I don't quite understand why this should fix it.

I don't like this patch much. While the new function is not bad in
itself, it looks really weird to call it immediately after the other
wait function. And the reason for that, AFAICT, is that somebody dropped
the entire "truncation stats" test sequence into the middle of unrelated
tests, evidently in the vain hope that that way they could piggyback
on the existing wait. Which these failures are showing us is wrong.

I think we should move all the inserted logic down so that it's not in the
middle of unrelated testing.

BTW, this comment is obsolete:

-- force the rate-limiting logic in pgstat_report_tabstat() to time out
-- and send a message
SELECT pg_sleep(1.0);
pg_sleep
----------

(1 row)

That function was renamed in commit
93c701edc6c6f065cd25f77f63ab31aff085d6ac, but apparently Tom forgot to
grep for other uses of that identifier name.

Duh :-(. Actually, do we need that sleep at all anymore? Seems like
wait_for_stats ought to cover it.

regards, tom lane


#70Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#69)
Re: postgres_fdw vs. force_parallel_mode on ppc

On Fri, Mar 4, 2016 at 12:46 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Robert Haas <robertmhaas@gmail.com> writes:

A couple of my colleagues have been looking into this. It's not
entirely clear to me what's going on here yet, but it looks like the
stats get there if you wait long enough. Rahila Syed was able to
reproduce the problem and says that the attached patch fixes it. But
I don't quite understand why this should fix it.

I don't like this patch much. While the new function is not bad in
itself, it looks really weird to call it immediately after the other
wait function. And the reason for that, AFAICT, is that somebody dropped
the entire "truncation stats" test sequence into the middle of unrelated
tests, evidently in the vain hope that that way they could piggyback
on the existing wait. Which these failures are showing us is wrong.

I think we should move all the inserted logic down so that it's not in the
middle of unrelated testing.

Sure. If you have an idea what the right thing to do is, please go
ahead. I actually don't have a clear idea what's going on here. I
guess it's that the wait_for_stats() guarantees that the stats message
from the index insertion has been received but the status messages
from the "trunc" tables might not have gotten there yet. I thought
maybe that works without parallelism because all of those messages are
coming from the same backend, and therefore if you have the later one
you must have all of the earlier ones, too. But if some of the queries
run in parallel workers, their stats messages come from different
processes, so they can reach the stats collector out of order.

But it's not that after all, because when I run the regression tests
with the pg_sleep removed, I get this:

*** /Users/rhaas/pgsql/src/test/regress/expected/stats.out    2016-03-04 08:55:33.000000000 -0500
--- /Users/rhaas/pgsql/src/test/regress/results/stats.out     2016-03-04 09:00:29.000000000 -0500
***************
*** 127,140 ****
       1
  (1 row)
- -- force the rate-limiting logic in pgstat_report_tabstat() to time out
- -- and send a message
- SELECT pg_sleep(1.0);
-  pg_sleep
- ----------
-
- (1 row)
-
  -- wait for stats collector to update
  SELECT wait_for_stats();
   wait_for_stats
--- 127,132 ----
***************
*** 148,158 ****
   WHERE relname like 'trunc_stats_test%' order by relname;
       relname      | n_tup_ins | n_tup_upd | n_tup_del | n_live_tup | n_dead_tup
  -------------------+-----------+-----------+-----------+------------+------------
!  trunc_stats_test  |         3 |         0 |         0 |          0 |          0
!  trunc_stats_test1 |         4 |         2 |         1 |          1 |          0
!  trunc_stats_test2 |         1 |         0 |         0 |          1 |          0
!  trunc_stats_test3 |         4 |         0 |         0 |          2 |          2
!  trunc_stats_test4 |         2 |         0 |         0 |          0 |          2
  (5 rows)
  SELECT st.seq_scan >= pr.seq_scan + 1,
--- 140,150 ----
   WHERE relname like 'trunc_stats_test%' order by relname;
       relname      | n_tup_ins | n_tup_upd | n_tup_del | n_live_tup | n_dead_tup
  -------------------+-----------+-----------+-----------+------------+------------
!  trunc_stats_test  |         0 |         0 |         0 |          0 |          0
!  trunc_stats_test1 |         0 |         0 |         0 |          0 |          0
!  trunc_stats_test2 |         0 |         0 |         0 |          0 |          0
!  trunc_stats_test3 |         0 |         0 |         0 |          0 |          0
!  trunc_stats_test4 |         0 |         0 |         0 |          0 |          0
  (5 rows)

SELECT st.seq_scan >= pr.seq_scan + 1,
***************
*** 163,169 ****
   WHERE st.relname='tenk2' AND cl.relname='tenk2';
   ?column? | ?column? | ?column? | ?column?
  ----------+----------+----------+----------
!  t        | t        | t        | t
  (1 row)

  SELECT st.heap_blks_read + st.heap_blks_hit >= pr.heap_blks + cl.relpages,
--- 155,161 ----
   WHERE st.relname='tenk2' AND cl.relname='tenk2';
   ?column? | ?column? | ?column? | ?column?
  ----------+----------+----------+----------
!  f        | f        | f        | f
  (1 row)

SELECT st.heap_blks_read + st.heap_blks_hit >= pr.heap_blks + cl.relpages,
***************
*** 172,178 ****
   WHERE st.relname='tenk2' AND cl.relname='tenk2';
   ?column? | ?column?
  ----------+----------
!  t        | t
  (1 row)

  SELECT pr.snap_ts < pg_stat_get_snapshot_timestamp() as snapshot_newer
--- 164,170 ----
   WHERE st.relname='tenk2' AND cl.relname='tenk2';
   ?column? | ?column?
  ----------+----------
!  t        | f
  (1 row)

SELECT pr.snap_ts < pg_stat_get_snapshot_timestamp() as snapshot_newer

That looks suspiciously similar to the failure we're getting with the
force_parallel_mode testing, but I'm still confused.

BTW, this comment is obsolete:

-- force the rate-limiting logic in pgstat_report_tabstat() to time out
-- and send a message
SELECT pg_sleep(1.0);
pg_sleep
----------

(1 row)

That function was renamed in commit
93c701edc6c6f065cd25f77f63ab31aff085d6ac, but apparently Tom forgot to
grep for other uses of that identifier name.

Duh :-(. Actually, do we need that sleep at all anymore? Seems like
wait_for_stats ought to cover it.

Yeah.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#71Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#70)
Re: postgres_fdw vs. force_parallel_mode on ppc

Robert Haas <robertmhaas@gmail.com> writes:

Sure. If you have an idea what the right thing to do is, please go
ahead.

Yeah, I'll modify the patch and commit sometime later today.

I actually don't have a clear idea what's going on here. I
guess it's that the wait_for_stats() guarantees that the stats message
from the index insertion has been received but the status messages
from the "trunc" tables might not have gotten there yet.

That's what it looks like to me. I now think that the apparent
connection to parallel query is a mirage. The reason we've only
seen a few cases so far is that the flapping test is new: it
was added in commit d42358efb16cc811, on 20 Feb. If we left it
as-is, I think we'd eventually see the same failure without forcing
parallel mode. In fact, that's pretty much what you describe below,
isn't it? The pg_sleep is sort of half-bakedly substituting for
a proper wait.

regards, tom lane


#72Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Tom Lane (#71)
Re: postgres_fdw vs. force_parallel_mode on ppc

Tom Lane wrote:

Robert Haas <robertmhaas@gmail.com> writes:

Sure. If you have an idea what the right thing to do is, please go
ahead.

Yeah, I'll modify the patch and commit sometime later today.

I actually don't have a clear idea what's going on here. I
guess it's that the wait_for_stats() guarantees that the stats message
from the index insertion has been received but the status messages
from the "trunc" tables might not have gotten there yet.

That's what it looks like to me. I now think that the apparent
connection to parallel query is a mirage. The reason we've only
seen a few cases so far is that the flapping test is new: it
was added in commit d42358efb16cc811, on 20 Feb. If we left it
as-is, I think we'd eventually see the same failure without forcing
parallel mode. In fact, that's pretty much what you describe below,
isn't it? The pg_sleep is sort of half-bakedly substituting for
a proper wait.

It was added on Feb 20 all right, but of *last year*. It's been there
working happily for a year now.

The reason I added the trunc test in the middle of the index update
tests is that I dislike tests that sleep for long without real purpose;
it seems pretty reasonable to me to have both sleeps actually be the
same wait.

Instead of adding another sleep function, another possibility is to add
two booleans, one for the index counter and another for the truncate
counters, and only terminate the sleep if both are true. I don't see
any reason to make this test any slower than it already is.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#73Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#71)
Re: postgres_fdw vs. force_parallel_mode on ppc

On Fri, Mar 4, 2016 at 10:33 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Robert Haas <robertmhaas@gmail.com> writes:

Sure. If you have an idea what the right thing to do is, please go
ahead.

Yeah, I'll modify the patch and commit sometime later today.

OK, if you're basing that on the patch I sent upthread, please credit
Rahila Syed as the original author of that code. (I modified it
before posting, but only trivially.) Of course if you do something
else, then never mind.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#74Tom Lane
tgl@sss.pgh.pa.us
In reply to: Alvaro Herrera (#72)
Re: postgres_fdw vs. force_parallel_mode on ppc

Alvaro Herrera <alvherre@2ndquadrant.com> writes:

Tom Lane wrote:

That's what it looks like to me. I now think that the apparent
connection to parallel query is a mirage. The reason we've only
seen a few cases so far is that the flapping test is new: it
was added in commit d42358efb16cc811, on 20 Feb.

It was added on Feb 20 all right, but of *last year*. It's been there
working happily for a year now.

Wup, you're right, failed to look closely enough at the commit log
entry. So that puts us back to wondering why exactly parallel query
is triggering this. Still, Robert's experiment with removing the
pg_sleep seems fairly conclusive: it is possible to get the failure
without parallel query.

Instead of adding another sleep function, another possibility is to add
two booleans, one for the index counter and another for the truncate
counters, and only terminate the sleep if both are true. I don't see
any reason to make this test any slower than it already is.

Well, that would make the function more complicated, but maybe it's a
better answer. On the other hand, we know that the stats updates are
delivered in a deterministic order, so why not simply replace the
existing test in the wait function with one that looks for the truncation
updates? If we've gotten those, we must have gotten the earlier ones.

In any case, the real answer to making the test less slow is to get rid of
that vestigial pg_sleep. I'm wondering why we failed to remove that when
we put in the wait_for_stats function...

regards, tom lane


#75Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#74)
Re: postgres_fdw vs. force_parallel_mode on ppc

On Fri, Mar 4, 2016 at 11:03 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Alvaro Herrera <alvherre@2ndquadrant.com> writes:

Tom Lane wrote:

That's what it looks like to me. I now think that the apparent
connection to parallel query is a mirage. The reason we've only
seen a few cases so far is that the flapping test is new: it
was added in commit d42358efb16cc811, on 20 Feb.

It was added on Feb 20 all right, but of *last year*. It's been there
working happily for a year now.

Wup, you're right, failed to look closely enough at the commit log
entry. So that puts us back to wondering why exactly parallel query
is triggering this. Still, Robert's experiment with removing the
pg_sleep seems fairly conclusive: it is possible to get the failure
without parallel query.

Instead of adding another sleep function, another possibility is to add
two booleans, one for the index counter and another for the truncate
counters, and only terminate the sleep if both are true. I don't see
any reason to make this test any slower than it already is.

Well, that would make the function more complicated, but maybe it's a
better answer. On the other hand, we know that the stats updates are
delivered in a deterministic order, so why not simply replace the
existing test in the wait function with one that looks for the truncation
updates? If we've gotten those, we must have gotten the earlier ones.

I'm not sure if that's actually true with parallel mode. I'm pretty
sure the earlier workers will have terminated before the later ones
start, but is that enough to guarantee that the stats collector sees
the messages in that order?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#76Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Robert Haas (#75)
Re: postgres_fdw vs. force_parallel_mode on ppc

Robert Haas wrote:

I'm not sure if that's actually true with parallel mode. I'm pretty
sure the earlier workers will have terminated before the later ones
start, but is that enough to guarantee that the stats collector sees
the messages in that order?

Um. So if you have two queries that run in sequence, it's possible
for workers of the first query to be still running when workers for the
second query finish? That would be very strange.

If that's not what you're saying, I don't understand what guarantees you
say we don't have.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#77Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#75)
Re: postgres_fdw vs. force_parallel_mode on ppc

Robert Haas <robertmhaas@gmail.com> writes:

On Fri, Mar 4, 2016 at 11:03 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Well, that would make the function more complicated, but maybe it's a
better answer. On the other hand, we know that the stats updates are
delivered in a deterministic order, so why not simply replace the
existing test in the wait function with one that looks for the truncation
updates? If we've gotten those, we must have gotten the earlier ones.

I'm not sure if that's actually true with parallel mode. I'm pretty
sure the earlier workers will have terminated before the later ones
start, but is that enough to guarantee that the stats collector sees
the messages in that order?

Huh? Parallel workers are read-only; what would they be doing sending
any of these messages?

regards, tom lane


#78Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#77)
Re: postgres_fdw vs. force_parallel_mode on ppc

On Fri, Mar 4, 2016 at 11:17 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Robert Haas <robertmhaas@gmail.com> writes:

On Fri, Mar 4, 2016 at 11:03 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Well, that would make the function more complicated, but maybe it's a
better answer. On the other hand, we know that the stats updates are
delivered in a deterministic order, so why not simply replace the
existing test in the wait function with one that looks for the truncation
updates? If we've gotten those, we must have gotten the earlier ones.

I'm not sure if that's actually true with parallel mode. I'm pretty
sure the earlier workers will have terminated before the later ones
start, but is that enough to guarantee that the stats collector sees
the messages in that order?

Huh? Parallel workers are read-only; what would they be doing sending
any of these messages?

Mumble. I have no idea what's happening here.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#79Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#78)
Re: postgres_fdw vs. force_parallel_mode on ppc

Robert Haas <robertmhaas@gmail.com> writes:

On Fri, Mar 4, 2016 at 11:17 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Huh? Parallel workers are read-only; what would they be doing sending
any of these messages?

Mumble. I have no idea what's happening here.

OK, after inserting a bunch of debug logging I have figured out what is
happening. The updates on trunc_stats_test et al, being updates, are
done in the session's main backend. But we also have these queries:

-- do a seqscan
SELECT count(*) FROM tenk2;
-- do an indexscan
SELECT count(*) FROM tenk2 WHERE unique1 = 1;

These can be, and are, done in parallel worker processes (and not
necessarily the same one, either). AFAICT, the parallel worker
processes send their stats messages to the stats collector more or
less immediately after processing their queries. However, because
of the rate-limiting logic in pgstat_report_stat, the main backend
doesn't. The point of that "pg_sleep(1.0)" (which was actually added
*after* wait_for_stats) is to ensure that the half-second delay in
the rate limiter has been soaked up, and the stats messages sent,
before we start waiting for the results to become visible in the
stats collector's output.

So the sequence of events when we get a failure looks like

1. parallel workers send stats updates for seqscan and indexscan
on tenk2.

2. stats collector emits output files, probably as a result of
an autovacuum request.

3. session's main backend finishes "pg_sleep(1.0)" and sends
stats updates for what it's done lately, including the
updates on trunc_stats_test et al.

4. wait_for_stats() observes that the tenk2 idx_scan count has
already advanced and figures it need not wait at all.

5. We print stale stats for trunc_stats_test et al.

So it appears to me that to make this robust, we need to adjust
wait_for_stats to verify advances on *all three of* the tenk2
seq_scan count, the tenk2 idx_scan count, and at least one of
the trunc_stats_test tables' counters, because those could be
coming from three different backend processes.
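
Something along these lines, perhaps (a sketch only: it reuses the loop
shape of wait_for_trunc_stats from upthread, and the particular
predicates and the reliance on the test's prevstats snapshot table are
illustrative rather than a concrete proposal):

create or replace function wait_for_stats() returns void as $$
declare
  start_time timestamptz := clock_timestamp();
  seqscan_updated bool;
  idxscan_updated bool;
  trunc_updated bool;
begin
  -- we don't want to wait forever; loop will exit after 30 seconds
  for i in 1 .. 300 loop

    -- check whether the seqscan and indexscan on tenk2 (possibly run
    -- in parallel workers) have been counted yet
    SELECT (st.seq_scan >= pr.seq_scan + 1) INTO seqscan_updated
      FROM pg_stat_user_tables AS st, prevstats AS pr
     WHERE st.relname = 'tenk2';

    SELECT (st.idx_scan >= pr.idx_scan + 1) INTO idxscan_updated
      FROM pg_stat_user_tables AS st, prevstats AS pr
     WHERE st.relname = 'tenk2';

    -- and whether the main backend's trunc_stats_test updates arrived
    SELECT (n_tup_ins > 0) INTO trunc_updated
      FROM pg_stat_user_tables
     WHERE relname = 'trunc_stats_test';

    exit when seqscan_updated and idxscan_updated and trunc_updated;

    -- wait a little, then throw away the stats snapshot and try again
    perform pg_sleep(0.1);
    perform pg_stat_clear_snapshot();

  end loop;

  -- report time waited in postmaster log (where it won't change test output)
  raise log 'wait_for_stats delayed % seconds',
    extract(epoch from clock_timestamp() - start_time);
end
$$ language plpgsql;

With all three checks folded into the one wait function, the vestigial
pg_sleep(1.0) would have no remaining reason to stay, either.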

If we ever allow parallel workers to do writes, this will really
become a mess.

regards, tom lane
