Built-in connection pooling

Started by Konstantin Knizhnik almost 8 years ago. 131 messages.
#1 Konstantin Knizhnik
k.knizhnik@postgrespro.ru
1 attachment(s)

Hi hackers,

My recent experiments with a pthread version of Postgres show that
although pthreads offer some performance advantages over processes for
a large number of connections, they still cannot eliminate the need for
connection pooling. Even a large number of inactive connections causes
significant degradation of Postgres performance.

So we need connection pooling. Most enterprise systems working with
Postgres use pgbouncer or similar tools.
But pgbouncer has the following drawbacks:
1. It is an extra entity which complicates system installation and
administration.
2. Pgbouncer itself can be a bottleneck and a point of failure. For
example, with SSL enabled, the single-threaded model of pgbouncer
becomes a limiting factor when a lot of clients try to reestablish
connections simultaneously. This is why some companies build hierarchies
of pgbouncers.
3. Using a pool_mode other than "session" makes it impossible to use
prepared statements and session variables.
The lack of prepared statements can by itself slow down simple queries
up to two times.
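
For reference, the cost of not using prepared statements can be observed
directly with pgbench's query protocol modes (an illustration, not a
benchmark recipe; the actual ratio depends on hardware and workload):

    pgbench -S -M simple   -c 10 -T 60   # each query parsed and planned anew
    pgbench -S -M prepared -c 10 -T 60   # parse/plan once, then bind/execute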

So I thought about built-in connection pooling for Postgres. Ideally it
should be integrated with pthreads, because in that case scheduling of
sessions can be done more flexibly and easily.
But I decided to start with a patch to vanilla Postgres.

The idea is the following:
1. We start some number of normal backends (which form a backend pool
for serving client sessions).
2. When the number of connections exceeds the number of backends, then
instead of spawning a new backend we choose one of the existing backends
and redirect the connection to it.
There is a more or less portable way in Unix to pass socket descriptors
between processes using Unix sockets:
for example
https://stackoverflow.com/questions/28003921/sending-file-descriptor-by-linux-socket/
(this is one of the places where a pthreads Postgres would win). So a
session is bound to a backend. Backends are chosen using a round-robin
policy, which should guarantee a more or less uniform distribution of
sessions between backends if the number of sessions is much larger than
the number of backends. But certainly skews in client application access
patterns can violate this assumption.
3. Rescheduling is done at the transaction level. So it is enough to
have one procarray entry per backend to correctly handle locks.
Transaction-level pooling also eliminates the problem of false deadlocks
(caused by lack of free executors in the pool), and it minimizes the
changes in the Postgres core needed to maintain a correct session context:
no need to suspend/resume transaction state, static variables, ....
4. In the main Postgres query loop in PostgresMain we determine the
moment when the backend is not in a transaction state, perform a select
over the sockets of all active sessions, and choose one of them.
5. When a client disconnects, we close the session but do not terminate
the backend.
6. To support prepared statements, we append the session identifier to
the name of the statement, so prepared statements of different sessions
will not interleave (a small example follows this list). Since a session
is bound to the backend, it is possible to use prepared statements.
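
For illustration, the renaming from point 6 looks roughly like this
(a sketch; the session id value "42" is made up, but psprintf("%s.%s", ...)
is what the attached patch actually does in exec_parse_message, and
DropSessionPreparedStatements() is the cleanup it adds):

    /* Client asked to prepare a statement named "sel" in session "42". */
    const char *stmt_name = "sel";
    char *mangled = psprintf("%s.%s", CurrentSessionId, stmt_name); /* "42.sel" */
    /* At session close, DropSessionPreparedStatements("42") drops all "42.*". */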

This is the minimal plan for embedded session pooling that I decided to
implement as a prototype.

Several things are not addressed now:

1. Temporary tables. In principle they can be handled in the same way as
prepared statements: by concatenating the session identifier to the name
of the table.
But that requires adjusting references to the table in all queries,
which is much more complicated than in the case of prepared statements.
2. Session-level GUCs. In principle it is not difficult to remember the
GUCs modified by a session and save/restore them on session switch (see
the sketch after this list).
But it is just not implemented now.
3. Support of multiple users/databases/... This is the most critical
drawback. Right now my prototype implementation assumes that all clients
are connected to the same database
under the same user with the same connection options. And it is a
challenge about which I want to know the opinion of the community. The
name of the database and user are retrieved from the client connection
by the ProcessStartupPacket function. In vanilla Postgres this function
is executed by the spawned backend, so I do not know which database a
client is going to access before calling this function and reading data
from the client's socket.
Now I just choose a random backend and assign the connection to it.
But it can happen that this backend is working with a different
database/user; in that case I just return an error. Certainly it is
possible to call ProcessStartupPacket in the postmaster and then select
a proper backend working with the specified database/user.
But I am afraid that the postmaster could become a bottleneck in this
case, especially when SSL is used. Also, a larger number of
databases/users can significantly reduce the efficiency of pooling if
each backend is responsible for only one database/user combination.
Maybe a backend should be bound only to a database, with the concrete
role set on session switch. But that could require flushing backend
caches, which devalues the idea of embedded session pooling. This
problem can be easily solved with multithreaded Postgres, where it is
possible to easily reassign a session to another thread.
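
Returning to item 2 above, here is a minimal sketch of what
saving/restoring session GUCs might look like (not part of the attached
patch; SessionGucEntry and restore_session_gucs are hypothetical names,
while set_config_option() is the existing backend API):

    /* Hypothetical per-session list of GUCs changed via SET. */
    typedef struct SessionGucEntry
    {
        char       *name;     /* GUC name, e.g. "work_mem" */
        char       *value;    /* textual value to restore */
        struct SessionGucEntry *next;
    } SessionGucEntry;

    /* Replay the remembered settings when switching to a session. */
    static void
    restore_session_gucs(SessionGucEntry *list)
    {
        SessionGucEntry *e;

        for (e = list; e != NULL; e = e->next)
            (void) set_config_option(e->name, e->value,
                                     PGC_USERSET, PGC_S_SESSION,
                                     GUC_ACTION_SET, true, ERROR, false);
    }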

Now the results shown by my prototype. I used pgbench with scale factor
100 in read-only mode (-S option).
The precise pgbench command is "pgbench -S -c N -M prepared -T 100 -P 1 -n".
Results in the table below are in kTPS:

Connections    Vanilla Postgres    Postgres with session pool size=10
10             186                 181
100            118                 224
1000           59                  191

As you can see, instead of performance degrading as the number of
connections grows, Postgres with a session pool shows stable performance.
Moreover, for vanilla Postgres the best results on my system are obtained
with 10 connections, but Postgres with a session pool shows better
performance for 100 connections with the same number of spawned backends.

My patch to Postgres is attached to this mail.
To switch on session pooling, set session_pool_size to some non-zero
value. Another GUC variable which I have added is "max_sessions", which
specifies the maximum number of sessions handled by one backend. So the
total number of handled client connections is
session_pool_size*max_sessions.
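
For example, with the following (purely illustrative) settings:

    session_pool_size = 10    # 10 backends serve all client sessions
    max_sessions = 1000       # up to 1000 sessions per backend

up to 10 * 1000 = 10000 client connections can be handled.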

Certainly it is just a prototype, far from practical use.
In addition to the challenges mentioned above, there are also some other
issues which should be considered:

1. A long-living transaction in a client application blocks all other
sessions in its backend and so can suspend the work of Postgres.
So Uber-style programming, where a database transaction is started when
the car door opens and finished at the end of the trip, is completely
incompatible with this approach.
2. Fatal errors cause disconnect not only of the one client that caused
the problem but of the whole bunch of client sessions scheduled to this
backend.
3. It is possible to use PL APIs, such as plpython, but session-level
variables may not be used.
4. There may be some memory leaks caused by allocating memory with
malloc or in the top memory context, which is expected to be freed on
backend exit.
Since it is not deallocated at session close, a large number of handled
sessions can cause memory overflow.
5. Some applications, handling multiple connections inside a single
thread and multiplexing them at the statement level (rather than at the
transaction level), may not work correctly.
It seems to be quite an exotic use case, but pgbench actually behaves in
this way! This is why an attempt to start pgbench with multistatement
transactions (-N) will fail if the number of threads (-j) is smaller
than the number of connections (-c). (An illustration follows this list.)
6. The approach of passing socket descriptors between processes is
implemented only for Unix and tested only on Linux, although it is
expected to also work on macOS and other Unix dialects. Windows is not
supported now.
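
To illustrate item 5 (the commands are illustrative; -N is the
multistatement built-in transaction mentioned above): with more
connections than threads, pgbench multiplexes several connections per
thread at statement level and fails against this patch, while matching
-j to -c avoids the problem.

    pgbench -N -c 100 -j 10  -T 60   # 10 threads multiplex 100 connections: fails
    pgbench -N -c 100 -j 100 -T 60   # one connection per thread: works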

I will be glad to receive any feedback and suggestions concerning the
prospects of embedded connection pooling.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

session_pool.patch (text/x-patch)
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index b945b15..a73d584 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -813,3 +813,30 @@ build_regtype_array(Oid *param_types, int num_params)
 	result = construct_array(tmp_ary, num_params, REGTYPEOID, 4, true, 'i');
 	return PointerGetDatum(result);
 }
+
+
+void
+DropSessionPreparedStatements(char const* sessionId)
+{
+	HASH_SEQ_STATUS seq;
+	PreparedStatement *entry;
+	size_t idLen = strlen(sessionId);
+
+	/* nothing cached */
+	if (!prepared_queries)
+		return;
+
+	/* walk over cache */
+	hash_seq_init(&seq, prepared_queries);
+	while ((entry = hash_seq_search(&seq)) != NULL)
+	{
+		if (strncmp(entry->stmt_name, sessionId, idLen) == 0 && entry->stmt_name[idLen] == '.')
+		{
+			/* Release the plancache entry */
+			DropCachedPlan(entry->plansource);
+
+			/* Now we can remove the hash table entry */
+			hash_search(prepared_queries, entry->stmt_name, HASH_REMOVE, NULL);
+		}
+	}
+}
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index a4f6d4d..5b07a88 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -1029,6 +1029,18 @@ pq_peekbyte(void)
 }
 
 /* --------------------------------
+ *		pq_available_bytes	- report number of buffered input bytes
+ *
+ *	 Returns how many received bytes are buffered but not yet consumed.
+ * --------------------------------
+ */
+int
+pq_available_bytes(void)
+{
+	return PqRecvLength - PqRecvPointer;
+}
+
+/* --------------------------------
  *		pq_getbyte_if_available - get a single byte from connection,
  *			if available
  *
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index aba1e92..56ec998 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o dynloader.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o dynloader.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..7b36923
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,89 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int pg_send_sock(pgsocket chan, pgsocket sock)
+{
+    struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+    char buf[CMSG_SPACE(sizeof(sock))];
+    memset(buf, '\0', sizeof(buf));
+
+    /* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+    io.iov_base = "";
+	io.iov_len = 1;
+
+    msg.msg_iov = &io;
+    msg.msg_iovlen = 1;
+    msg.msg_control = buf;
+    msg.msg_controllen = sizeof(buf);
+
+    cmsg = CMSG_FIRSTHDR(&msg);
+    cmsg->cmsg_level = SOL_SOCKET;
+    cmsg->cmsg_type = SCM_RIGHTS;
+    cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+    memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+    msg.msg_controllen = cmsg->cmsg_len;
+
+    if (sendmsg(chan, &msg, 0) < 0)
+	{
+		return -1;
+	}
+	return 0;
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket pg_recv_sock(pgsocket chan)
+{
+    struct msghdr msg = {0};
+    char c_buffer[256];
+    char m_buffer[256];
+    struct iovec io;
+	struct cmsghdr * cmsg;
+	pgsocket sock;
+
+    io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+    msg.msg_iov = &io;
+    msg.msg_iovlen = 1;
+
+    msg.msg_control = c_buffer;
+    msg.msg_controllen = sizeof(c_buffer);
+
+    if (recvmsg(chan, &msg, 0) < 0)
+	{
+		return -1;
+	}
+
+    cmsg = CMSG_FIRSTHDR(&msg);
+    memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+
+    return sock;
+}
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index f3ddf82..996a41c 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -169,6 +169,7 @@ typedef struct bkend
 	pid_t		pid;			/* process id of backend */
 	int32		cancel_key;		/* cancel key for cancels for this backend */
 	int			child_slot;		/* PMChildSlot for this backend, if any */
+	pgsocket    session_send_sock;  /* Write end of socket pipe to this backend used to send session socket descriptor to the backend process */
 
 	/*
 	 * Flavor of backend or auxiliary process.  Note that BACKEND_TYPE_WALSND
@@ -182,6 +183,15 @@ typedef struct bkend
 } Backend;
 
 static dlist_head BackendList = DLIST_STATIC_INIT(BackendList);
+/*
+ * Pointer into the backend list used to implement round-robin distribution of sessions across backends.
+ * This variable is either NULL or points to a normal backend.
+ */
+static Backend*   BackendListClockPtr;
+/*
+ * Number of active normal backends
+ */
+static int        nNormalBackends;
 
 #ifdef EXEC_BACKEND
 static Backend *ShmemBackendArray;
@@ -412,7 +422,6 @@ static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
 static int	BackendStartup(Port *port);
-static int	ProcessStartupPacket(Port *port, bool SSLdone);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
 static int	initMasks(fd_set *rmask);
@@ -568,6 +577,22 @@ HANDLE		PostmasterHandle;
 #endif
 
 /*
+ * Move current backend pointer to the next normal backend.
+ * This function is called either when a new session is started (to implement the round-robin policy) or when the backend pointed to by BackendListClockPtr is terminated
+ */
+static void AdvanceBackendListClockPtr(void)
+{
+	Backend* b = BackendListClockPtr;
+	do {
+		dlist_node* node = &b->elem;
+		node = node->next ? node->next : BackendList.head.next;
+		b = dlist_container(Backend, elem, node);
+	} while (b->bkend_type != BACKEND_TYPE_NORMAL && b != BackendListClockPtr);
+
+	BackendListClockPtr = (b != BackendListClockPtr) ? b : NULL;
+}
+
+/*
  * Postmaster main entry point
  */
 void
@@ -1944,8 +1969,8 @@ initMasks(fd_set *rmask)
  * send anything to the client, which would typically be appropriate
  * if we detect a communications failure.)
  */
-static int
-ProcessStartupPacket(Port *port, bool SSLdone)
+int
+ProcessStartupPacket(Port *port, bool SSLdone, MemoryContext memctx)
 {
 	int32		len;
 	void	   *buf;
@@ -2043,7 +2068,7 @@ retry1:
 #endif
 		/* regular startup packet, cancel, etc packet should follow... */
 		/* but not another SSL negotiation request */
-		return ProcessStartupPacket(port, true);
+		return ProcessStartupPacket(port, true, memctx);
 	}
 
 	/* Could add additional special packet types here */
@@ -2073,7 +2098,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2449,7 +2474,7 @@ ConnCreate(int serverFd)
 		ConnFree(port);
 		return NULL;
 	}
-
+	port->session_recv_sock = PGINVALID_SOCKET;
 	/*
 	 * Allocate GSSAPI specific state struct
 	 */
@@ -3236,6 +3261,24 @@ CleanupBackgroundWorker(int pid,
 }
 
 /*
+ * Unlink backend from backend's list and free memory
+ */
+static void UnlinkBackend(Backend* bp)
+{
+	if (bp->bkend_type == BACKEND_TYPE_NORMAL)
+	{
+		if (bp == BackendListClockPtr)
+			AdvanceBackendListClockPtr();
+		if (bp->session_send_sock != PGINVALID_SOCKET)
+			close(bp->session_send_sock);
+		elog(DEBUG2, "Cleanup backend %d", bp->pid);
+		nNormalBackends -= 1;
+	}
+	dlist_delete(&bp->elem);
+	free(bp);
+}
+
+/*
  * CleanupBackend -- cleanup after terminated backend.
  *
  * Remove all local state associated with backend.
@@ -3312,8 +3355,7 @@ CleanupBackend(int pid,
 				 */
 				BackgroundWorkerStopNotifications(bp->pid);
 			}
-			dlist_delete(iter.cur);
-			free(bp);
+			UnlinkBackend(bp);
 			break;
 		}
 	}
@@ -3415,8 +3457,7 @@ HandleChildCrash(int pid, int exitstatus, const char *procname)
 				ShmemBackendArrayRemove(bp);
 #endif
 			}
-			dlist_delete(iter.cur);
-			free(bp);
+			UnlinkBackend(bp);
 			/* Keep looping so we can signal remaining backends */
 		}
 		else
@@ -4017,6 +4058,19 @@ BackendStartup(Port *port)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
+	int         session_pipe[2];
+
+	if (SessionPoolSize != 0 && nNormalBackends >= SessionPoolSize)
+	{
+		/* Instead of spawning a new backend, open a new session in one of the existing backends. */
+		Assert(BackendListClockPtr && BackendListClockPtr->session_send_sock != PGINVALID_SOCKET);
+		elog(DEBUG2, "Start new session for socket %d at backend %d total %d", port->sock, BackendListClockPtr->pid, nNormalBackends);
+		if (pg_send_sock(BackendListClockPtr->session_send_sock, port->sock) < 0)
+			elog(FATAL, "Failed to send session socket: %m");
+		AdvanceBackendListClockPtr(); /* round-robin */
+		return STATUS_OK;
+	}
+
 
 	/*
 	 * Create backend data structure.  Better before the fork() so we can
@@ -4030,7 +4084,6 @@ BackendStartup(Port *port)
 				 errmsg("out of memory")));
 		return STATUS_ERROR;
 	}
-
 	/*
 	 * Compute the cancel key that will be assigned to this backend. The
 	 * backend will have its own copy in the forked-off process' value of
@@ -4063,12 +4116,23 @@ BackendStartup(Port *port)
 	/* Hasn't asked to be notified about any bgworkers yet */
 	bn->bgworker_notify = false;
 
+	if (SessionPoolSize != 0)
+		if (socketpair(AF_UNIX, SOCK_DGRAM, 0, session_pipe) < 0)
+			ereport(FATAL,
+					(errcode_for_file_access(),
+					 errmsg_internal("could not create socket pair for launching sessions: %m")));
+
 #ifdef EXEC_BACKEND
 	pid = backend_forkexec(port);
 #else							/* !EXEC_BACKEND */
 	pid = fork_process();
 	if (pid == 0)				/* child */
 	{
+		if (SessionPoolSize != 0)
+		{
+			port->session_recv_sock = session_pipe[0];
+			close(session_pipe[1]);
+		}
 		free(bn);
 
 		/* Detangle from postmaster */
@@ -4110,9 +4174,19 @@ BackendStartup(Port *port)
 	 * of backends.
 	 */
 	bn->pid = pid;
+	if (SessionPoolSize != 0)
+	{
+		bn->session_send_sock = session_pipe[1];
+		close(session_pipe[0]);
+	}
+	else
+		bn->session_send_sock = PGINVALID_SOCKET;
 	bn->bkend_type = BACKEND_TYPE_NORMAL;	/* Can change later to WALSND */
 	dlist_push_head(&BackendList, &bn->elem);
-
+	if (BackendListClockPtr == NULL)
+		BackendListClockPtr = bn;
+	nNormalBackends += 1;
+	elog(DEBUG2, "Start backend %d total %d", pid, nNormalBackends);
 #ifdef EXEC_BACKEND
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
@@ -4299,7 +4373,7 @@ BackendInitialize(Port *port)
 	 * Receive the startup packet (which might turn out to be a cancel request
 	 * packet).
 	 */
-	status = ProcessStartupPacket(port, false);
+	status = ProcessStartupPacket(port, false, TopMemoryContext);
 
 	/*
 	 * Stop here if it was bad or a cancel packet.  ProcessStartupPacket
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index e6706f7..9c42fab 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -76,6 +76,7 @@ struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* head of the free list of events, linked by "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -129,7 +130,7 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
 static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
 #endif
@@ -562,6 +563,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -667,6 +669,7 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
 	Assert(set->nevents < set->nevents_space);
@@ -690,8 +693,19 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->nevents += 1;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -718,7 +732,7 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
 	WaitEventAdjustWin32(set, event);
 #endif
@@ -727,6 +741,27 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 }
 
 /*
+ * Remove event with specified socket descriptor
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, pgsocket fd)
+{
+	int i, n = set->nevents;
+	for (i = 0; i < n; i++)
+	{
+		WaitEvent  *event = &set->events[i];
+		if (event->fd == fd)
+		{
+#if defined(WAIT_USE_EPOLL)
+			WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+			WaitEventAdjustPoll(set, event, true);
+#endif
+			break;
+		}
+	}
+}
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -774,7 +809,7 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
 	WaitEventAdjustWin32(set, event);
 #endif
@@ -827,14 +862,33 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
 				 errmsg("epoll_ctl() failed: %m")));
+
+	if (action == EPOLL_CTL_DEL)
+	{
+		int pos = event->pos;
+		event->fd = PGINVALID_SOCKET;
+		set->nevents -= 1;
+		event->pos = set->free_events;
+		set->free_events = pos;
+	}
 }
 #endif
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	int pos = event->pos;
+	struct pollfd *pollfd = &set->pollfds[pos];
+
+	if (remove)
+	{
+		set->nevents -= 1;
+		*pollfd = set->pollfds[set->nevents];
+		set->events[pos] = set->events[set->nevents];
+		event->pos = pos;
+		return;
+	}
 
 	pollfd->revents = 0;
 	pollfd->fd = event->fd;
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index ddc3ec8..779ebc0 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -75,6 +75,7 @@
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
 #include "utils/timestamp.h"
+#include "utils/builtins.h"
 #include "mb/pg_wchar.h"
 
 
@@ -169,6 +170,10 @@ static ProcSignalReason RecoveryConflictReason;
 static MemoryContext row_description_context = NULL;
 static StringInfoData row_description_buf;
 
+static WaitEventSet* SessionPool;
+static int64         SessionCount;
+static char*         CurrentSessionId;
+
 /* ----------------------------------------------------------------
  *		decls for routines only used in this file
  * ----------------------------------------------------------------
@@ -194,6 +199,15 @@ static void log_disconnections(int code, Datum arg);
 static void enable_statement_timeout(void);
 static void disable_statement_timeout(void);
 
+/*
+ * Generate session ID unique within this backend
+ */
+static char* CreateSessionId(void)
+{
+	char buf[64];
+	pg_lltoa(++SessionCount, buf);
+	return strdup(buf);
+}
 
 /* ----------------------------------------------------------------
  *		routines to obtain user input
@@ -473,7 +487,7 @@ SocketBackend(StringInfo inBuf)
 			 * fatal because we have probably lost message boundary sync, and
 			 * there's no good way to recover.
 			 */
-			ereport(FATAL,
+		    ereport(FATAL,
 					(errcode(ERRCODE_PROTOCOL_VIOLATION),
 					 errmsg("invalid frontend message type %d", qtype)));
 			break;
@@ -1232,6 +1246,12 @@ exec_parse_message(const char *query_string,	/* string to execute */
 	bool		save_log_statement_stats = log_statement_stats;
 	char		msec_str[32];
 
+	if (CurrentSessionId && stmt_name[0] != '\0')
+	{
+		/* Make prepared statement names unique per session when the built-in session pool is used */
+		stmt_name = psprintf("%s.%s", CurrentSessionId, stmt_name);
+	}
+
 	/*
 	 * Report query to various monitoring facilities.
 	 */
@@ -1503,6 +1523,12 @@ exec_bind_message(StringInfo input_message)
 	portal_name = pq_getmsgstring(input_message);
 	stmt_name = pq_getmsgstring(input_message);
 
+	if (CurrentSessionId && stmt_name[0] != '\0')
+	{
+		/* Make prepared statement names unique per session when the built-in session pool is used */
+		stmt_name = psprintf("%s.%s", CurrentSessionId, stmt_name);
+	}
+
 	ereport(DEBUG2,
 			(errmsg("bind %s to %s",
 					*portal_name ? portal_name : "<unnamed>",
@@ -2325,6 +2351,12 @@ exec_describe_statement_message(const char *stmt_name)
 	CachedPlanSource *psrc;
 	int			i;
 
+	if (CurrentSessionId && stmt_name[0] != '\0')
+	{
+		/* Make prepared statement names unique per session when the built-in session pool is used */
+		stmt_name = psprintf("%s.%s", CurrentSessionId, stmt_name);
+	}
+
 	/*
 	 * Start up a transaction command. (Note that this will normally change
 	 * current memory context.) Nothing happens if we are already in one.
@@ -3603,7 +3635,6 @@ process_postgres_switches(int argc, char *argv[], GucContext ctx,
 #endif
 }
 
-
 /* ----------------------------------------------------------------
  * PostgresMain
  *	   postgres main loop -- all backends, interactive or otherwise start here
@@ -3654,6 +3685,10 @@ PostgresMain(int argc, char *argv[],
 							progname)));
 	}
 
+	/* Assign session ID if session pooling is used */
+	if (SessionPoolSize != 0)
+		CurrentSessionId = CreateSessionId();
+
 	/* Acquire configuration parameters, unless inherited from postmaster */
 	if (!IsUnderPostmaster)
 	{
@@ -3783,7 +3818,7 @@ PostgresMain(int argc, char *argv[],
 	 * ... else we'd need to copy the Port data first.  Also, subsidiary data
 	 * such as the username isn't lost either; see ProcessStartupPacket().
 	 */
-	if (PostmasterContext)
+	if (PostmasterContext && SessionPoolSize == 0)
 	{
 		MemoryContextDelete(PostmasterContext);
 		PostmasterContext = NULL;
@@ -4069,6 +4104,102 @@ PostgresMain(int argc, char *argv[],
 
 			ReadyForQuery(whereToSendOutput);
 			send_ready_for_query = false;
+
+			if (MyProcPort && MyProcPort->session_recv_sock != PGINVALID_SOCKET && !IsTransactionState() && pq_available_bytes() == 0)
+			{
+				WaitEvent ready_client;
+				whereToSendOutput = DestRemote;
+				if (SessionPool == NULL)
+				{
+					SessionPool = CreateWaitEventSet(TopMemoryContext, MaxSessions);
+					AddWaitEventToSet(SessionPool, WL_POSTMASTER_DEATH, PGINVALID_SOCKET, NULL, NULL);
+					AddWaitEventToSet(SessionPool, WL_LATCH_SET, PGINVALID_SOCKET, MyLatch, NULL);
+					AddWaitEventToSet(SessionPool, WL_SOCKET_READABLE, MyProcPort->session_recv_sock, NULL, NULL);
+					AddWaitEventToSet(SessionPool, WL_SOCKET_READABLE, MyProcPort->sock, NULL, NULL);
+				}
+			  Retry:
+				DoingCommandRead = true;
+				if (WaitEventSetWait(SessionPool, -1, &ready_client, 1, PG_WAIT_CLIENT) != 1)
+				{
+					/* TODO: do some error recovery here */
+					elog(FATAL, "Failed to poll client sessions");
+				}
+				CHECK_FOR_INTERRUPTS();
+				DoingCommandRead = false;
+
+				if (ready_client.events & WL_POSTMASTER_DEATH)
+					ereport(FATAL,
+							(errcode(ERRCODE_ADMIN_SHUTDOWN),
+							 errmsg("terminating connection due to unexpected postmaster exit")));
+
+				if (ready_client.events & WL_LATCH_SET)
+				{
+					ResetLatch(MyLatch);
+					ProcessClientReadInterrupt(true);
+					goto Retry;
+				}
+
+				if (ready_client.fd == MyProcPort->session_recv_sock)
+				{
+					int		 status;
+					Port     port;
+					Port*    myPort;
+					StringInfoData buf;
+					pgsocket sock = pg_recv_sock(MyProcPort->session_recv_sock);
+					if (sock < 0)
+						elog(FATAL, "Failed to receive session socket: %m");
+
+					/*
+					 * Receive the startup packet (which might turn out to be a cancel request
+					 * packet).
+					 */
+					port.sock = sock;
+					myPort = MyProcPort;
+					MyProcPort = &port;
+					status = ProcessStartupPacket(&port, false, MessageContext);
+					MyProcPort = myPort;
+					if (strcmp(port.database_name, MyProcPort->database_name) ||
+						strcmp(port.user_name, MyProcPort->user_name))
+					{
+						elog(FATAL, "Failed to open session (dbname=%s user=%s) in backend %d (dbname=%s user=%s)",
+							 port.database_name, port.user_name,
+							 MyProcPid, MyProcPort->database_name, MyProcPort->user_name);
+					}
+					else if (status == STATUS_OK)
+					{
+						elog(DEBUG2, "Start new session %d in backend %d for database %s user %s",
+							 sock, MyProcPid, port.database_name, port.user_name);
+						CurrentSessionId = CreateSessionId();
+						AddWaitEventToSet(SessionPool, WL_SOCKET_READABLE, sock, NULL, CurrentSessionId);
+						MyProcPort->sock = sock;
+						send_ready_for_query = true;
+
+						SetCurrentStatementStartTimestamp();
+						StartTransactionCommand();
+						PerformAuthentication(MyProcPort);
+						CommitTransactionCommand();
+
+						/*
+						 * Send this backend's cancellation info to the frontend.
+						 */
+						pq_beginmessage(&buf, 'K');
+						pq_sendint32(&buf, (int32) MyProcPid);
+						pq_sendint32(&buf, (int32) MyCancelKey);
+						pq_endmessage(&buf);
+						/* Need not flush since ReadyForQuery will do it. */
+						continue;
+					}
+					elog(LOG, "Session startup failed");
+					close(sock);
+					goto Retry;
+				}
+				else
+				{
+					elog(DEBUG2, "Switch to session %d in backend %d", ready_client.fd, MyProcPid);
+					MyProcPort->sock = ready_client.fd;
+					CurrentSessionId = (char*)ready_client.user_data;
+				}
+			}
 		}
 
 		/*
@@ -4350,13 +4481,33 @@ PostgresMain(int argc, char *argv[],
 				 * it will fail to be called during other backend-shutdown
 				 * scenarios.
 				 */
+				if (SessionPool)
+				{
+					DeleteWaitEventFromSet(SessionPool, MyProcPort->sock);
+					elog(DEBUG1, "Close session %d in backend %d", MyProcPort->sock, MyProcPid);
+					pq_getmsgend(&input_message);
+
+					if (CurrentSessionId)
+					{
+						DropSessionPreparedStatements(CurrentSessionId);
+						free(CurrentSessionId);
+						CurrentSessionId = NULL;
+					}
+
+					close(MyProcPort->sock);
+					MyProcPort->sock = PGINVALID_SOCKET;
+
+					send_ready_for_query = true;
+					break;
+				}
+				elog(DEBUG1, "Terminate backend %d", MyProcPid);
 				proc_exit(0);
 
 			case 'd':			/* copy data */
 			case 'c':			/* copy done */
 			case 'f':			/* copy fail */
 
-				/*
+				/*
 				 * Accept but ignore these messages, per protocol spec; we
 				 * probably got here because a COPY failed, and the frontend
 				 * is still sending data.
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 54fa4a3..b2f43a8 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -120,7 +120,9 @@ int			maintenance_work_mem = 16384;
  * register background workers.
  */
 int			NBuffers = 1000;
+int			SessionPoolSize = 0;
 int			MaxConnections = 90;
+int			MaxSessions = 1000;
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c
index f9b3309..571c80f 100644
--- a/src/backend/utils/init/postinit.c
+++ b/src/backend/utils/init/postinit.c
@@ -65,7 +65,7 @@
 
 static HeapTuple GetDatabaseTuple(const char *dbname);
 static HeapTuple GetDatabaseTupleByOid(Oid dboid);
-static void PerformAuthentication(Port *port);
+void PerformAuthentication(Port *port);
 static void CheckMyDatabase(const char *name, bool am_superuser);
 static void InitCommunication(void);
 static void ShutdownPostgres(int code, Datum arg);
@@ -180,7 +180,7 @@ GetDatabaseTupleByOid(Oid dboid)
  *
  * returns: nothing.  Will not return at all if there's any failure.
  */
-static void
+void
 PerformAuthentication(Port *port)
 {
 	/* This should be set already, but let's make sure */
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 72f6be3..02373a3 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -1871,6 +1871,26 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"max_sessions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the maximum number of client sessions."),
+			NULL
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"session_pool_size", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets number of backends serving client sessions."),
+			NULL
+		},
+		&SessionPoolSize,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
 			NULL
diff --git a/src/include/commands/prepare.h b/src/include/commands/prepare.h
index ffec029..cb5f8d4 100644
--- a/src/include/commands/prepare.h
+++ b/src/include/commands/prepare.h
@@ -56,5 +56,6 @@ extern TupleDesc FetchPreparedStatementResultDesc(PreparedStatement *stmt);
 extern List *FetchPreparedStatementTargetList(PreparedStatement *stmt);
 
 extern void DropAllPreparedStatements(void);
+extern void DropSessionPreparedStatements(char const* sessionId);
 
 #endif							/* PREPARE_H */
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index 49cb263..f31f89b 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -127,7 +127,8 @@ typedef struct Port
 	int			remote_hostname_errcode;	/* see above */
 	char	   *remote_port;	/* text rep of remote port */
 	CAC_state	canAcceptConnections;	/* postmaster connection status */
-
+	pgsocket    session_recv_sock;   /* socket for receiving descriptors of new session sockets */
+	
 	/*
 	 * Information that needs to be saved from the startup packet and passed
 	 * into backend execution.  "char *" fields are NULL if not set.
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 2e7725d..9169b21 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -71,6 +71,7 @@ extern int	pq_getbyte(void);
 extern int	pq_peekbyte(void);
 extern int	pq_getbyte_if_available(unsigned char *c);
 extern int	pq_putbytes(const char *s, size_t len);
+extern int  pq_available_bytes(void);
 
 /*
  * prototypes for functions in be-secure.c
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 54ee273..a9f9228 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -157,6 +157,8 @@ extern PGDLLIMPORT char *DataDir;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
 extern PGDLLIMPORT int max_worker_processes;
 extern int	max_parallel_workers;
 
@@ -420,6 +422,7 @@ extern void InitializeMaxBackends(void);
 extern void InitPostgres(const char *in_dbname, Oid dboid, const char *username,
 			 Oid useroid, char *out_dbname);
 extern void BaseInit(void);
+extern void PerformAuthentication(struct Port *port);
 
 /* in utils/init/miscinit.c */
 extern bool IgnoreSystemIndexes;
diff --git a/src/include/port.h b/src/include/port.h
index 3e528fa..c14a20d 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index 1877eef..c9527c9 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -62,6 +62,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+struct Port;
+extern int	ProcessStartupPacket(struct Port *port, bool SSLdone, MemoryContext memctx);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index a4bcb48..10f30d1 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -176,6 +176,8 @@ extern int WaitLatch(volatile Latch *latch, int wakeEvents, long timeout,
 extern int WaitLatchOrSocket(volatile Latch *latch, int wakeEvents,
 				  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, pgsocket fd);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
#2 Ivan Novick
inovick@pivotal.io
In reply to: Konstantin Knizhnik (#1)
Re: Built-in connection pooling

+1 to the concept... A lot of users could benefit if we did this in a
good way.

--
Ivan Novick, Product Manager Pivotal Greenplum
inovick@pivotal.io -- (Mobile) 408-230-6491
https://www.youtube.com/GreenplumDatabase

#3 Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Konstantin Knizhnik (#1)
1 attachment(s)
Re: Built-in connection pooling

Attached please find a new version of the patch with a few fixes,
and more results on a NUMA system with 144 cores and 3 TB of RAM.

Read-only pgbench (-S), results in kTPS:

Connections    Vanilla Postgres    Session pool size 256
1k             1300                1505
10k            633                 1519
100k           -                   1425

Read-write contention test: access to a small number of records with 1%
of updates. Results in TPS:

Clients    Vanilla Postgres    Session pool size 256
100        557232              573319
200        520395              551670
300        511423              533773
400        468562              523091
500        442268              514056
600        401860              526704
700        363912              530317
800        325148              512238
900        301310              512844
1000       278829              554516

So, as you can see, there is no performance degradation with an increasing number of connections when session pooling is used.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

session_pool-2.patch (text/x-patch)
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index b945b15..a73d584 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -813,3 +813,30 @@ build_regtype_array(Oid *param_types, int num_params)
 	result = construct_array(tmp_ary, num_params, REGTYPEOID, 4, true, 'i');
 	return PointerGetDatum(result);
 }
+
+
+void
+DropSessionPreparedStatements(char const* sessionId)
+{
+	HASH_SEQ_STATUS seq;
+	PreparedStatement *entry;
+	size_t idLen = strlen(sessionId);
+
+	/* nothing cached */
+	if (!prepared_queries)
+		return;
+
+	/* walk over cache */
+	hash_seq_init(&seq, prepared_queries);
+	while ((entry = hash_seq_search(&seq)) != NULL)
+	{
+		if (strncmp(entry->stmt_name, sessionId, idLen) == 0 && entry->stmt_name[idLen] == '.')
+		{
+			/* Release the plancache entry */
+			DropCachedPlan(entry->plansource);
+
+			/* Now we can remove the hash table entry */
+			hash_search(prepared_queries, entry->stmt_name, HASH_REMOVE, NULL);
+		}
+	}
+}
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index a4f6d4d..5b07a88 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -1029,6 +1029,18 @@ pq_peekbyte(void)
 }
 
 /* --------------------------------
+ *		pq_available_bytes	- number of unread bytes in receive buffer
+ *
+ *	 Returns the number of bytes received from the client but not yet consumed.
+ * --------------------------------
+ */
+int
+pq_available_bytes(void)
+{
+	return PqRecvLength - PqRecvPointer;
+}
+
+/* --------------------------------
  *		pq_getbyte_if_available - get a single byte from connection,
  *			if available
  *
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index aba1e92..56ec998 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o dynloader.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o dynloader.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..7b36923
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,89 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int pg_send_sock(pgsocket chan, pgsocket sock)
+{
+    struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+    char buf[CMSG_SPACE(sizeof(sock))];
+    memset(buf, '\0', sizeof(buf));
+
+    /* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+    io.iov_base = "";
+	io.iov_len = 1;
+
+    msg.msg_iov = &io;
+    msg.msg_iovlen = 1;
+    msg.msg_control = buf;
+    msg.msg_controllen = sizeof(buf);
+
+    cmsg = CMSG_FIRSTHDR(&msg);
+    cmsg->cmsg_level = SOL_SOCKET;
+    cmsg->cmsg_type = SCM_RIGHTS;
+    cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+    memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+    msg.msg_controllen = cmsg->cmsg_len;
+
+    if (sendmsg(chan, &msg, 0) < 0)
+	{
+		return -1;
+	}
+	return 0;
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket pg_recv_sock(pgsocket chan)
+{
+    struct msghdr msg = {0};
+    char c_buffer[256];
+    char m_buffer[256];
+    struct iovec io;
+	struct cmsghdr * cmsg;
+	pgsocket sock;
+
+    io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+    msg.msg_iov = &io;
+    msg.msg_iovlen = 1;
+
+    msg.msg_control = c_buffer;
+    msg.msg_controllen = sizeof(c_buffer);
+
+    if (recvmsg(chan, &msg, 0) < 0)
+	{
+		return -1;
+	}
+
+    cmsg = CMSG_FIRSTHDR(&msg);
+    memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+
+    return sock;
+}
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index f3ddf82..710e22c 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -169,6 +169,7 @@ typedef struct bkend
 	pid_t		pid;			/* process id of backend */
 	int32		cancel_key;		/* cancel key for cancels for this backend */
 	int			child_slot;		/* PMChildSlot for this backend, if any */
+	pgsocket    session_send_sock;  /* Write end of socket pipe to this backend used to send session socket descriptor to the backend process */
 
 	/*
 	 * Flavor of backend or auxiliary process.  Note that BACKEND_TYPE_WALSND
@@ -182,6 +183,15 @@ typedef struct bkend
 } Backend;
 
 static dlist_head BackendList = DLIST_STATIC_INIT(BackendList);
+/*
+ * Pointer in backend list used to implement round-robin distribution of sessions through backends.
+ * This variable is either NULL or points to a normal backend.
+ */
+static Backend*   BackendListClockPtr;
+/*
+ * Number of active normal backends
+ */
+static int        nNormalBackends;
 
 #ifdef EXEC_BACKEND
 static Backend *ShmemBackendArray;
@@ -412,7 +422,6 @@ static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
 static int	BackendStartup(Port *port);
-static int	ProcessStartupPacket(Port *port, bool SSLdone);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
 static int	initMasks(fd_set *rmask);
@@ -568,6 +577,22 @@ HANDLE		PostmasterHandle;
 #endif
 
 /*
+ * Move current backend pointer to the next normal backend.
+ * This function is called either when a new session is started (to implement the round-robin policy) or when the backend pointed to by BackendListClockPtr is terminated
+ */
+static void AdvanceBackendListClockPtr(void)
+{
+	Backend* b = BackendListClockPtr;
+	do {
+		dlist_node* node = &b->elem;
+		node = node->next ? node->next : BackendList.head.next;
+		b = dlist_container(Backend, elem, node);
+	} while (b->bkend_type != BACKEND_TYPE_NORMAL && b != BackendListClockPtr);
+
+	BackendListClockPtr = b;
+}
+
+/*
  * Postmaster main entry point
  */
 void
@@ -1944,8 +1969,8 @@ initMasks(fd_set *rmask)
  * send anything to the client, which would typically be appropriate
  * if we detect a communications failure.)
  */
-static int
-ProcessStartupPacket(Port *port, bool SSLdone)
+int
+ProcessStartupPacket(Port *port, bool SSLdone, MemoryContext memctx)
 {
 	int32		len;
 	void	   *buf;
@@ -2043,7 +2068,7 @@ retry1:
 #endif
 		/* regular startup packet, cancel, etc packet should follow... */
 		/* but not another SSL negotiation request */
-		return ProcessStartupPacket(port, true);
+		return ProcessStartupPacket(port, true, memctx);
 	}
 
 	/* Could add additional special packet types here */
@@ -2073,7 +2098,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2449,7 +2474,7 @@ ConnCreate(int serverFd)
 		ConnFree(port);
 		return NULL;
 	}
-
+	port->session_recv_sock = PGINVALID_SOCKET;
 	/*
 	 * Allocate GSSAPI specific state struct
 	 */
@@ -3236,6 +3261,24 @@ CleanupBackgroundWorker(int pid,
 }
 
 /*
+ * Unlink backend from backend's list and free memory
+ */
+static void UnlinkBackend(Backend* bp)
+{
+	if (bp->bkend_type == BACKEND_TYPE_NORMAL)
+	{
+		if (bp == BackendListClockPtr)
+			AdvanceBackendListClockPtr();
+		if (bp->session_send_sock != PGINVALID_SOCKET)
+			close(bp->session_send_sock);
+		elog(DEBUG2, "Cleanup backend %d", bp->pid);
+		nNormalBackends -= 1;
+	}
+	dlist_delete(&bp->elem);
+	free(bp);
+}
+
+/*
  * CleanupBackend -- cleanup after terminated backend.
  *
  * Remove all local state associated with backend.
@@ -3312,8 +3355,7 @@ CleanupBackend(int pid,
 				 */
 				BackgroundWorkerStopNotifications(bp->pid);
 			}
-			dlist_delete(iter.cur);
-			free(bp);
+			UnlinkBackend(bp);
 			break;
 		}
 	}
@@ -3415,8 +3457,7 @@ HandleChildCrash(int pid, int exitstatus, const char *procname)
 				ShmemBackendArrayRemove(bp);
 #endif
 			}
-			dlist_delete(iter.cur);
-			free(bp);
+			UnlinkBackend(bp);
 			/* Keep looping so we can signal remaining backends */
 		}
 		else
@@ -4017,6 +4058,19 @@ BackendStartup(Port *port)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
+	int         session_pipe[2];
+
+	if (SessionPoolSize != 0 && nNormalBackends >= SessionPoolSize)
+	{
+		/* Instead of spawning a new backend, open a new session at one of the existing backends. */
+		Assert(BackendListClockPtr && BackendListClockPtr->session_send_sock != PGINVALID_SOCKET);
+		elog(DEBUG2, "Start new session for socket %d at backend %d total %d", port->sock, BackendListClockPtr->pid, nNormalBackends);
+		if (pg_send_sock(BackendListClockPtr->session_send_sock, port->sock) < 0)
+			elog(FATAL, "Failed to send session socket: %m");
+		AdvanceBackendListClockPtr(); /* round-robin */
+		return STATUS_OK;
+	}
+
 
 	/*
 	 * Create backend data structure.  Better before the fork() so we can
@@ -4030,7 +4084,6 @@ BackendStartup(Port *port)
 				 errmsg("out of memory")));
 		return STATUS_ERROR;
 	}
-
 	/*
 	 * Compute the cancel key that will be assigned to this backend. The
 	 * backend will have its own copy in the forked-off process' value of
@@ -4063,12 +4116,23 @@ BackendStartup(Port *port)
 	/* Hasn't asked to be notified about any bgworkers yet */
 	bn->bgworker_notify = false;
 
+	if (SessionPoolSize != 0)
+		if (socketpair(AF_UNIX, SOCK_DGRAM, 0, session_pipe) < 0)
+			ereport(FATAL,
+					(errcode_for_file_access(),
+					 errmsg_internal("could not create socket pair for launching sessions: %m")));
+
 #ifdef EXEC_BACKEND
 	pid = backend_forkexec(port);
 #else							/* !EXEC_BACKEND */
 	pid = fork_process();
 	if (pid == 0)				/* child */
 	{
+		if (SessionPoolSize != 0)
+		{
+			port->session_recv_sock = session_pipe[0];
+			close(session_pipe[1]);
+		}
 		free(bn);
 
 		/* Detangle from postmaster */
@@ -4110,9 +4174,19 @@ BackendStartup(Port *port)
 	 * of backends.
 	 */
 	bn->pid = pid;
+	if (SessionPoolSize != 0)
+	{
+		bn->session_send_sock = session_pipe[1];
+		close(session_pipe[0]);
+	}
+	else
+		bn->session_send_sock = PGINVALID_SOCKET;
 	bn->bkend_type = BACKEND_TYPE_NORMAL;	/* Can change later to WALSND */
 	dlist_push_head(&BackendList, &bn->elem);
-
+	if (BackendListClockPtr == NULL)
+		BackendListClockPtr = bn;
+	nNormalBackends += 1;
+	elog(DEBUG2, "Start backend %d total %d", pid, nNormalBackends);
 #ifdef EXEC_BACKEND
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
@@ -4299,7 +4373,7 @@ BackendInitialize(Port *port)
 	 * Receive the startup packet (which might turn out to be a cancel request
 	 * packet).
 	 */
-	status = ProcessStartupPacket(port, false);
+	status = ProcessStartupPacket(port, false, TopMemoryContext);
 
 	/*
 	 * Stop here if it was bad or a cancel packet.  ProcessStartupPacket
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index e6706f7..9c42fab 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -76,6 +76,7 @@ struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* singly-linked list of free events, linked by "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -129,7 +130,7 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
 static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
 #endif
@@ -562,6 +563,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -667,6 +669,7 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
 	Assert(set->nevents < set->nevents_space);
@@ -690,8 +693,19 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->nevents += 1;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -718,7 +732,7 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
 	WaitEventAdjustWin32(set, event);
 #endif
@@ -727,6 +741,27 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 }
 
 /*
+ * Remove event with specified socket descriptor
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, pgsocket fd)
+{
+	int i, n = set->nevents;
+	for (i = 0; i < n; i++)
+	{
+		WaitEvent  *event = &set->events[i];
+		if (event->fd == fd)
+		{
+#if defined(WAIT_USE_EPOLL)
+			WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+			WaitEventAdjustPoll(set, event, true);
+#endif
+			break;
+		}
+	}
+}
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -774,7 +809,7 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
 	WaitEventAdjustWin32(set, event);
 #endif
@@ -827,14 +862,33 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
 				 errmsg("epoll_ctl() failed: %m")));
+
+	if (action == EPOLL_CTL_DEL)
+	{
+		int pos = event->pos;
+		event->fd = PGINVALID_SOCKET;
+		set->nevents -= 1;
+		event->pos = set->free_events;
+		set->free_events = pos;
+	}
 }
 #endif
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	int pos = event->pos;
+	struct pollfd *pollfd = &set->pollfds[pos];
+
+	if (remove)
+	{
+		set->nevents -= 1;
+		*pollfd = set->pollfds[set->nevents];
+		set->events[pos] = set->events[set->nevents];
+		event->pos = pos;
+		return;
+	}
 
 	pollfd->revents = 0;
 	pollfd->fd = event->fd;
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index ddc3ec8..ffc1494 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -75,6 +75,7 @@
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
 #include "utils/timestamp.h"
+#include "utils/builtins.h"
 #include "mb/pg_wchar.h"
 
 
@@ -169,6 +170,10 @@ static ProcSignalReason RecoveryConflictReason;
 static MemoryContext row_description_context = NULL;
 static StringInfoData row_description_buf;
 
+static WaitEventSet* SessionPool;
+static int64         SessionCount;
+static char*         CurrentSessionId;
+
 /* ----------------------------------------------------------------
  *		decls for routines only used in this file
  * ----------------------------------------------------------------
@@ -194,6 +199,15 @@ static void log_disconnections(int code, Datum arg);
 static void enable_statement_timeout(void);
 static void disable_statement_timeout(void);
 
+/*
+ * Generate session ID unique within this backend
+ */
+static char* CreateSessionId(void)
+{
+	char buf[64];
+	pg_lltoa(++SessionCount, buf);
+	return strdup(buf);
+}
 
 /* ----------------------------------------------------------------
  *		routines to obtain user input
@@ -473,7 +487,7 @@ SocketBackend(StringInfo inBuf)
 			 * fatal because we have probably lost message boundary sync, and
 			 * there's no good way to recover.
 			 */
-			ereport(FATAL,
+		    ereport(FATAL,
 					(errcode(ERRCODE_PROTOCOL_VIOLATION),
 					 errmsg("invalid frontend message type %d", qtype)));
 			break;
@@ -1232,6 +1246,12 @@ exec_parse_message(const char *query_string,	/* string to execute */
 	bool		save_log_statement_stats = log_statement_stats;
 	char		msec_str[32];
 
+	if (CurrentSessionId && stmt_name[0] != '\0')
+	{
+		/* Make names of prepared statements unique for session in case of using internal session pool */
+		stmt_name = psprintf("%s.%s", CurrentSessionId, stmt_name);
+	}
+
 	/*
 	 * Report query to various monitoring facilities.
 	 */
@@ -1503,6 +1523,12 @@ exec_bind_message(StringInfo input_message)
 	portal_name = pq_getmsgstring(input_message);
 	stmt_name = pq_getmsgstring(input_message);
 
+	if (CurrentSessionId && stmt_name[0] != '\0')
+	{
+		/* Make names of prepared statements unique for session in case of using internal session pool */
+		stmt_name = psprintf("%s.%s", CurrentSessionId, stmt_name);
+	}
+
 	ereport(DEBUG2,
 			(errmsg("bind %s to %s",
 					*portal_name ? portal_name : "<unnamed>",
@@ -2325,6 +2351,12 @@ exec_describe_statement_message(const char *stmt_name)
 	CachedPlanSource *psrc;
 	int			i;
 
+	if (CurrentSessionId && stmt_name[0] != '\0')
+	{
+		/* Make names of prepared statements unique for session in case of using internal session pool */
+		stmt_name = psprintf("%s.%s", CurrentSessionId, stmt_name);
+	}
+
 	/*
 	 * Start up a transaction command. (Note that this will normally change
 	 * current memory context.) Nothing happens if we are already in one.
@@ -3603,7 +3635,6 @@ process_postgres_switches(int argc, char *argv[], GucContext ctx,
 #endif
 }
 
-
 /* ----------------------------------------------------------------
  * PostgresMain
  *	   postgres main loop -- all backends, interactive or otherwise start here
@@ -3654,6 +3685,10 @@ PostgresMain(int argc, char *argv[],
 							progname)));
 	}
 
+	/* Assign session ID if use session pooling */
+	if (SessionPoolSize != 0)
+		CurrentSessionId = CreateSessionId();
+
 	/* Acquire configuration parameters, unless inherited from postmaster */
 	if (!IsUnderPostmaster)
 	{
@@ -3783,7 +3818,7 @@ PostgresMain(int argc, char *argv[],
 	 * ... else we'd need to copy the Port data first.  Also, subsidiary data
 	 * such as the username isn't lost either; see ProcessStartupPacket().
 	 */
-	if (PostmasterContext)
+	if (PostmasterContext && SessionPoolSize == 0)
 	{
 		MemoryContextDelete(PostmasterContext);
 		PostmasterContext = NULL;
@@ -4069,6 +4104,102 @@ PostgresMain(int argc, char *argv[],
 
 			ReadyForQuery(whereToSendOutput);
 			send_ready_for_query = false;
+
+			if (MyProcPort && MyProcPort->session_recv_sock != PGINVALID_SOCKET && !IsTransactionState() && pq_available_bytes() == 0)
+			{
+				WaitEvent ready_client;
+				whereToSendOutput = DestRemote;
+				if (SessionPool == NULL)
+				{
+					SessionPool = CreateWaitEventSet(TopMemoryContext, MaxSessions);
+					AddWaitEventToSet(SessionPool, WL_POSTMASTER_DEATH, PGINVALID_SOCKET, NULL, NULL);
+					AddWaitEventToSet(SessionPool, WL_LATCH_SET, PGINVALID_SOCKET, MyLatch, NULL);
+					AddWaitEventToSet(SessionPool, WL_SOCKET_READABLE, MyProcPort->session_recv_sock, NULL, NULL);
+					AddWaitEventToSet(SessionPool, WL_SOCKET_READABLE, MyProcPort->sock, NULL, NULL);
+				}
+			  Retry:
+				DoingCommandRead = true;
+				if (WaitEventSetWait(SessionPool, -1, &ready_client, 1, PG_WAIT_CLIENT) != 1)
+				{
+					/* TODO: do some error recovery here */
+					elog(FATAL, "Failed to poll client sessions");
+				}
+				CHECK_FOR_INTERRUPTS();
+				DoingCommandRead = false;
+
+				if (ready_client.events & WL_POSTMASTER_DEATH)
+					ereport(FATAL,
+							(errcode(ERRCODE_ADMIN_SHUTDOWN),
+							 errmsg("terminating connection due to unexpected postmaster exit")));
+
+				if (ready_client.events & WL_LATCH_SET)
+				{
+					ResetLatch(MyLatch);
+					ProcessClientReadInterrupt(true);
+					goto Retry;
+				}
+
+				if (ready_client.fd == MyProcPort->session_recv_sock)
+				{
+					int		 status;
+					Port     port;
+					Port*    myPort;
+					StringInfoData buf;
+					pgsocket sock = pg_recv_sock(MyProcPort->session_recv_sock);
+					if (sock < 0)
+						elog(FATAL, "Failed to receive session socket: %m");
+
+					/*
+					 * Receive the startup packet (which might turn out to be a cancel request
+					 * packet).
+					 */
+					port.sock = sock;
+					myPort = MyProcPort;
+					MyProcPort = &port;
+					status = ProcessStartupPacket(&port, false, MessageContext);
+					MyProcPort = myPort;
+					if (strcmp(port.database_name, MyProcPort->database_name) ||
+						strcmp(port.user_name, MyProcPort->user_name))
+					{
+						elog(FATAL, "Failed to open session (dbname=%s user=%s) in backend %d (dbname=%s user=%s)",
+							 port.database_name, port.user_name,
+							 MyProcPid, MyProcPort->database_name, MyProcPort->user_name);
+					}
+					else if (status == STATUS_OK)
+					{
+						elog(DEBUG2, "Start new session %d in backend %d for database %s user %s",
+							 sock, MyProcPid, port.database_name, port.user_name);
+						CurrentSessionId = CreateSessionId();
+						AddWaitEventToSet(SessionPool, WL_SOCKET_READABLE, sock, NULL, CurrentSessionId);
+						MyProcPort->sock = sock;
+						send_ready_for_query = true;
+
+						SetCurrentStatementStartTimestamp();
+						StartTransactionCommand();
+						PerformAuthentication(MyProcPort);
+						CommitTransactionCommand();
+
+						/*
+						 * Send this backend's cancellation info to the frontend.
+						 */
+						pq_beginmessage(&buf, 'K');
+						pq_sendint32(&buf, (int32) MyProcPid);
+						pq_sendint32(&buf, (int32) MyCancelKey);
+						pq_endmessage(&buf);
+						/* Need not flush since ReadyForQuery will do it. */
+						continue;
+					}
+					elog(LOG, "Session startup failed");
+					close(sock);
+					goto Retry;
+				}
+				else
+				{
+					elog(DEBUG2, "Switch to session %d in backend %d", ready_client.fd, MyProcPid);
+					MyProcPort->sock = ready_client.fd;
+					CurrentSessionId = (char*)ready_client.user_data;
+				}
+			}
 		}
 
 		/*
@@ -4350,13 +4481,36 @@ PostgresMain(int argc, char *argv[],
 				 * it will fail to be called during other backend-shutdown
 				 * scenarios.
 				 */
+				if (SessionPool)
+				{
+					DeleteWaitEventFromSet(SessionPool, MyProcPort->sock);
+					elog(DEBUG1, "Close session %d in backend %d", MyProcPort->sock, MyProcPid);
+
+					pq_getmsgend(&input_message);
+					if (pq_is_reading_msg())
+						pq_endmsgread();
+
+					if (CurrentSessionId)
+					{
+						DropSessionPreparedStatements(CurrentSessionId);
+						free(CurrentSessionId);
+						CurrentSessionId = NULL;
+					}
+
+					close(MyProcPort->sock);
+					MyProcPort->sock = PGINVALID_SOCKET;
+
+					send_ready_for_query = true;
+					break;
+				}
+				elog(DEBUG1, "Terminate backend %d", MyProcPid);
 				proc_exit(0);
 
 			case 'd':			/* copy data */
 			case 'c':			/* copy done */
 			case 'f':			/* copy fail */
 
-				/*
+				/*
 				 * Accept but ignore these messages, per protocol spec; we
 				 * probably got here because a COPY failed, and the frontend
 				 * is still sending data.
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 54fa4a3..b2f43a8 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -120,7 +120,9 @@ int			maintenance_work_mem = 16384;
  * register background workers.
  */
 int			NBuffers = 1000;
+int			SessionPoolSize = 0;
 int			MaxConnections = 90;
+int			MaxSessions = 1000;
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c
index f9b3309..571c80f 100644
--- a/src/backend/utils/init/postinit.c
+++ b/src/backend/utils/init/postinit.c
@@ -65,7 +65,7 @@
 
 static HeapTuple GetDatabaseTuple(const char *dbname);
 static HeapTuple GetDatabaseTupleByOid(Oid dboid);
-static void PerformAuthentication(Port *port);
+void PerformAuthentication(Port *port);
 static void CheckMyDatabase(const char *name, bool am_superuser);
 static void InitCommunication(void);
 static void ShutdownPostgres(int code, Datum arg);
@@ -180,7 +180,7 @@ GetDatabaseTupleByOid(Oid dboid)
  *
  * returns: nothing.  Will not return at all if there's any failure.
  */
-static void
+void
 PerformAuthentication(Port *port)
 {
 	/* This should be set already, but let's make sure */
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 72f6be3..02373a3 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -1871,6 +1871,26 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"max_sessions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the maximum number of client sessions."),
+			NULL
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"session_pool_size", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the number of backends serving client sessions."),
+			NULL
+		},
+		&SessionPoolSize,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
 			NULL
diff --git a/src/include/commands/prepare.h b/src/include/commands/prepare.h
index ffec029..cb5f8d4 100644
--- a/src/include/commands/prepare.h
+++ b/src/include/commands/prepare.h
@@ -56,5 +56,6 @@ extern TupleDesc FetchPreparedStatementResultDesc(PreparedStatement *stmt);
 extern List *FetchPreparedStatementTargetList(PreparedStatement *stmt);
 
 extern void DropAllPreparedStatements(void);
+extern void DropSessionPreparedStatements(char const* sessionId);
 
 #endif							/* PREPARE_H */
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index 49cb263..f31f89b 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -127,7 +127,8 @@ typedef struct Port
 	int			remote_hostname_errcode;	/* see above */
 	char	   *remote_port;	/* text rep of remote port */
 	CAC_state	canAcceptConnections;	/* postmaster connection status */
-
+	pgsocket    session_recv_sock;   /* socket for receiving descriptor of new session sockets */
+	
 	/*
 	 * Information that needs to be saved from the startup packet and passed
 	 * into backend execution.  "char *" fields are NULL if not set.
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 2e7725d..9169b21 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -71,6 +71,7 @@ extern int	pq_getbyte(void);
 extern int	pq_peekbyte(void);
 extern int	pq_getbyte_if_available(unsigned char *c);
 extern int	pq_putbytes(const char *s, size_t len);
+extern int  pq_available_bytes(void);
 
 /*
  * prototypes for functions in be-secure.c
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 54ee273..a9f9228 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -157,6 +157,8 @@ extern PGDLLIMPORT char *DataDir;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
 extern PGDLLIMPORT int max_worker_processes;
 extern int	max_parallel_workers;
 
@@ -420,6 +422,7 @@ extern void InitializeMaxBackends(void);
 extern void InitPostgres(const char *in_dbname, Oid dboid, const char *username,
 			 Oid useroid, char *out_dbname);
 extern void BaseInit(void);
+extern void PerformAuthentication(struct Port *port);
 
 /* in utils/init/miscinit.c */
 extern bool IgnoreSystemIndexes;
diff --git a/src/include/port.h b/src/include/port.h
index 3e528fa..c14a20d 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index 1877eef..c9527c9 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -62,6 +62,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+struct Port;
+extern int	ProcessStartupPacket(struct Port *port, bool SSLdone, MemoryContext memctx);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index a4bcb48..10f30d1 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -176,6 +176,8 @@ extern int WaitLatch(volatile Latch *latch, int wakeEvents, long timeout,
 extern int WaitLatchOrSocket(volatile Latch *latch, int wakeEvents,
 				  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, pgsocket fd);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
#4Claudio Freire
klaussfreire@gmail.com
In reply to: Konstantin Knizhnik (#3)
Re: Built-in connection pooling

On Thu, Jan 18, 2018 at 11:48 AM, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:

Attached please find a new version of the patch with a few fixes.
And more results on a NUMA system with 144 cores and 3Tb of RAM.

Read-only pgbench (-S), kTPS:

#Connections    Vanilla Postgres    Session pool size 256
1k              1300                1505
10k             633                 1519
100k            -                   1425

Read-write contention test: access to a small number of records with 1%
of updates.

#Clients    Vanilla Postgres (TPS)    Session pool size 256 (TPS)
100         557232                    573319
200         520395                    551670
300         511423                    533773
400         468562                    523091
500         442268                    514056
600         401860                    526704
700         363912                    530317
800         325148                    512238
900         301310                    512844
1000        278829                    554516

So, as you can see, there is no degradation of performance with an
increased number of connections when session pooling is used.

TBH, the tests you should be running are comparisons with a similar pool
size managed by pgbouncer, not just vanilla unlimited postgres.

Of course a limited pool size will beat thousands of concurrent queries by
a large margin. The real question is whether a pthread-based approach beats
the pgbouncer approach.
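
For what it's worth, a pgbouncer setup roughly comparable to
session_pool_size = 256 might look like this (just a sketch: the port
and database names are placeholders, and auth settings are omitted):

    ; pgbouncer.ini
    [databases]
    postgres = host=127.0.0.1 port=5432 dbname=postgres

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    pool_mode = transaction
    default_pool_size = 256

and then point pgbench at the pooler instead of the server:

    pgbench -S -c 1000 -j 32 -p 6432 postgres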

#5Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Konstantin Knizhnik (#3)
Re: Built-in connection pooling

Hi Konstantin,

On 01/18/2018 03:48 PM, Konstantin Knizhnik wrote:

On 17.01.2018 19:09, Konstantin Knizhnik wrote:

Hi hackers,

...

I haven't looked at the code yet, but after reading your message I have
a simple question - how is this going to work with SSL? If you're only
passing a file descriptor, that does not seem to be sufficient for the
backends to do crypto (that requires the SSL stuff from Port).

Maybe I'm missing something and it already works, though ...

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#6Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Tomas Vondra (#5)
1 attachment(s)
Re: Built-in connection pooling

On 18.01.2018 18:02, Tomas Vondra wrote:

Hi Konstantin,

On 01/18/2018 03:48 PM, Konstantin Knizhnik wrote:

On 17.01.2018 19:09, Konstantin Knizhnik wrote:

Hi hackers,

...

I haven't looked at the code yet, but after reading your message I have
a simple question - how is this going to work with SSL? If you're only
passing a file descriptor, that does not seem to be sufficient for the
backends to do crypto (that requires the SSL stuff from Port).

Maybe I'm missing something and it already works, though ...

regards

Ooops, I missed this aspect with SSL. Thank you.
A new version of the patch which correctly maintains the session context
is attached.
Now each session has its own memory context which should be used instead
of TopMemoryContext.
SSL connections work now.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

session_pool-3.patch (text/x-patch)
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index b945b15..a73d584 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -813,3 +813,30 @@ build_regtype_array(Oid *param_types, int num_params)
 	result = construct_array(tmp_ary, num_params, REGTYPEOID, 4, true, 'i');
 	return PointerGetDatum(result);
 }
+
+
+void
+DropSessionPreparedStatements(char const* sessionId)
+{
+	HASH_SEQ_STATUS seq;
+	PreparedStatement *entry;
+	size_t idLen = strlen(sessionId);
+
+	/* nothing cached */
+	if (!prepared_queries)
+		return;
+
+	/* walk over cache */
+	hash_seq_init(&seq, prepared_queries);
+	while ((entry = hash_seq_search(&seq)) != NULL)
+	{
+		if (strncmp(entry->stmt_name, sessionId, idLen) == 0 && entry->stmt_name[idLen] == '.')
+		{
+			/* Release the plancache entry */
+			DropCachedPlan(entry->plansource);
+
+			/* Now we can remove the hash table entry */
+			hash_search(prepared_queries, entry->stmt_name, HASH_REMOVE, NULL);
+		}
+	}
+}
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index a4f6d4d..5b07a88 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -1029,6 +1029,18 @@ pq_peekbyte(void)
 }
 
 /* --------------------------------
+ *		pq_available_bytes	- number of unread bytes in receive buffer
+ *
+ *	 Returns the number of bytes received from the client but not yet consumed.
+ * --------------------------------
+ */
+int
+pq_available_bytes(void)
+{
+	return PqRecvLength - PqRecvPointer;
+}
+
+/* --------------------------------
  *		pq_getbyte_if_available - get a single byte from connection,
  *			if available
  *
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index aba1e92..56ec998 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o dynloader.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o dynloader.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..7b36923
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,89 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int pg_send_sock(pgsocket chan, pgsocket sock)
+{
+    struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+    char buf[CMSG_SPACE(sizeof(sock))];
+    memset(buf, '\0', sizeof(buf));
+
+    /* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+    io.iov_base = "";
+	io.iov_len = 1;
+
+    msg.msg_iov = &io;
+    msg.msg_iovlen = 1;
+    msg.msg_control = buf;
+    msg.msg_controllen = sizeof(buf);
+
+    cmsg = CMSG_FIRSTHDR(&msg);
+    cmsg->cmsg_level = SOL_SOCKET;
+    cmsg->cmsg_type = SCM_RIGHTS;
+    cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+    memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+    msg.msg_controllen = cmsg->cmsg_len;
+
+    if (sendmsg(chan, &msg, 0) < 0)
+	{
+		return -1;
+	}
+	return 0;
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket pg_recv_sock(pgsocket chan)
+{
+    struct msghdr msg = {0};
+    char c_buffer[256];
+    char m_buffer[256];
+    struct iovec io;
+	struct cmsghdr * cmsg;
+	pgsocket sock;
+
+    io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+    msg.msg_iov = &io;
+    msg.msg_iovlen = 1;
+
+    msg.msg_control = c_buffer;
+    msg.msg_controllen = sizeof(c_buffer);
+
+    if (recvmsg(chan, &msg, 0) < 0)
+	{
+		return -1;
+	}
+
+    cmsg = CMSG_FIRSTHDR(&msg);
+    memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+
+    return sock;
+}
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index f3ddf82..4586b57 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -169,6 +169,7 @@ typedef struct bkend
 	pid_t		pid;			/* process id of backend */
 	int32		cancel_key;		/* cancel key for cancels for this backend */
 	int			child_slot;		/* PMChildSlot for this backend, if any */
+	pgsocket    session_send_sock;  /* Write end of socket pipe to this backend used to send session socket descriptor to the backend process */
 
 	/*
 	 * Flavor of backend or auxiliary process.  Note that BACKEND_TYPE_WALSND
@@ -182,6 +183,15 @@ typedef struct bkend
 } Backend;
 
 static dlist_head BackendList = DLIST_STATIC_INIT(BackendList);
+/*
+ * Pointer in backend list used to implement round-robin distribution of sessions through backends.
+ * This variable is either NULL or points to a normal backend.
+ */
+static Backend*   BackendListClockPtr;
+/*
+ * Number of active normal backends
+ */
+static int        nNormalBackends;
 
 #ifdef EXEC_BACKEND
 static Backend *ShmemBackendArray;
@@ -412,7 +422,6 @@ static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
 static int	BackendStartup(Port *port);
-static int	ProcessStartupPacket(Port *port, bool SSLdone);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
 static int	initMasks(fd_set *rmask);
@@ -568,6 +577,22 @@ HANDLE		PostmasterHandle;
 #endif
 
 /*
+ * Move current backend pointer to the next normal backend.
+ * This function is called either when a new session is started (to implement the round-robin policy) or when the backend pointed to by BackendListClockPtr is terminated
+ */
+static void AdvanceBackendListClockPtr(void)
+{
+	Backend* b = BackendListClockPtr;
+	do {
+		dlist_node* node = &b->elem;
+		node = node->next ? node->next : BackendList.head.next;
+		b = dlist_container(Backend, elem, node);
+	} while (b->bkend_type != BACKEND_TYPE_NORMAL && b != BackendListClockPtr);
+
+	BackendListClockPtr = b;
+}
+
+/*
  * Postmaster main entry point
  */
 void
@@ -1944,8 +1969,8 @@ initMasks(fd_set *rmask)
  * send anything to the client, which would typically be appropriate
  * if we detect a communications failure.)
  */
-static int
-ProcessStartupPacket(Port *port, bool SSLdone)
+int
+ProcessStartupPacket(Port *port, bool SSLdone, MemoryContext memctx)
 {
 	int32		len;
 	void	   *buf;
@@ -2043,7 +2068,7 @@ retry1:
 #endif
 		/* regular startup packet, cancel, etc packet should follow... */
 		/* but not another SSL negotiation request */
-		return ProcessStartupPacket(port, true);
+		return ProcessStartupPacket(port, true, memctx);
 	}
 
 	/* Could add additional special packet types here */
@@ -2073,7 +2098,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2449,7 +2474,7 @@ ConnCreate(int serverFd)
 		ConnFree(port);
 		return NULL;
 	}
-
+	SessionPoolSock = PGINVALID_SOCKET;
 	/*
 	 * Allocate GSSAPI specific state struct
 	 */
@@ -3236,6 +3261,24 @@ CleanupBackgroundWorker(int pid,
 }
 
 /*
+ * Unlink backend from backend's list and free memory
+ */
+static void UnlinkBackend(Backend* bp)
+{
+	if (bp->bkend_type == BACKEND_TYPE_NORMAL)
+	{
+		if (bp == BackendListClockPtr)
+			AdvanceBackendListClockPtr();
+		if (bp->session_send_sock != PGINVALID_SOCKET)
+			close(bp->session_send_sock);
+		elog(DEBUG2, "Cleanup backend %d", bp->pid);
+		nNormalBackends -= 1;
+	}
+	dlist_delete(&bp->elem);
+	free(bp);
+}
+
+/*
  * CleanupBackend -- cleanup after terminated backend.
  *
  * Remove all local state associated with backend.
@@ -3312,8 +3355,7 @@ CleanupBackend(int pid,
 				 */
 				BackgroundWorkerStopNotifications(bp->pid);
 			}
-			dlist_delete(iter.cur);
-			free(bp);
+			UnlinkBackend(bp);
 			break;
 		}
 	}
@@ -3415,8 +3457,7 @@ HandleChildCrash(int pid, int exitstatus, const char *procname)
 				ShmemBackendArrayRemove(bp);
 #endif
 			}
-			dlist_delete(iter.cur);
-			free(bp);
+			UnlinkBackend(bp);
 			/* Keep looping so we can signal remaining backends */
 		}
 		else
@@ -4017,6 +4058,19 @@ BackendStartup(Port *port)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
+	int         session_pipe[2];
+
+	if (SessionPoolSize != 0 && nNormalBackends >= SessionPoolSize)
+	{
+		/* Instead of spawning a new backend, open a new session at one of the existing backends. */
+		Assert(BackendListClockPtr && BackendListClockPtr->session_send_sock != PGINVALID_SOCKET);
+		elog(DEBUG2, "Start new session for socket %d at backend %d total %d", port->sock, BackendListClockPtr->pid, nNormalBackends);
+		if (pg_send_sock(BackendListClockPtr->session_send_sock, port->sock) < 0)
+			elog(FATAL, "Failed to send session socket: %m");
+		AdvanceBackendListClockPtr(); /* round-robin */
+		return STATUS_OK;
+	}
+
 
 	/*
 	 * Create backend data structure.  Better before the fork() so we can
@@ -4030,7 +4084,6 @@ BackendStartup(Port *port)
 				 errmsg("out of memory")));
 		return STATUS_ERROR;
 	}
-
 	/*
 	 * Compute the cancel key that will be assigned to this backend. The
 	 * backend will have its own copy in the forked-off process' value of
@@ -4063,12 +4116,23 @@ BackendStartup(Port *port)
 	/* Hasn't asked to be notified about any bgworkers yet */
 	bn->bgworker_notify = false;
 
+	if (SessionPoolSize != 0)
+		if (socketpair(AF_UNIX, SOCK_DGRAM, 0, session_pipe) < 0)
+			ereport(FATAL,
+					(errcode_for_file_access(),
+					 errmsg_internal("could not create socket pair for launching sessions: %m")));
+
 #ifdef EXEC_BACKEND
 	pid = backend_forkexec(port);
 #else							/* !EXEC_BACKEND */
 	pid = fork_process();
 	if (pid == 0)				/* child */
 	{
+		if (SessionPoolSize != 0)
+		{
+			SessionPoolSock = session_pipe[0];
+			close(session_pipe[1]);
+		}
 		free(bn);
 
 		/* Detangle from postmaster */
@@ -4110,9 +4174,19 @@ BackendStartup(Port *port)
 	 * of backends.
 	 */
 	bn->pid = pid;
+	if (SessionPoolSize != 0)
+	{
+		bn->session_send_sock = session_pipe[1];
+		close(session_pipe[0]);
+	}
+	else
+		bn->session_send_sock = PGINVALID_SOCKET;
 	bn->bkend_type = BACKEND_TYPE_NORMAL;	/* Can change later to WALSND */
 	dlist_push_head(&BackendList, &bn->elem);
-
+	if (BackendListClockPtr == NULL)
+		BackendListClockPtr = bn;
+	nNormalBackends += 1;
+	elog(DEBUG2, "Start backend %d total %d", pid, nNormalBackends);
 #ifdef EXEC_BACKEND
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
@@ -4299,7 +4373,7 @@ BackendInitialize(Port *port)
 	 * Receive the startup packet (which might turn out to be a cancel request
 	 * packet).
 	 */
-	status = ProcessStartupPacket(port, false);
+	status = ProcessStartupPacket(port, false, TopMemoryContext);
 
 	/*
 	 * Stop here if it was bad or a cancel packet.  ProcessStartupPacket
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index e6706f7..9c42fab 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -76,6 +76,7 @@ struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* singly-linked list of free events, linked by "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -129,7 +130,7 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
 static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
 #endif
@@ -562,6 +563,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -667,6 +669,7 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
 	Assert(set->nevents < set->nevents_space);
@@ -690,8 +693,19 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->nevents += 1;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -718,7 +732,7 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
 	WaitEventAdjustWin32(set, event);
 #endif
@@ -727,6 +741,27 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 }
 
 /*
+ * Remove event with specified socket descriptor
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, pgsocket fd)
+{
+	int i, n = set->nevents;
+	for (i = 0; i < n; i++)
+	{
+		WaitEvent  *event = &set->events[i];
+		if (event->fd == fd)
+		{
+#if defined(WAIT_USE_EPOLL)
+			WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+			WaitEventAdjustPoll(set, event, true);
+#endif
+			break;
+		}
+	}
+}
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -774,7 +809,7 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
 	WaitEventAdjustWin32(set, event);
 #endif
@@ -827,14 +862,33 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
 				 errmsg("epoll_ctl() failed: %m")));
+
+	if (action == EPOLL_CTL_DEL)
+	{
+		int pos = event->pos;
+		event->fd = PGINVALID_SOCKET;
+		set->nevents -= 1;
+		event->pos = set->free_events;
+		set->free_events = pos;
+	}
 }
 #endif
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	int pos = event->pos;
+	struct pollfd *pollfd = &set->pollfds[pos];
+
+	if (remove)
+	{
+		set->nevents -= 1;
+		*pollfd = set->pollfds[set->nevents];
+		set->events[pos] = set->events[set->nevents];
+		event->pos = pos;
+		return;
+	}
 
 	pollfd->revents = 0;
 	pollfd->fd = event->fd;
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index ddc3ec8..f8abfd0 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -75,9 +75,17 @@
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
 #include "utils/timestamp.h"
+#include "utils/builtins.h"
 #include "mb/pg_wchar.h"
 
 
+typedef struct SessionContext
+{
+	MemoryContext memory;
+	Port* port;
+	char* id;
+} SessionContext;
+
 /* ----------------
  *		global variables
  * ----------------
@@ -98,6 +106,8 @@ int			max_stack_depth = 100;
 /* wait N seconds to allow attach from a debugger */
 int			PostAuthDelay = 0;
 
+/* Local socket for redirecting sessions to the backends */ 
+pgsocket    SessionPoolSock = PGINVALID_SOCKET;
 
 
 /* ----------------
@@ -169,6 +179,11 @@ static ProcSignalReason RecoveryConflictReason;
 static MemoryContext row_description_context = NULL;
 static StringInfoData row_description_buf;
 
+static WaitEventSet*   SessionPool;
+static int64           SessionCount;
+static SessionContext* CurrentSession;
+static Port*           BackendPort;
+
 /* ----------------------------------------------------------------
  *		decls for routines only used in this file
  * ----------------------------------------------------------------
@@ -194,6 +209,22 @@ static void log_disconnections(int code, Datum arg);
 static void enable_statement_timeout(void);
 static void disable_statement_timeout(void);
 
+/*
+ * Generate session ID unique within this backend
+ */
+static char* CreateSessionId(void)
+{
+	char buf[64];
+	pg_lltoa(++SessionCount, buf);
+	return pstrdup(buf);
+}
+
+static void DeleteSession(SessionContext* session)
+{
+	elog(LOG, "Delete session %p, id=%s,  memory context=%p", session, session->id, session->memory);
+	MemoryContextDelete(session->memory);
+	free(session);
+}
 
 /* ----------------------------------------------------------------
  *		routines to obtain user input
@@ -1232,6 +1263,12 @@ exec_parse_message(const char *query_string,	/* string to execute */
 	bool		save_log_statement_stats = log_statement_stats;
 	char		msec_str[32];
 
+	if (CurrentSession && stmt_name[0] != '\0')
+	{
+		/* Make names of prepared statements unique for session in case of using internal session pool */
+		stmt_name = psprintf("%s.%s", CurrentSession->id, stmt_name);
+	}
+
 	/*
 	 * Report query to various monitoring facilities.
 	 */
@@ -1503,6 +1540,12 @@ exec_bind_message(StringInfo input_message)
 	portal_name = pq_getmsgstring(input_message);
 	stmt_name = pq_getmsgstring(input_message);
 
+	if (CurrentSession && stmt_name[0] != '\0')
+	{
+		/* Make names of prepared statements unique for session in case of using internal session pool */
+		stmt_name = psprintf("%s.%s", CurrentSession->id, stmt_name);
+	}
+
 	ereport(DEBUG2,
 			(errmsg("bind %s to %s",
 					*portal_name ? portal_name : "<unnamed>",
@@ -2325,6 +2368,12 @@ exec_describe_statement_message(const char *stmt_name)
 	CachedPlanSource *psrc;
 	int			i;
 
+	if (CurrentSession && stmt_name[0] != '\0')
+	{
+		/* Make names of prepared statements unique for session in case of using internal session pool */
+		stmt_name = psprintf("%s.%s", CurrentSession->id, stmt_name);
+	}
+
 	/*
 	 * Start up a transaction command. (Note that this will normally change
 	 * current memory context.) Nothing happens if we are already in one.
@@ -3603,7 +3652,6 @@ process_postgres_switches(int argc, char *argv[], GucContext ctx,
 #endif
 }
 
-
 /* ----------------------------------------------------------------
  * PostgresMain
  *	   postgres main loop -- all backends, interactive or otherwise start here
@@ -3654,6 +3702,21 @@ PostgresMain(int argc, char *argv[],
 							progname)));
 	}
 
+	/* Assign session ID if use session pooling */
+	if (SessionPoolSize != 0)
+	{
+		MemoryContext oldcontext;
+		CurrentSession = (SessionContext*)malloc(sizeof(SessionContext));
+		CurrentSession->memory = AllocSetContextCreate(TopMemoryContext,
+													   "SessionMemoryContext",
+													   ALLOCSET_DEFAULT_SIZES);
+		oldcontext = MemoryContextSwitchTo(CurrentSession->memory);
+		CurrentSession->id = CreateSessionId();
+		CurrentSession->port = MyProcPort;
+		BackendPort = MyProcPort;
+		MemoryContextSwitchTo(oldcontext);
+	}
+
 	/* Acquire configuration parameters, unless inherited from postmaster */
 	if (!IsUnderPostmaster)
 	{
@@ -3783,7 +3846,7 @@ PostgresMain(int argc, char *argv[],
 	 * ... else we'd need to copy the Port data first.  Also, subsidiary data
 	 * such as the username isn't lost either; see ProcessStartupPacket().
 	 */
-	if (PostmasterContext)
+	if (PostmasterContext && SessionPoolSize == 0)
 	{
 		MemoryContextDelete(PostmasterContext);
 		PostmasterContext = NULL;
@@ -4069,6 +4132,120 @@ PostgresMain(int argc, char *argv[],
 
 			ReadyForQuery(whereToSendOutput);
 			send_ready_for_query = false;
+
+			if (SessionPoolSock != PGINVALID_SOCKET && !IsTransactionState() && pq_available_bytes() == 0)
+			{
+				WaitEvent ready_client;
+				if (SessionPool == NULL)
+				{
+					SessionPool = CreateWaitEventSet(TopMemoryContext, MaxSessions);
+					AddWaitEventToSet(SessionPool, WL_POSTMASTER_DEATH, PGINVALID_SOCKET, NULL, CurrentSession);
+					AddWaitEventToSet(SessionPool, WL_LATCH_SET, PGINVALID_SOCKET, MyLatch, CurrentSession);
+					AddWaitEventToSet(SessionPool, WL_SOCKET_READABLE, SessionPoolSock, NULL, CurrentSession);
+					AddWaitEventToSet(SessionPool, WL_SOCKET_READABLE, MyProcPort->sock, NULL, CurrentSession);
+				}
+			  ChooseSession:
+				DoingCommandRead = true;
+				if (WaitEventSetWait(SessionPool, -1, &ready_client, 1, PG_WAIT_CLIENT) != 1)
+				{
+					/* TODO: do some error recovery here */
+					elog(FATAL, "Failed to poll client sessions");
+				}
+				CHECK_FOR_INTERRUPTS();
+				DoingCommandRead = false;
+
+				if (ready_client.events & WL_POSTMASTER_DEATH)
+					ereport(FATAL,
+							(errcode(ERRCODE_ADMIN_SHUTDOWN),
+							 errmsg("terminating connection due to unexpected postmaster exit")));
+
+				if (ready_client.events & WL_LATCH_SET)
+				{
+					ResetLatch(MyLatch);
+					ProcessClientReadInterrupt(true);
+					goto ChooseSession;
+				}
+
+				if (ready_client.fd == SessionPoolSock)
+				{
+					int		 status;
+					SessionContext* session;
+					StringInfoData buf;
+					Port*    port;
+					pgsocket sock;
+					MemoryContext oldcontext;
+
+					sock = pg_recv_sock(SessionPoolSock);
+					if (sock < 0)
+						elog(FATAL, "Failed to receive session socket: %m");
+
+					session = (SessionContext*)malloc(sizeof(SessionContext));
+					session->memory = AllocSetContextCreate(TopMemoryContext,
+															"SessionMemoryContext",
+															ALLOCSET_DEFAULT_SIZES);
+					oldcontext = MemoryContextSwitchTo(session->memory);
+					port = palloc(sizeof(Port));
+					memcpy(port, BackendPort, sizeof(Port));
+
+					/*
+					 * Receive the startup packet (which might turn out to be a cancel request
+					 * packet).
+					 */
+					port->sock = sock;
+					session->port = port;
+					session->id = CreateSessionId();
+
+					MyProcPort = port;
+					status = ProcessStartupPacket(port, false, session->memory);
+					MemoryContextSwitchTo(oldcontext);
+
+					if (strcmp(port->database_name, MyProcPort->database_name) ||
+						strcmp(port->user_name, MyProcPort->user_name))
+					{
+						elog(FATAL, "Failed to open session (dbname=%s user=%s) in backend %d (dbname=%s user=%s)",
+							 port->database_name, port->user_name,
+							 MyProcPid, MyProcPort->database_name, MyProcPort->user_name);
+					}
+					else if (status == STATUS_OK)
+					{
+						elog(DEBUG2, "Start new session %d in backend %d for database %s user %s",
+							 sock, MyProcPid, port->database_name, port->user_name);
+						CurrentSession = session;
+						AddWaitEventToSet(SessionPool, WL_SOCKET_READABLE, sock, NULL, session);
+
+						SetCurrentStatementStartTimestamp();
+						StartTransactionCommand();
+						PerformAuthentication(MyProcPort);
+						CommitTransactionCommand();
+
+						BeginReportingGUCOptions();
+						/*
+						 * Send this backend's cancellation info to the frontend.
+						 */
+						pq_beginmessage(&buf, 'K');
+						pq_sendint32(&buf, (int32) MyProcPid);
+						pq_sendint32(&buf, (int32) MyCancelKey);
+						pq_endmessage(&buf);
+
+						/* Need not flush since ReadyForQuery will do it. */
+						send_ready_for_query = true;
+						continue;
+					}
+					else
+					{
+						DeleteSession(session);
+						elog(LOG, "Session startup failed");
+						close(sock);
+						goto ChooseSession;
+					}
+				}
+				else
+				{
+					elog(DEBUG2, "Switch to session %d in backend %d", ready_client.fd, MyProcPid);
+					CurrentSession = (SessionContext*)ready_client.user_data;
+					MyProcPort = CurrentSession->port;
+				}
+			}
 		}
 
 		/*
@@ -4350,6 +4527,29 @@ PostgresMain(int argc, char *argv[],
 				 * it will fail to be called during other backend-shutdown
 				 * scenarios.
 				 */
+				if (SessionPool)
+				{
+					DeleteWaitEventFromSet(SessionPool, MyProcPort->sock);
+					elog(DEBUG1, "Close session %d in backend %d", MyProcPort->sock, MyProcPid);
+
+					pq_getmsgend(&input_message);
+					if (pq_is_reading_msg())
+						pq_endmsgread();
+
+					close(MyProcPort->sock);
+					MyProcPort->sock = PGINVALID_SOCKET;
+					MyProcPort = NULL;
+
+					if (CurrentSession)
+					{
+						DropSessionPreparedStatements(CurrentSession->id);
+						DeleteSession(CurrentSession);
+						CurrentSession = NULL;
+					}
+					whereToSendOutput = DestRemote;
+					goto ChooseSession;
+				}
+				elog(DEBUG1, "Terminate backend %d", MyProcPid);
 				proc_exit(0);
 
 			case 'd':			/* copy data */
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 54fa4a3..b2f43a8 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -120,7 +120,9 @@ int			maintenance_work_mem = 16384;
  * register background workers.
  */
 int			NBuffers = 1000;
+int			SessionPoolSize = 0;
 int			MaxConnections = 90;
+int			MaxSessions = 1000;
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c
index f9b3309..571c80f 100644
--- a/src/backend/utils/init/postinit.c
+++ b/src/backend/utils/init/postinit.c
@@ -65,7 +65,7 @@
 
 static HeapTuple GetDatabaseTuple(const char *dbname);
 static HeapTuple GetDatabaseTupleByOid(Oid dboid);
-static void PerformAuthentication(Port *port);
+void PerformAuthentication(Port *port);
 static void CheckMyDatabase(const char *name, bool am_superuser);
 static void InitCommunication(void);
 static void ShutdownPostgres(int code, Datum arg);
@@ -180,7 +180,7 @@ GetDatabaseTupleByOid(Oid dboid)
  *
  * returns: nothing.  Will not return at all if there's any failure.
  */
-static void
+void
 PerformAuthentication(Port *port)
 {
 	/* This should be set already, but let's make sure */
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 72f6be3..02373a3 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -1871,6 +1871,26 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"max_sessions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the maximum number of client sessions."),
+			NULL
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"session_pool_size", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets number of backends serving client sessions."),
+			NULL
+		},
+		&SessionPoolSize,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
 			NULL
diff --git a/src/include/commands/prepare.h b/src/include/commands/prepare.h
index ffec029..cb5f8d4 100644
--- a/src/include/commands/prepare.h
+++ b/src/include/commands/prepare.h
@@ -56,5 +56,6 @@ extern TupleDesc FetchPreparedStatementResultDesc(PreparedStatement *stmt);
 extern List *FetchPreparedStatementTargetList(PreparedStatement *stmt);
 
 extern void DropAllPreparedStatements(void);
+extern void DropSessionPreparedStatements(char const* sessionId);
 
 #endif							/* PREPARE_H */
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 2e7725d..9169b21 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -71,6 +71,7 @@ extern int	pq_getbyte(void);
 extern int	pq_peekbyte(void);
 extern int	pq_getbyte_if_available(unsigned char *c);
 extern int	pq_putbytes(const char *s, size_t len);
+extern int  pq_available_bytes(void);
 
 /*
  * prototypes for functions in be-secure.c
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 54ee273..a9f9228 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -157,6 +157,8 @@ extern PGDLLIMPORT char *DataDir;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
 extern PGDLLIMPORT int max_worker_processes;
 extern int	max_parallel_workers;
 
@@ -420,6 +422,7 @@ extern void InitializeMaxBackends(void);
 extern void InitPostgres(const char *in_dbname, Oid dboid, const char *username,
 			 Oid useroid, char *out_dbname);
 extern void BaseInit(void);
+extern void PerformAuthentication(struct Port *port);
 
 /* in utils/init/miscinit.c */
 extern bool IgnoreSystemIndexes;
diff --git a/src/include/port.h b/src/include/port.h
index 3e528fa..c14a20d 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index 1877eef..c9527c9 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -62,6 +62,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+struct Port;
+extern int	ProcessStartupPacket(struct Port *port, bool SSLdone, MemoryContext memctx);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index a4bcb48..10f30d1 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -176,6 +176,8 @@ extern int WaitLatch(volatile Latch *latch, int wakeEvents, long timeout,
 extern int WaitLatchOrSocket(volatile Latch *latch, int wakeEvents,
 				  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, pgsocket fd);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/tcop/tcopprot.h b/src/include/tcop/tcopprot.h
index 63b4e48..191eeaa 100644
--- a/src/include/tcop/tcopprot.h
+++ b/src/include/tcop/tcopprot.h
@@ -34,6 +34,7 @@ extern CommandDest whereToSendOutput;
 extern PGDLLIMPORT const char *debug_query_string;
 extern int	max_stack_depth;
 extern int	PostAuthDelay;
+extern pgsocket SessionPoolSock;
 
 /* GUC-configurable parameters */
 
#7Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Claudio Freire (#4)
Re: Built-in connection pooling

On 18.01.2018 18:00, Claudio Freire wrote:

On Thu, Jan 18, 2018 at 11:48 AM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

Attached please find a new version of the patch with a few fixes,
and more results on a NUMA system with 144 cores and 3 TB of RAM.

Read-only pgbench (-S):

#Connections | Vanilla Postgres, kTPS | Session pool size 256, kTPS
-------------+------------------------+-----------------------------
1k           | 1300                   | 1505
10k          |  633                   | 1519
100k         |    -                   | 1425

Read-write contention test: access to small number of records with
1% of updates.

#Clients | Vanilla Postgres, TPS | Session pool size 256, TPS
---------+-----------------------+----------------------------
100      | 557232                | 573319
200      | 520395                | 551670
300      | 511423                | 533773
400      | 468562                | 523091
500      | 442268                | 514056
600      | 401860                | 526704
700      | 363912                | 530317
800      | 325148                | 512238
900      | 301310                | 512844
1000     | 278829                | 554516

So, as you can see, there is no degradation of performance as the number of connections grows when session pooling is used.

TBH, the tests you should be running are comparisons with a similar
pool size managed by pgbouncer, not just vanilla unlimited postgres.

Of course a limited pool size will beat thousands of concurrent
queries by a large margin. The real question is whether a
pthread-based approach beats the pgbouncer approach.

Below are the results with pgbouncer (all numbers are kTPS):

#Connections | Vanilla  | Built-in session | pgbouncer, transaction       | 10 pgbouncers,
             | Postgres | pool size 256    | pooling mode, pool size 256  | pool size 20 each
-------------+----------+------------------+------------------------------+-------------------
1k           | 1300     | 1505             | 105                          | 751
10k          |  633     | 1519             |  94                          | 664
100k         |    -     | 1425             |   -                          |   -

(A "-" here means that I failed to start that many connections, because
of "resource temporarily unavailable" and similar errors.)

So a single pgbouncer is ten times slower than a direct connection to
Postgres.
No surprise here: pgbouncer is single-threaded, and its CPU usage is
almost 100%.
So we have to launch several pgbouncer instances and somehow distribute
the load between them.
On Linux it is possible to use SO_REUSEPORT
(https://lwn.net/Articles/542629/) to balance load between several
pgbouncer instances, but that requires editing the pgbouncer code: it
doesn't support such a mode. So I started several pgbouncer instances on
different ports and explicitly distributed the pgbench instances between
them.
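
For reference, a minimal sketch (not from pgbouncer or the patch; the
function name is made up and error reporting is trimmed) of how a
listener could enable SO_REUSEPORT so that several independent processes
share one port and the kernel balances connections between them:

    #include <netinet/in.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /*
     * Create a listening TCP socket with SO_REUSEPORT set, so several
     * independent processes can bind the same port (Linux >= 3.9).
     */
    static int
    create_reuseport_listener(int port)
    {
        int     sock = socket(AF_INET, SOCK_STREAM, 0);
        int     one = 1;
        struct sockaddr_in addr;

        if (sock < 0)
            return -1;
        if (setsockopt(sock, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)) < 0)
        {
            close(sock);
            return -1;
        }
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons((uint16_t) port);
        if (bind(sock, (struct sockaddr *) &addr, sizeof(addr)) < 0 ||
            listen(sock, SOMAXCONN) < 0)
        {
            close(sock);
            return -1;
        }
        return sock;
    }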

But even in this case performance is half that of a direct connection
with built-in session pooling. This is because of the lack of prepared
statements, which I cannot use with pgbouncer in statement/transaction
pooling mode.

Also please notice that with session pooling performance is better than
with vanilla Postgres. This is because session pooling lets us open more
connections without launching more backends. It is especially noticeable
on my local desktop with 4 cores: for normal Postgres the optimal number
of connections is about 10, but with session pooling 100 connections show
an about 30% better result.

So, summarizing all of the above:

1. pgbouncer does not allow the use of prepared statements, which by
itself can cost up to a twofold performance penalty.
2. pgbouncer is single-threaded and cannot efficiently handle more than
about 1k connections.
3. pgbouncer can never provide better performance than an application
connected directly to Postgres with the optimal number of connections. In
contrast, built-in session pooling can provide better performance than
vanilla Postgres with its optimal number of connections.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#8Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Konstantin Knizhnik (#6)
Re: Built-in connection pooling

On 01/19/2018 10:52 AM, Konstantin Knizhnik wrote:

On 18.01.2018 18:02, Tomas Vondra wrote:

Hi Konstantin,

On 01/18/2018 03:48 PM, Konstantin Knizhnik wrote:

On 17.01.2018 19:09, Konstantin Knizhnik wrote:

Hi hackers,

...

I haven't looked at the code yet, but after reading your message I have
a simple question - how is this going to work with SSL? If you're only
passing a file descriptor, that does not seem to be sufficient for the
backends to do crypto (that requires the SSL stuff from Port).

Maybe I'm missing something and it already works, though ...

regards

Ooops, I missed this aspect with SSL. Thank you.
A new version of the patch, which correctly maintains session context, is
attached.
Now each session has its own allocator, which should be used instead of
TopMemoryContext. SSL connections work now.

OK. I've looked at the code, but I have a rather hard time understanding
it, because there are pretty much no comments explaining the intent of
the added code :-( I strongly suggest improving that, to help reviewers.

The questions I'm asking myself are mostly these:

1) When assigning a backend, we first try to get one from a pool, which
happens right at the beginning of BackendStartup. If we find a usable
backend, we send the info to the backend (pg_send_sock/pg_recv_sock).

But AFAICS this only happens at connection time, right? But in your
initial message you say "Rescheduling is done at transaction level,"
which in my understanding means "transaction pooling". So, how does that
part work?

2) How does this deal with backends for different databases? I don't
see any checks that the requested database matches the backend database
(nor any code switching the backend from one db to another - which would
be very tricky, I think).

3) Is there any sort of shrinking the pools? I mean, if the backend is
idle for a certain period of time (or when we need backends for other
databases), does it get closed automatically?

Furthermore, I'm rather confused about the meaning of session_pool_size.
I mean, that GUC determines the number of backends in the pool, it has
nothing to do with sessions per se, right? Which would mean it's a bit
misleading to name it "session_..." (particularly if the pooling happens
at transaction level, not session level - which is question #1).

When I've been thinking about adding a built-in connection pool, my
rough plan was mostly "bgworker doing something like pgbouncer" (that
is, listening on a separate port and proxying everything to regular
backends). Obviously, that has pros and cons, and probably would not
serve the threading use case well.

But it would have some features that I find valuable - for example, it's
trivial to decide which connection requests may or may not be served
from a pool (by connection to the main port or pool port).

That is not to say the bgworker approach is better than what you're
proposing, but I wonder if that would be possible with your approach.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#9Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Tomas Vondra (#8)
Re: Built-in connection pooling

On 19.01.2018 18:53, Tomas Vondra wrote:

On 01/19/2018 10:52 AM, Konstantin Knizhnik wrote:

On 18.01.2018 18:02, Tomas Vondra wrote:

Hi Konstantin,

On 01/18/2018 03:48 PM, Konstantin Knizhnik wrote:

On 17.01.2018 19:09, Konstantin Knizhnik wrote:

Hi hackers,

...

I haven't looked at the code yet, but after reading your message I have
a simple question - how is this going to work with SSL? If you're only
passing a file descriptor, that does not seem to be sufficient for the
backends to do crypto (that requires the SSL stuff from Port).

Maybe I'm missing something and it already works, though ...

regards

Ooops, I missed this aspect with SSL. Thank you.
A new version of the patch, which correctly maintains session context, is
attached.
Now each session has its own allocator, which should be used instead of
TopMemoryContext. SSL connections work now.

OK. I've looked at the code, but I have a rather hard time understanding
it, because there are pretty much no comments explaining the intent of
the added code :-( I strongly suggest improving that, to help reviewers.

Sorry, sorry, sorry...
There are some comments and I will add more.

The questions I'm asking myself are mostly these:

1) When assigning a backend, we first try to get one from a pool, which
happens right at the beginning of BackendStartup. If we find a usable
backend, we send the info to the backend (pg_send_sock/pg_recv_sock).

But AFAICS this only happens at connection time, right? But in your
initial message you say "Rescheduling is done at transaction level,"
which in my understanding means "transaction pooling". So, how does that
part work?

Here it is:

              ChooseSession:
                DoingCommandRead = true;
                /* Select which client session is ready to send a new query */
                if (WaitEventSetWait(SessionPool, -1, &ready_client, 1,
                                     PG_WAIT_CLIENT) != 1)
                ...
                if (ready_client.fd == SessionPoolSock)
                {
                    /* Here we handle the case of attaching a new session */
                    ...
                }
                else
                {
                    /* ... and here a query (new transaction) arrived from some client */
                    elog(DEBUG2, "Switch to session %d in backend %d",
                         ready_client.fd, MyProcPid);
                    CurrentSession = (SessionContext *) ready_client.user_data;
                    MyProcPort = CurrentSession->port;
                }

2) How does this deal with backends for different databases? I don't
see any checks that the requested database matches the backend database
(nor any code switching the backend from one db to another - which would
be very tricky, I think).

As I wrote in the initial mail, this problem is not handled now.
It is expected that all clients are connected to the same database using
the same user. I only check and report an error if this assumption is
violated.
Definitely it should be fixed, and it is one of the main challenges with
this approach! I want to receive some advice from the community about the
best way of solving it.
The problem is that we get the information about database/user in the
ProcessStartupPacket function in the backend, when the session is already
assigned to a particular backend.
We either have to somehow redirect the session to some other backend
(somehow notify the postmaster that we are not able to handle it), or
obtain the database/user name in the postmaster. But the latter means
that ProcessStartupPacket would have to be called in the postmaster, and
the postmaster would have to read from the client's socket.
I am afraid the postmaster could become a bottleneck in this case.
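
Just to make the second alternative concrete, here is a rough sketch
(not part of the patch; the helper name and fixed buffer size are made
up, and SSL negotiation and cancel requests are ignored) of how the
postmaster could peek at a v3 startup packet without consuming it, to
learn the target database before dispatching the socket:

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /*
     * Peek at a v3 startup packet: a 4-byte length (including itself),
     * a 4-byte protocol version, then NUL-terminated key/value pairs
     * terminated by an extra NUL.  Copies out the "database" parameter.
     */
    static int
    peek_startup_database(int sock, char *dbname, size_t dbname_size)
    {
        char        buf[8192];
        ssize_t     n = recv(sock, buf, sizeof(buf), MSG_PEEK);
        int32_t     len;
        char       *p, *end;

        if (n < 8)
            return -1;
        memcpy(&len, buf, 4);
        len = ntohl(len);
        if (len > n)                /* packet not fully buffered yet */
            return -1;

        p = buf + 8;                /* skip length and protocol version */
        end = buf + len;
        while (p < end && *p != '\0')
        {
            char   *key = p;
            char   *val;

            p += strlen(key) + 1;
            if (p >= end)
                return -1;
            val = p;
            p += strlen(val) + 1;
            if (strcmp(key, "database") == 0)
            {
                strncpy(dbname, val, dbname_size - 1);
                dbname[dbname_size - 1] = '\0';
                return 0;
            }
        }
        return -1;                  /* no database parameter found */
    }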

The problem could be solved much more easily with the pthread version
of Postgres: reassigning a session to another executor (thread) is much
simpler there, and there is no need for the unportable trick of passing
a file descriptor to another process. In the future I am going to
combine the two. The problem is that the pthread version of Postgres is
still in a very raw state.
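
For completeness, a minimal sketch of that descriptor-passing trick on
Unix, roughly the shape pg_send_sock/pg_recv_sock in the patch would
have to take (an assumed implementation, with error handling trimmed):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Send an open descriptor over a Unix-domain socket via SCM_RIGHTS. */
    static int
    send_sock_sketch(int chan, int sock)
    {
        struct msghdr msg;
        struct iovec iov;
        char    dummy = 'S';
        char    cbuf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr *cmsg;

        memset(&msg, 0, sizeof(msg));
        iov.iov_base = &dummy;          /* must carry at least one data byte */
        iov.iov_len = 1;
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = cbuf;
        msg.msg_controllen = sizeof(cbuf);
        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;   /* pass an open descriptor */
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &sock, sizeof(int));
        return sendmsg(chan, &msg, 0) < 0 ? -1 : 0;
    }

    /* Receive a descriptor; it gets a fresh number in this process. */
    static int
    recv_sock_sketch(int chan)
    {
        struct msghdr msg;
        struct iovec iov;
        char    dummy;
        char    cbuf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr *cmsg;
        int     sock;

        memset(&msg, 0, sizeof(msg));
        iov.iov_base = &dummy;
        iov.iov_len = 1;
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = cbuf;
        msg.msg_controllen = sizeof(cbuf);
        if (recvmsg(chan, &msg, 0) < 0)
            return -1;
        cmsg = CMSG_FIRSTHDR(&msg);
        if (cmsg == NULL || cmsg->cmsg_type != SCM_RIGHTS)
            return -1;
        memcpy(&sock, CMSG_DATA(cmsg), sizeof(int));
        return sock;
    }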

3) Is there any sort of shrinking the pools? I mean, if the backend is
idle for a certain period of time (or when we need backends for other
databases), does it get closed automatically?

When a client disconnects, the client session is closed, but the backend
is not terminated even if there are no more sessions at this backend.
This was done intentionally, to avoid permanent spawning of new processes
when there are one or a few clients which frequently connect to and
disconnect from the database.

Furthermore, I'm rather confused about the meaning of session_pool_size.
I mean, that GUC determines the number of backends in the pool, it has
nothing to do with sessions per se, right? Which would mean it's a bit
misleading to name it "session_..." (particularly if the pooling happens
at transaction level, not session level - which is question #1).

Yeah, it is not the right name. It means the maximal number of backends
which should be used to serve client sessions. But "max backends" is
already taken and has a completely different meaning.

When I've been thinking about adding a built-in connection pool, my
rough plan was mostly "bgworker doing something like pgbouncer" (that
is, listening on a separate port and proxying everything to regular
backends). Obviously, that has pros and cons, and probably would not
serve the threading use case well.

And we will get the same problem as with pgbouncer: one process will
not be able to handle all connections...
Certainly it is possible to start several such scheduling bgworkers...
But in any case it is more efficient to multiplex sessions in the
backends themselves.

But it would have some features that I find valuable - for example, it's
trivial to decide which connection requests may or may not be served
from a pool (by connection to the main port or pool port).

That is not to say the bgworker approach is better than what you're
proposing, but I wonder if that would be possible with your approach.

regards

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#10Pavel Stehule
pavel.stehule@gmail.com
In reply to: Konstantin Knizhnik (#9)
Re: Built-in connection pooling

When I've been thinking about adding a built-in connection pool, my

rough plan was mostly "bgworker doing something like pgbouncer" (that
is, listening on a separate port and proxying everything to regular
backends). Obviously, that has pros and cons, and probably would not
serve the threading use case well.

And we will get the same problem as with pgbouncer: one process will not
be able to handle all connections...
Certainly it is possible to start several such scheduling bgworkers... But
in any case it is more efficient to multiplex sessions in the backends themselves.

pgbouncer holds the client connection the whole time. When we implement
the listeners, then all work can be done by the worker processes, not by
the listeners.

Regards

Pavel


But it would have some features that I find valuable - for example, it's

trivial to decide which connection requests may or may not be served
from a pool (by connection to the main port or pool port).

That is not to say the bgworker approach is better than what you're
proposing, but I wonder if that would be possible with your approach.

regards

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#11Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Pavel Stehule (#10)
Re: Built-in connection pooling

On 19.01.2018 19:28, Pavel Stehule wrote:

When I've been thinking about adding a built-in connection pool, my
rough plan was mostly "bgworker doing something like pgbouncer" (that
is, listening on a separate port and proxying everything to regular
backends). Obviously, that has pros and cons, and probably would not
serve the threading use case well.

And we will get the same problem as with pgbouncer: one process will
not be able to handle all connections...
Certainly it is possible to start several such scheduling bgworkers...
But in any case it is more efficient to multiplex sessions in the
backends themselves.

pgbouncer holds the client connection the whole time. When we implement
the listeners, then all work can be done by the worker processes, not by
the listeners.

Sorry, I do not understand your point.
In my case pgbench establishes a connection to pgbouncer only once, at
the beginning of the test.
And pgbouncer spends all its time in context switches (CPU usage is 100%,
mostly in kernel space: the top entries in the profile are kernel
functions). The picture will be the same if you do such scheduling in one
bgworker instead of pgbouncer.
Modern systems are not able to perform more than several hundred thousand
context switches per second, so with a single multiplexing thread or
process you cannot get more than about 100k TPS, while on a powerful NUMA
system it is possible to achieve millions of TPS.
It is illustrated by the results I sent in the previous mail: by spawning
10 instances of pgbouncer I was able to achieve 7 times higher speed.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#12Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Konstantin Knizhnik (#9)
Re: Built-in connection pooling

On 01/19/2018 05:17 PM, Konstantin Knizhnik wrote:

On 19.01.2018 18:53, Tomas Vondra wrote:

...

The questions I'm asking myself are mostly these:

1) When assigning a backend, we first try to get one from a pool, which
happens right at the beginning of BackendStartup. If we find a usable
backend, we send the info to the backend (pg_send_sock/pg_recv_sock).

But AFAICS this only happens at connection time, right? But in your
initial message you say "Rescheduling is done at transaction level,"
which in my understanding means "transaction pooling". So, how does that
part work?

Here it is:

              ChooseSession:
...

OK, thanks.

2) How does this deal with backends for different databases? I
don't see any checks that the requested database matches the
backend database (nor any code switching the backend from one db to
another - which would be very tricky, I think).

As I wrote in the initial mail, this problem is not handled now.
It is expected that all clients are connected to the same database using
the same user. I only check and report an error if this assumption is
violated.
Definitely it should be fixed, and it is one of the main challenges with
this approach! I want to receive some advice from the community about the
best way of solving it.
The problem is that we get the information about database/user in the
ProcessStartupPacket function in the backend, when the session is already
assigned to a particular backend.
We either have to somehow redirect the session to some other backend
(somehow notify the postmaster that we are not able to handle it), or
obtain the database/user name in the postmaster. But the latter means
that ProcessStartupPacket would have to be called in the postmaster, and
the postmaster would have to read from the client's socket.
I am afraid the postmaster could become a bottleneck in this case.

Hmmm, that's unfortunate. I guess you'll have to process the startup
packet in the main process, before it gets forked. At least partially.

The problem could be solved much more easily with the pthread version
of Postgres: reassigning a session to another executor (thread) is much
simpler there, and there is no need for the unportable trick of passing
a file descriptor to another process. In the future I am going to
combine the two. The problem is that the pthread version of Postgres is
still in a very raw state.

Yeah. Unfortunately, we're using processes now, and switching to threads
will take time (assuming it happens at all).

3) Is there any sort of shrinking the pools? I mean, if the backend is
idle for a certain period of time (or when we need backends for other
databases), does it get closed automatically?

When a client disconnects, the client session is closed, but the backend
is not terminated even if there are no more sessions at this backend.
This was done intentionally, to avoid permanent spawning of new processes
when there are one or a few clients which frequently connect to and
disconnect from the database.

Sure, but it means a short peak will exhaust the backends indefinitely.
That's acceptable for a PoC, but I think it needs to be fixed eventually.

Furthermore, I'm rather confused about the meaning of session_pool_size.
I mean, that GUC determines the number of backends in the pool, it has
nothing to do with sessions per se, right? Which would mean it's a bit
misleading to name it "session_..." (particularly if the pooling happens
at transaction level, not session level - which is question #1).

Yeah, it is not the right name. It means the maximal number of backends
which should be used to serve client sessions. But "max backends" is
already taken and has a completely different meaning.

When I've been thinking about adding a built-in connection pool, my
rough plan was mostly "bgworker doing something like pgbouncer" (that
is, listening on a separate port and proxying everything to regular
backends). Obviously, that has pros and cons, and probably would not
serve the threading use case well.

And we will get the same problem as with pgbouncer: one process will not
be able to handle all connections...
Certainly it is possible to start several such scheduling bgworkers...
But in any case it is more efficient to multiplex sessions in the
backends themselves.

Well, I haven't said it has to be single-threaded like pgbouncer. I
don't see why the bgworker could not use multiple threads internally (of
course, it'd need to be careful not to mess with the stuff that is not
thread-safe).

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#13Pavel Stehule
pavel.stehule@gmail.com
In reply to: Konstantin Knizhnik (#11)
Re: Built-in connection pooling

2018-01-19 17:53 GMT+01:00 Konstantin Knizhnik <k.knizhnik@postgrespro.ru>:

On 19.01.2018 19:28, Pavel Stehule wrote:

When I've been thinking about adding a built-in connection pool, my

rough plan was mostly "bgworker doing something like pgbouncer" (that
is, listening on a separate port and proxying everything to regular
backends). Obviously, that has pros and cons, and probably would not
serve the threading use case well.

And we will get the same problem as with pgbouncer: one process will not
be able to handle all connections...
Certainly it is possible to start several such scheduling bgworkers...
But in any case it is more efficient to multiplex sessions in the
backends themselves.

pgbouncer holds the client connection the whole time. When we implement
the listeners, then all work can be done by the worker processes, not by
the listeners.

Sorry, I do not understand your point.
In my case pgbench establishes a connection to pgbouncer only once, at
the beginning of the test.
And pgbouncer spends all its time in context switches (CPU usage is 100%,
mostly in kernel space: the top entries in the profile are kernel
functions). The picture will be the same if you do such scheduling in one
bgworker instead of pgbouncer.
Modern systems are not able to perform more than several hundred thousand
context switches per second, so with a single multiplexing thread or
process you cannot get more than about 100k TPS, while on a powerful NUMA
system it is possible to achieve millions of TPS.
It is illustrated by the results I sent in the previous mail: by spawning
10 instances of pgbouncer I was able to achieve 7 times higher speed.

pgbouncer is proxy software. I don't think a native pooler should be a
proxy too. So comparing pgbouncer with a hypothetical native pooler is
not fair, because pgbouncer passes through all of the communication.

Regards

Pavel


--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#14Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Konstantin Knizhnik (#11)
Re: Built-in connection pooling

On 01/19/2018 05:53 PM, Konstantin Knizhnik wrote:

On 19.01.2018 19:28, Pavel Stehule wrote:

When I've been thinking about adding a built-in connection pool, my
rough plan was mostly "bgworker doing something like pgbouncer" (that
is, listening on a separate port and proxying everything to regular
backends). Obviously, that has pros and cons, and probably would not
serve the threading use case well.

And we will get the same problem as with pgbouncer: one process will
not be able to handle all connections...
Certainly it is possible to start several such scheduling bgworkers...
But in any case it is more efficient to multiplex sessions in the
backends themselves.

pgbouncer holds the client connection the whole time. When we implement
the listeners, then all work can be done by the worker processes, not by
the listeners.

Sorry, I do not understand your point.
In my case pgbench establishes a connection to pgbouncer only once, at
the beginning of the test.
And pgbouncer spends all its time in context switches (CPU usage is 100%,
mostly in kernel space: the top entries in the profile are kernel
functions). The picture will be the same if you do such scheduling in one
bgworker instead of pgbouncer.
Modern systems are not able to perform more than several hundred thousand
context switches per second, so with a single multiplexing thread or
process you cannot get more than about 100k TPS, while on a powerful NUMA
system it is possible to achieve millions of TPS.
It is illustrated by the results I sent in the previous mail: by spawning
10 instances of pgbouncer I was able to achieve 7 times higher speed.

AFAICS making pgbouncer multi-threaded would not be hugely complicated.
A simple solution would be a fixed number of worker threads, and client
connections randomly assigned to them.

But this generally is not a common bottleneck in practical workloads (of
course, YMMV).

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#15Claudio Freire
klaussfreire@gmail.com
In reply to: Konstantin Knizhnik (#11)
Re: Built-in connection pooling

On Fri, Jan 19, 2018 at 1:53 PM, Konstantin Knizhnik <
k.knizhnik@postgrespro.ru> wrote:

On 19.01.2018 19:28, Pavel Stehule wrote:

When I've been thinking about adding a built-in connection pool, my

rough plan was mostly "bgworker doing something like pgbouncer" (that
is, listening on a separate port and proxying everything to regular
backends). Obviously, that has pros and cons, and probably would not
serve the threading use case well.

And we will get the same problem as with pgbouncer: one process will not
be able to handle all connections...
Certainly it is possible to start several such scheduling bgworkers...
But in any case it is more efficient to multiplex sessions in the
backends themselves.

pgbouncer holds the client connection the whole time. When we implement
the listeners, then all work can be done by the worker processes, not by
the listeners.

Sorry, I do not understand your point.
In my case pgbench establishes a connection to pgbouncer only once, at
the beginning of the test.
And pgbouncer spends all its time in context switches (CPU usage is 100%,
mostly in kernel space: the top entries in the profile are kernel
functions). The picture will be the same if you do such scheduling in one
bgworker instead of pgbouncer.
Modern systems are not able to perform more than several hundred thousand
context switches per second, so with a single multiplexing thread or
process you cannot get more than about 100k TPS, while on a powerful NUMA
system it is possible to achieve millions of TPS.
It is illustrated by the results I sent in the previous mail: by spawning
10 instances of pgbouncer I was able to achieve 7 times higher speed.

I'm sure pgbouncer can be improved. I've seen async code handle millions of
packets per second (zmq), pgbouncer shouldn't be radically different.

#16Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Claudio Freire (#15)
Re: Built-in connection pooling

On 01/19/2018 06:03 PM, Claudio Freire wrote:

On Fri, Jan 19, 2018 at 1:53 PM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

On 19.01.2018 19:28, Pavel Stehule wrote:

When I've been thinking about adding a built-in connection pool, my
rough plan was mostly "bgworker doing something like pgbouncer" (that
is, listening on a separate port and proxying everything to regular
backends). Obviously, that has pros and cons, and probably would not
serve the threading use case well.

And we will get the same problem as with pgbouncer: one process will
not be able to handle all connections...
Certainly it is possible to start several such scheduling bgworkers...
But in any case it is more efficient to multiplex sessions in the
backends themselves.

pgbouncer holds the client connection the whole time. When we implement
the listeners, then all work can be done by the worker processes, not by
the listeners.

Sorry, I do not understand your point.
In my case pgbench establishes a connection to pgbouncer only once, at
the beginning of the test.
And pgbouncer spends all its time in context switches (CPU usage is 100%,
mostly in kernel space: the top entries in the profile are kernel
functions). The picture will be the same if you do such scheduling in one
bgworker instead of pgbouncer.
Modern systems are not able to perform more than several hundred thousand
context switches per second, so with a single multiplexing thread or
process you cannot get more than about 100k TPS, while on a powerful NUMA
system it is possible to achieve millions of TPS.
It is illustrated by the results I sent in the previous mail: by spawning
10 instances of pgbouncer I was able to achieve 7 times higher speed.

I'm sure pgbouncer can be improved. I've seen async code handle millions
of packets per second (zmq), pgbouncer shouldn't be radically different.

The trouble is pgbouncer is not handling individual packets. It needs to
do additional processing to assemble the messages, understand the state
of the connection (e.g. to do transaction pooling) etc. Or handle SSL.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#17Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Tomas Vondra (#12)
Re: Built-in connection pooling

On 19.01.2018 19:59, Tomas Vondra wrote:

The problem could be solved much more easily with the pthread version
of Postgres: reassigning a session to another executor (thread) is much
simpler there, and there is no need for the unportable trick of passing
a file descriptor to another process. In the future I am going to
combine the two. The problem is that the pthread version of Postgres is
still in a very raw state.

Yeah. Unfortunately, we're using processes now, and switching to threads
will take time (assuming it happens at all).

I have to agree with you.

3) Is there any sort of shrinking the pools? I mean, if the backend is
idle for a certain period of time (or when we need backends for other
databases), does it get closed automatically?

When a client disconnects, the client session is closed, but the backend
is not terminated even if there are no more sessions at this backend.
This was done intentionally, to avoid permanent spawning of new processes
when there are one or a few clients which frequently connect to and
disconnect from the database.

Sure, but it means a short peak will exhaust the backends indefinitely.
That's acceptable for a PoC, but I think it needs to be fixed eventually.

Sorry, I do not understand this.
You specify the size of the backend pool which will serve client
sessions. The size of this pool is chosen to provide the best performance
on the particular system and workload.
So the number of backends will never exceed this optimal value, even in
the case of a "short peak".
From my point of view, terminating backends when there are no active
sessions is the wrong idea in any case; it was not a temporary decision
just for the PoC.

Well, I haven't said it has to be single-threaded like pgbouncer. I
don't see why the bgworker could not use multiple threads internally (of
course, it'd need to be careful not to mess with the stuff that is not
thread-safe).

Certainly an architecture with N scheduling bgworkers and M executors
(backends) may be more flexible than a solution where scheduling is done
in the executor itself. But we will have to pay an extra cost for the
redirection, and I am not sure that it will ultimately allow us to reach
better performance. A more flexible solution in many cases doesn't mean
a more efficient solution.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#18Claudio Freire
klaussfreire@gmail.com
In reply to: Tomas Vondra (#16)
Re: Built-in connection pooling

On Fri, Jan 19, 2018 at 2:06 PM, Tomas Vondra <tomas.vondra@2ndquadrant.com>
wrote:

On 01/19/2018 06:03 PM, Claudio Freire wrote:

On Fri, Jan 19, 2018 at 1:53 PM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

On 19.01.2018 19:28, Pavel Stehule wrote:

When I've been thinking about adding a built-in connection pool, my
rough plan was mostly "bgworker doing something like pgbouncer" (that
is, listening on a separate port and proxying everything to regular
backends). Obviously, that has pros and cons, and probably would not
serve the threading use case well.

And we will get the same problem as with pgbouncer: one process will
not be able to handle all connections...
Certainly it is possible to start several such scheduling bgworkers...
But in any case it is more efficient to multiplex sessions in the
backends themselves.

pgbouncer holds the client connection the whole time. When we implement
the listeners, then all work can be done by the worker processes, not by
the listeners.

Sorry, I do not understand your point.
In my case pgbench establishes a connection to pgbouncer only once, at
the beginning of the test.
And pgbouncer spends all its time in context switches (CPU usage is 100%,
mostly in kernel space: the top entries in the profile are kernel
functions). The picture will be the same if you do such scheduling in one
bgworker instead of pgbouncer.
Modern systems are not able to perform more than several hundred thousand
context switches per second, so with a single multiplexing thread or
process you cannot get more than about 100k TPS, while on a powerful NUMA
system it is possible to achieve millions of TPS.
It is illustrated by the results I sent in the previous mail: by spawning
10 instances of pgbouncer I was able to achieve 7 times higher speed.

I'm sure pgbouncer can be improved. I've seen async code handle millions
of packets per second (zmq), pgbouncer shouldn't be radically different.

The trouble is pgbouncer is not handling individual packets. It needs to
do additional processing to assemble the messages, understand the state
of the connection (e.g. to do transaction pooling) etc. Or handle SSL.

I understand. But zmq also has to process framing very similar to the fe
protocol, so I'm still hopeful.

#19Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Pavel Stehule (#13)
Re: Built-in connection pooling

On 19.01.2018 20:01, Pavel Stehule wrote:

2018-01-19 17:53 GMT+01:00 Konstantin Knizhnik <k.knizhnik@postgrespro.ru>:

On 19.01.2018 19:28, Pavel Stehule wrote:

When I've been thinking about adding a built-in connection pool, my
rough plan was mostly "bgworker doing something like pgbouncer" (that
is, listening on a separate port and proxying everything to regular
backends). Obviously, that has pros and cons, and probably would not
serve the threading use case well.

And we will get the same problem as with pgbouncer: one process will
not be able to handle all connections...
Certainly it is possible to start several such scheduling bgworkers...
But in any case it is more efficient to multiplex sessions in the
backends themselves.

pgbouncer holds the client connection the whole time. When we implement
the listeners, then all work can be done by the worker processes, not by
the listeners.

Sorry, I do not understand your point.
In my case pgbench establishes a connection to pgbouncer only once, at
the beginning of the test.
And pgbouncer spends all its time in context switches (CPU usage is 100%,
mostly in kernel space: the top entries in the profile are kernel
functions). The picture will be the same if you do such scheduling in one
bgworker instead of pgbouncer.
Modern systems are not able to perform more than several hundred thousand
context switches per second, so with a single multiplexing thread or
process you cannot get more than about 100k TPS, while on a powerful NUMA
system it is possible to achieve millions of TPS.
It is illustrated by the results I sent in the previous mail: by spawning
10 instances of pgbouncer I was able to achieve 7 times higher speed.

pgbouncer is proxy software. I don't think a native pooler should be a
proxy too. So comparing pgbouncer with a hypothetical native pooler is
not fair, because pgbouncer passes through all of the communication.

If we have separate scheduling bgworker(s), as Tomas proposed, then in
any case we will have to do some kind of redirection.
It can be done more efficiently than with Unix sockets (as in the case
of a locally installed pgbouncer), but even if we use a shared memory
queue, performance will be comparable and limited by the number of
context switches. It is possible to increase throughput by combining
several requests into one parcel, but that complicates the communication
protocol between clients, scheduling proxies, and executors even more.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#20Claudio Freire
klaussfreire@gmail.com
In reply to: Konstantin Knizhnik (#17)
Re: Built-in connection pooling

On Fri, Jan 19, 2018 at 2:07 PM, Konstantin Knizhnik <
k.knizhnik@postgrespro.ru> wrote:

Well, I haven't said it has to be single-threaded like pgbouncer. I
don't see why the bgworker could not use multiple threads internally (of
course, it'd need to be careful not to mess with the stuff that is not
thread-safe).

Certainly an architecture with N scheduling bgworkers and M executors
(backends) may be more flexible than a solution where scheduling is done
in the executor itself. But we will have to pay an extra cost for the
redirection, and I am not sure that it will ultimately allow us to reach
better performance. A more flexible solution in many cases doesn't mean
a more efficient solution.

I think you can take the best of both worlds.

You can take your approach of passing around fds, and build a "load
balancing protocol" in a bgworker.

The postmaster sends the socket to the bgworker, the bgworker waits for a
command as pgbouncer does, but instead of proxying everything, when
commands arrive, it passes the socket to a backend to handle.

That way, the bgworker can do what pgbouncer does, handle different pooling
modes, match backends to databases, etc, but it doesn't have to proxy all
data, it just delegates handling of a command to a backend, and forgets
about that socket.

Sounds like it could work.
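
To make the shape of that idea concrete, here is a rough sketch of such
a dispatch loop (purely illustrative: it assumes an SCM_RIGHTS helper
like the one sketched earlier and glosses over backend selection,
pooling modes, and error handling):

    #include <poll.h>

    extern int send_sock_sketch(int chan, int sock);    /* SCM_RIGHTS sender */

    /*
     * Watch idle client sockets; as soon as a client has bytes to read
     * (i.e. a new command arrived), hand its socket to a backend and
     * forget it until the backend gives it back.
     */
    static void
    dispatcher_loop(struct pollfd *clients, int nclients, int backend_chan)
    {
        for (;;)
        {
            int     i;

            if (poll(clients, nclients, -1) < 0)
                break;
            for (i = 0; i < nclients; i++)
            {
                if (clients[i].revents & POLLIN)
                {
                    /* delegate the whole command/transaction to a backend */
                    send_sock_sketch(backend_chan, clients[i].fd);
                    /* negative fds are ignored by poll() until reactivated */
                    clients[i].fd = -clients[i].fd;
                }
            }
        }
    }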

#21Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Claudio Freire (#15)
Re: Built-in connection pooling

On 19.01.2018 20:03, Claudio Freire wrote:

On Fri, Jan 19, 2018 at 1:53 PM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

On 19.01.2018 19:28, Pavel Stehule wrote:

When I've been thinking about adding a built-in connection pool, my
rough plan was mostly "bgworker doing something like pgbouncer" (that
is, listening on a separate port and proxying everything to regular
backends). Obviously, that has pros and cons, and probably would not
serve the threading use case well.

And we will get the same problem as with pgbouncer: one process will
not be able to handle all connections...
Certainly it is possible to start several such scheduling bgworkers...
But in any case it is more efficient to multiplex sessions in the
backends themselves.

pgbouncer holds the client connection the whole time. When we implement
the listeners, then all work can be done by the worker processes, not by
the listeners.

Sorry, I do not understand your point.
In my case pgbench establishes a connection to pgbouncer only once, at
the beginning of the test.
And pgbouncer spends all its time in context switches (CPU usage is 100%,
mostly in kernel space: the top entries in the profile are kernel
functions). The picture will be the same if you do such scheduling in one
bgworker instead of pgbouncer.
Modern systems are not able to perform more than several hundred thousand
context switches per second, so with a single multiplexing thread or
process you cannot get more than about 100k TPS, while on a powerful NUMA
system it is possible to achieve millions of TPS.
It is illustrated by the results I sent in the previous mail: by spawning
10 instances of pgbouncer I was able to achieve 7 times higher speed.

I'm sure pgbouncer can be improved. I've seen async code handle
millions of packets per second (zmq), pgbouncer shouldn't be radically
different.

With pgbouncer you will never be able to use prepared statements, and
their absence slows down simple queries almost twofold (unless my patch
with autoprepared statements is committed).

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#22Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Konstantin Knizhnik (#17)
Re: Built-in connection pooling

On 01/19/2018 06:07 PM, Konstantin Knizhnik wrote:

...

3) Is there any sort of shrinking the pools? I mean, if the backend is
idle for a certain period of time (or when we need backends for other
databases), does it get closed automatically?

When a client disconnects, the client session is closed, but the backend
is not terminated even if there are no more sessions at this backend.
This was done intentionally, to avoid permanent spawning of new processes
when there are one or a few clients which frequently connect to and
disconnect from the database.

Sure, but it means a short peak will exhaust the backends indefinitely.
That's acceptable for a PoC, but I think it needs to be fixed eventually.

Sorry, I do not understand this.
You specify the size of the backend pool which will serve client
sessions. The size of this pool is chosen to provide the best performance
on the particular system and workload.
So the number of backends will never exceed this optimal value, even in
the case of a "short peak".
From my point of view, terminating backends when there are no active
sessions is the wrong idea in any case; it was not a temporary decision
just for the PoC.

That is probably true when there is just a single pool (for one
database/user). But when there are multiple such pools, it forces you to
keep the sum(pool_size) below max_connections. Which seems strange.

I do think the ability to evict backends after some timeout, or when
there is pressure in other pools (different user/database), is rather useful.

Well, I haven't said it has to be single-threaded like pgbouncer. I
don't see why the bgworker could not use multiple threads internally (of
course, it'd need to be careful not to mess with the stuff that is not
thread-safe).

Certainly an architecture with N scheduling bgworkers and M executors
(backends) may be more flexible than a solution where scheduling is done
in the executor itself. But we will have to pay an extra cost for the
redirection, and I am not sure that it will ultimately allow us to reach
better performance. A more flexible solution in many cases doesn't mean
a more efficient solution.

Sure, I wasn't really suggesting it's a clear win. I was responding to
your argument that pgbouncer in some cases reaches 100% CPU utilization
- that can be mitigated to a large extent by adding threads. Of course,
the cost of the extra level of indirection is not zero.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#23Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Claudio Freire (#20)
Re: Built-in connection pooling

On 01/19/2018 06:13 PM, Claudio Freire wrote:

On Fri, Jan 19, 2018 at 2:07 PM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

Well, I haven't said it has to be single-threaded like pgbouncer. I
don't see why the bgworker could not use multiple threads internally (of
course, it'd need to be careful not to mess with the stuff that is not
thread-safe).

Certainly an architecture with N scheduling bgworkers and M executors
(backends) may be more flexible than a solution where scheduling is done
in the executor itself. But we will have to pay an extra cost for the
redirection, and I am not sure that it will ultimately allow us to reach
better performance. A more flexible solution in many cases doesn't mean
a more efficient solution.

I think you can take the best of both worlds.

You can take your approach of passing around fds, and build a "load
balancing protocol" in a bgworker.

The postmaster sends the socket to the bgworker, the bgworker waits for
a command as pgbouncer does, but instead of proxying everything, when
commands arrive, it passes the socket to a backend to handle.

That way, the bgworker can do what pgbouncer does, handle different
pooling modes, match backends to databases, etc, but it doesn't have to
proxy all data, it just delegates handling of a command to a backend,
and forgets about that socket.

Sounds like it could work.

How could it do all that without actually processing all the data? For
example, how could it determine the statement/transaction boundaries?

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#24Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Konstantin Knizhnik (#21)
Re: Built-in connection pooling

On 01/19/2018 06:19 PM, Konstantin Knizhnik wrote:

On 19.01.2018 20:03, Claudio Freire wrote:

On Fri, Jan 19, 2018 at 1:53 PM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

On 19.01.2018 19:28, Pavel Stehule wrote:

When I've been thinking about adding a built-in connection pool, my
rough plan was mostly "bgworker doing something like pgbouncer" (that
is, listening on a separate port and proxying everything to regular
backends). Obviously, that has pros and cons, and probably would not
serve the threading use case well.

And we will get the same problem as with pgbouncer: one process will
not be able to handle all connections...
Certainly it is possible to start several such scheduling bgworkers...
But in any case it is more efficient to multiplex sessions in the
backends themselves.

pgbouncer holds the client connection the whole time. When we implement
the listeners, then all work can be done by the worker processes, not by
the listeners.

Sorry, I do not understand your point.
In my case pgbench establishes a connection to pgbouncer only once, at
the beginning of the test.
And pgbouncer spends all its time in context switches (CPU usage is 100%,
mostly in kernel space: the top entries in the profile are kernel
functions). The picture will be the same if you do such scheduling in one
bgworker instead of pgbouncer.
Modern systems are not able to perform more than several hundred thousand
context switches per second, so with a single multiplexing thread or
process you cannot get more than about 100k TPS, while on a powerful NUMA
system it is possible to achieve millions of TPS.
It is illustrated by the results I sent in the previous mail: by spawning
10 instances of pgbouncer I was able to achieve 7 times higher speed.

I'm sure pgbouncer can be improved. I've seen async code handle
millions of packets per second (zmq), pgbouncer shouldn't be radically
different.

With pgbouncer you will never be able to use prepared statements, and
their absence slows down simple queries almost twofold (unless my patch
with autoprepared statements is committed).

I don't see why that wouldn't be possible? Perhaps not for prepared
statements with simple protocol, but I'm pretty sure it's doable for
extended protocol (which seems like a reasonable limitation).

That being said, I think it's a mistake to turn this thread into a
pgbouncer vs. the world battle. I could name things that are possible
only with standalone connection pool - e.g. pausing connections and
restarting the database without interrupting the clients.

But that does not mean built-in connection pool is not useful.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#25Claudio Freire
klaussfreire@gmail.com
In reply to: Tomas Vondra (#23)
Re: Built-in connection pooling

On Fri, Jan 19, 2018 at 2:22 PM, Tomas Vondra <tomas.vondra@2ndquadrant.com>
wrote:

On 01/19/2018 06:13 PM, Claudio Freire wrote:

On Fri, Jan 19, 2018 at 2:07 PM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

Well, I haven't said it has to be single-threaded like pgbouncer. I
don't see why the bgworker could not use multiple threads internally (of
course, it'd need to be careful not to mess with the stuff that is not
thread-safe).

Certainly an architecture with N scheduling bgworkers and M executors
(backends) may be more flexible than a solution where scheduling is done
in the executor itself. But we will have to pay an extra cost for the
redirection, and I am not sure that it will ultimately allow us to reach
better performance. A more flexible solution in many cases doesn't mean
a more efficient solution.

I think you can take the best of both worlds.

You can take your approach of passing around fds, and build a "load
balancing protocol" in a bgworker.

The postmaster sends the socket to the bgworker, the bgworker waits for
a command as pgbouncer does, but instead of proxying everything, when
commands arrive, it passes the socket to a backend to handle.

That way, the bgworker can do what pgbouncer does, handle different
pooling modes, match backends to databases, etc, but it doesn't have to
proxy all data, it just delegates handling of a command to a backend,
and forgets about that socket.

Sounds like it could work.

How could it do all that without actually processing all the data? For
example, how could it determine the statement/transaction boundaries?

It only needs to determine statement/transaction start.

After that, it hands off the connection to a backend, and the backend
determines when to give it back.

So instead of processing all the data, it only processes a tiny part of it.
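
For what it's worth, "determining statement/transaction start" can be
cheap at the protocol level: after startup, every frontend message
begins with a one-byte type ('Q' for a simple query, 'P' for Parse, 'X'
for Terminate, ...) followed by a 4-byte length. A hedged sketch of such
a check (the helper name is made up):

    #include <sys/socket.h>

    /*
     * Classify the next frontend message without consuming it.  Seeing
     * the first byte is enough to know a new command has arrived and
     * the socket can be handed to a backend.
     */
    static int
    peek_command_type(int sock, char *type)
    {
        char    byte;

        if (recv(sock, &byte, 1, MSG_PEEK) != 1)
            return -1;
        *type = byte;       /* 'Q' simple query, 'P' parse, 'X' terminate, ... */
        return 0;
    }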

#26Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Claudio Freire (#25)
Re: Built-in connection pooling

On 01/19/2018 07:35 PM, Claudio Freire wrote:

On Fri, Jan 19, 2018 at 2:22 PM, Tomas Vondra
<tomas.vondra@2ndquadrant.com> wrote:

On 01/19/2018 06:13 PM, Claudio Freire wrote:

On Fri, Jan 19, 2018 at 2:07 PM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

         Well, I haven't said it has to be single-threaded like
         pgbouncer. I don't see why the bgworker could not use multiple
         threads internally (of course, it'd need to be careful not to
         mess with the stuff that is not thread-safe).

     Certainly an architecture with N scheduling bgworkers and M
     executors (backends) may be more flexible than a solution where
     scheduling is done in the executor itself. But we will have to pay
     an extra cost for redirection, and I am not sure that it will
     ultimately allow us to reach better performance. A more flexible
     solution in many cases doesn't mean a more efficient solution.

I think you can take the best of both worlds.

You can take your approach of passing around fds, and build a "load
balancing protocol" in a bgworker.

The postmaster sends the socket to the bgworker, the bgworker waits for
a command as pgbouncer does, but instead of proxying everything, when
commands arrive, it passes the socket to a backend to handle.

That way, the bgworker can do what pgbouncer does, handle different
pooling modes, match backends to databases, etc, but it doesn't have to
proxy all data, it just delegates handling of a command to a backend,
and forgets about that socket.

Sounds like it could work.

How could it do all that without actually processing all the data? For
example, how could it determine the statement/transaction boundaries?

It only needs to determine statement/transaction start.

After that, it hands off the connection to a backend, and the
backend determines when to give it back.

So instead of processing all the data, it only processes a tiny part of it.

How exactly would the backend "give back" the connection? The only way
for the backend and pgbouncer to communicate is by embedding information
in the data stream. Which means pgbouncer still has to parse it.

Furthermore, those are not the only bits of information pgbouncer may
need. For example, if pgbouncer gets improved to handle prepared
statements (which is likely) it'd need to handle PARSE/BIND/EXECUTE. And
it already needs to handle SET parameters. And so on.

In any case, this discussion is somewhat off topic in this thread, so
let's not hijack it.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#27Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Tomas Vondra (#24)
1 attachment(s)
Re: Built-in connection pooling

On 19.01.2018 20:28, Tomas Vondra wrote:

With pgbouncer you will never be able to use prepared statements, the
lack of which slows down simple queries almost twice (unless my patch
with autoprepared statements is committed).

I don't see why that wouldn't be possible? Perhaps not for prepared
statements with simple protocol, but I'm pretty sure it's doable for
extended protocol (which seems like a reasonable limitation).

That being said, I think it's a mistake to turn this thread into a
pgbouncer vs. the world battle. I could name things that are possible
only with standalone connection pool - e.g. pausing connections and
restarting the database without interrupting the clients.

But that does not mean built-in connection pool is not useful.

regards

Sorry, I do not understand how the extended protocol can help to handle
prepared statements without a shared prepared statement cache or
built-in connection pooling.
The problem is that now in Postgres most caches, including the catalog
cache, relation cache, and prepared statement cache, are private to a
backend.
There is certainly one big advantage of such an approach: no need to
synchronize access to the cache. But it seems to be the only advantage,
and there are a lot of drawbacks: inefficient use of memory, a complex
invalidation mechanism, incompatibility with connection pooling...

So there are three possible ways (maybe more, but I know only three):
1. Implement built-in connection pooling which is aware of proper use
of local caches. This is what I have implemented with the proposed
approach.
2. Implicit autoprepare. Clients will not be able to use the standard
Postgres prepare mechanism, but the executor will try to generate a
generic plan for ordinary queries. My implementation of this approach
is at the commit fest.
3. Global caches. This seems to be the best solution but the most
difficult to implement.

Actually I think that the discussion about the value of built-in
connection pooling is very important.
Yes, external connection pooling is more flexible. It allows pooling to
be performed either at the client side or at the server side (or both
approaches can be combined).
Also, external connection pooling for PostgreSQL is not limited to
pgbouncer/pgpool.
There are many frameworks maintaining their own connection pool, for
example J2EE, jboss, hibernate, ...
I have a feeling that about 70% of enterprise systems working with
databases are written in Java and do connection pooling in their own way.
So maybe embedded connection pooling is not needed for such applications...
But what I have heard from many people is that Postgres' poor connection
pooling is one of the main drawbacks of Postgres, complicating its usage
in enterprise environments.

In any case, please find attached an updated patch with some code
cleanup and more comments added.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

session_pool-4.patch (text/x-patch)
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index b945b15..8e8a737 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -813,3 +813,32 @@ build_regtype_array(Oid *param_types, int num_params)
 	result = construct_array(tmp_ary, num_params, REGTYPEOID, 4, true, 'i');
 	return PointerGetDatum(result);
 }
+
+/*
+ * Drop all statements prepared in the specified session.
+ */
+void
+DropSessionPreparedStatements(char const* sessionId)
+{
+	HASH_SEQ_STATUS seq;
+	PreparedStatement *entry;
+	size_t idLen = strlen(sessionId);
+
+	/* nothing cached */
+	if (!prepared_queries)
+		return;
+
+	/* walk over cache */
+	hash_seq_init(&seq, prepared_queries);
+	while ((entry = hash_seq_search(&seq)) != NULL)
+	{
+		if (strncmp(entry->stmt_name, sessionId, idLen) == 0 && entry->stmt_name[idLen] == '.')
+		{
+			/* Release the plancache entry */
+			DropCachedPlan(entry->plansource);
+
+			/* Now we can remove the hash table entry */
+			hash_search(prepared_queries, entry->stmt_name, HASH_REMOVE, NULL);
+		}
+	}
+}
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index a4f6d4d..7f40edb 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -1029,6 +1029,17 @@ pq_peekbyte(void)
 }
 
 /* --------------------------------
+ *		pq_available_bytes	- get number of buffered bytes available for reading.
+ *
+ * --------------------------------
+ */
+int
+pq_available_bytes(void)
+{
+	return PqRecvLength - PqRecvPointer;
+}
+
+/* --------------------------------
  *		pq_getbyte_if_available - get a single byte from connection,
  *			if available
  *
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index aba1e92..56ec998 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o dynloader.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o dynloader.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..7b36923
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,89 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int pg_send_sock(pgsocket chan, pgsocket sock)
+{
+    struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+    char buf[CMSG_SPACE(sizeof(sock))];
+    memset(buf, '\0', sizeof(buf));
+
+    /* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+    io.iov_base = "";
+	io.iov_len = 1;
+
+    msg.msg_iov = &io;
+    msg.msg_iovlen = 1;
+    msg.msg_control = buf;
+    msg.msg_controllen = sizeof(buf);
+
+    cmsg = CMSG_FIRSTHDR(&msg);
+    cmsg->cmsg_level = SOL_SOCKET;
+    cmsg->cmsg_type = SCM_RIGHTS;
+    cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+    memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+    msg.msg_controllen = cmsg->cmsg_len;
+
+    if (sendmsg(chan, &msg, 0) < 0)
+	{
+		return -1;
+	}
+	return 0;
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket pg_recv_sock(pgsocket chan)
+{
+    struct msghdr msg = {0};
+    char c_buffer[256];
+    char m_buffer[256];
+    struct iovec io;
+	struct cmsghdr * cmsg;
+	pgsocket sock;
+
+    io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+    msg.msg_iov = &io;
+    msg.msg_iovlen = 1;
+
+    msg.msg_control = c_buffer;
+    msg.msg_controllen = sizeof(c_buffer);
+
+    if (recvmsg(chan, &msg, 0) < 0)
+	{
+		return -1;
+	}
+
+    cmsg = CMSG_FIRSTHDR(&msg);
+    memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+
+    return sock;
+}
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index f3ddf82..2554075 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -169,6 +169,7 @@ typedef struct bkend
 	pid_t		pid;			/* process id of backend */
 	int32		cancel_key;		/* cancel key for cancels for this backend */
 	int			child_slot;		/* PMChildSlot for this backend, if any */
+	pgsocket    session_send_sock;  /* Write end of socket pipe to this backend used to send session socket descriptor to the backend process */
 
 	/*
 	 * Flavor of backend or auxiliary process.  Note that BACKEND_TYPE_WALSND
@@ -182,6 +183,15 @@ typedef struct bkend
 } Backend;
 
 static dlist_head BackendList = DLIST_STATIC_INIT(BackendList);
+/*
+ * Pointer into the backend list used to implement round-robin distribution of sessions across backends.
+ * This variable is either NULL or points to a normal backend.
+ */
+static Backend*   BackendListClockPtr;
+/*
+ * Number of active normal backends
+ */
+static int        nNormalBackends;
 
 #ifdef EXEC_BACKEND
 static Backend *ShmemBackendArray;
@@ -412,7 +422,6 @@ static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
 static int	BackendStartup(Port *port);
-static int	ProcessStartupPacket(Port *port, bool SSLdone);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
 static int	initMasks(fd_set *rmask);
@@ -568,6 +577,22 @@ HANDLE		PostmasterHandle;
 #endif
 
 /*
+ * Move current backend pointer to the next normal backend.
+ * This function is called either when a new session is started (to implement the round-robin policy) or when the backend pointed to by BackendListClockPtr is terminated.
+ */
+static void AdvanceBackendListClockPtr(void)
+{
+	Backend* b = BackendListClockPtr;
+	do {
+		dlist_node* node = &b->elem;
+		node = node->next ? node->next : BackendList.head.next;
+		b = dlist_container(Backend, elem, node);
+	} while (b->bkend_type != BACKEND_TYPE_NORMAL && b != BackendListClockPtr);
+
+	BackendListClockPtr = b;
+}
+
+/*
  * Postmaster main entry point
  */
 void
@@ -1944,8 +1969,8 @@ initMasks(fd_set *rmask)
  * send anything to the client, which would typically be appropriate
  * if we detect a communications failure.)
  */
-static int
-ProcessStartupPacket(Port *port, bool SSLdone)
+int
+ProcessStartupPacket(Port *port, bool SSLdone, MemoryContext memctx)
 {
 	int32		len;
 	void	   *buf;
@@ -2043,7 +2068,7 @@ retry1:
 #endif
 		/* regular startup packet, cancel, etc packet should follow... */
 		/* but not another SSL negotiation request */
-		return ProcessStartupPacket(port, true);
+		return ProcessStartupPacket(port, true, memctx);
 	}
 
 	/* Could add additional special packet types here */
@@ -2073,7 +2098,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2449,7 +2474,7 @@ ConnCreate(int serverFd)
 		ConnFree(port);
 		return NULL;
 	}
-
+	SessionPoolSock = PGINVALID_SOCKET;
 	/*
 	 * Allocate GSSAPI specific state struct
 	 */
@@ -3236,6 +3261,24 @@ CleanupBackgroundWorker(int pid,
 }
 
 /*
+ * Unlink backend from backend's list and free memory
+ */
+static void UnlinkBackend(Backend* bp)
+{
+	if (bp->bkend_type == BACKEND_TYPE_NORMAL)
+	{
+		if (bp == BackendListClockPtr)
+			AdvanceBackendListClockPtr();
+		if (bp->session_send_sock != PGINVALID_SOCKET)
+			close(bp->session_send_sock);
+		elog(DEBUG2, "Cleanup backend %d", bp->pid);
+		nNormalBackends -= 1;
+	}
+	dlist_delete(&bp->elem);
+	free(bp);
+}
+
+/*
  * CleanupBackend -- cleanup after terminated backend.
  *
  * Remove all local state associated with backend.
@@ -3312,8 +3355,7 @@ CleanupBackend(int pid,
 				 */
 				BackgroundWorkerStopNotifications(bp->pid);
 			}
-			dlist_delete(iter.cur);
-			free(bp);
+			UnlinkBackend(bp);
 			break;
 		}
 	}
@@ -3415,8 +3457,7 @@ HandleChildCrash(int pid, int exitstatus, const char *procname)
 				ShmemBackendArrayRemove(bp);
 #endif
 			}
-			dlist_delete(iter.cur);
-			free(bp);
+			UnlinkBackend(bp);
 			/* Keep looping so we can signal remaining backends */
 		}
 		else
@@ -4017,6 +4058,20 @@ BackendStartup(Port *port)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
+	int         session_pipe[2];
+
+	if (SessionPoolSize != 0 && nNormalBackends >= SessionPoolSize)
+	{
+		/* With session pooling, instead of spawning a new backend, open the new session in one of the existing backends. */
+		Assert(BackendListClockPtr && BackendListClockPtr->session_send_sock != PGINVALID_SOCKET);
+		elog(DEBUG2, "Start new session for socket %d at backend %d total %d", port->sock, BackendListClockPtr->pid, nNormalBackends);
+		/* Send connection socket to the backend pointed by BackendListClockPtr */
+		if (pg_send_sock(BackendListClockPtr->session_send_sock, port->sock) < 0)
+			elog(FATAL, "Failed to send session socket: %m");
+		AdvanceBackendListClockPtr(); /* round-robin backends */
+		return STATUS_OK;
+	}
+
 
 	/*
 	 * Create backend data structure.  Better before the fork() so we can
@@ -4030,7 +4085,6 @@ BackendStartup(Port *port)
 				 errmsg("out of memory")));
 		return STATUS_ERROR;
 	}
-
 	/*
 	 * Compute the cancel key that will be assigned to this backend. The
 	 * backend will have its own copy in the forked-off process' value of
@@ -4063,12 +4117,24 @@ BackendStartup(Port *port)
 	/* Hasn't asked to be notified about any bgworkers yet */
 	bn->bgworker_notify = false;
 
+	/* Create socket pair for sending session sockets to the backend */
+	if (SessionPoolSize != 0)
+		if (socketpair(AF_UNIX, SOCK_DGRAM, 0, session_pipe) < 0)
+			ereport(FATAL,
+					(errcode_for_file_access(),
+					 errmsg_internal("could not create socket pair for launching sessions: %m")));
+
 #ifdef EXEC_BACKEND
 	pid = backend_forkexec(port);
 #else							/* !EXEC_BACKEND */
 	pid = fork_process();
 	if (pid == 0)				/* child */
 	{
+		if (SessionPoolSize != 0)
+		{
+			SessionPoolSock = session_pipe[0]; /* Use this socket for receiving client session socket descriptor */
+			close(session_pipe[1]); /* Close unused end of the pipe */
+		}
 		free(bn);
 
 		/* Detangle from postmaster */
@@ -4110,9 +4176,19 @@ BackendStartup(Port *port)
 	 * of backends.
 	 */
 	bn->pid = pid;
+	if (SessionPoolSize != 0)
+	{
+		bn->session_send_sock = session_pipe[1]; /* Use this socket for sending client session socket descriptor */
+		close(session_pipe[0]); /* Close unused end of the pipe */
+	}
+	else
+		bn->session_send_sock = PGINVALID_SOCKET;
 	bn->bkend_type = BACKEND_TYPE_NORMAL;	/* Can change later to WALSND */
 	dlist_push_head(&BackendList, &bn->elem);
-
+	if (BackendListClockPtr == NULL)
+		BackendListClockPtr = bn;
+	nNormalBackends += 1;
+	elog(DEBUG2, "Start backend %d total %d", pid, nNormalBackends);
 #ifdef EXEC_BACKEND
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
@@ -4299,7 +4375,7 @@ BackendInitialize(Port *port)
 	 * Receive the startup packet (which might turn out to be a cancel request
 	 * packet).
 	 */
-	status = ProcessStartupPacket(port, false);
+	status = ProcessStartupPacket(port, false, TopMemoryContext);
 
 	/*
 	 * Stop here if it was bad or a cancel packet.  ProcessStartupPacket
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index e6706f7..9c42fab 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -76,6 +76,7 @@ struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* head of a singly-linked list of free events, linked by "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -129,7 +130,7 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
 static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
 #endif
@@ -562,6 +563,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -667,6 +669,7 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
 	Assert(set->nevents < set->nevents_space);
@@ -690,8 +693,19 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->nevents += 1;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -718,7 +732,7 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
 	WaitEventAdjustWin32(set, event);
 #endif
@@ -727,6 +741,27 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 }
 
 /*
+ * Remove event with specified socket descriptor
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, pgsocket fd)
+{
+	int i, n = set->nevents;
+	for (i = 0; i < n; i++)
+	{
+		WaitEvent  *event = &set->events[i];
+		if (event->fd == fd)
+		{
+#if defined(WAIT_USE_EPOLL)
+			WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+			WaitEventAdjustPoll(set, event, true);
+#endif
+			break;
+		}
+	}
+}
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -774,7 +809,7 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
 	WaitEventAdjustWin32(set, event);
 #endif
@@ -827,14 +862,33 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
 				 errmsg("epoll_ctl() failed: %m")));
+
+	if (action == EPOLL_CTL_DEL)
+	{
+		int pos = event->pos;
+		event->fd = PGINVALID_SOCKET;
+		set->nevents -= 1;
+		event->pos = set->free_events;
+		set->free_events = pos;
+	}
 }
 #endif
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	int pos = event->pos;
+	struct pollfd *pollfd = &set->pollfds[pos];
+
+	if (remove)
+	{
+		set->nevents -= 1;
+		*pollfd = set->pollfds[set->nevents];
+		set->events[pos] = set->events[set->nevents];
+		event->pos = pos;
+		return;
+	}
 
 	pollfd->revents = 0;
 	pollfd->fd = event->fd;
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index ddc3ec8..7dc8049 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -75,8 +75,18 @@
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
 #include "utils/timestamp.h"
+#include "utils/builtins.h"
 #include "mb/pg_wchar.h"
 
+/*
+ * Information associated with client session
+ */
+typedef struct SessionContext
+{
+	MemoryContext memory; /* memory context used for global session data (replacement of TopMemoryContext) */
+	Port* port;           /* connection port */
+	char* id;             /* session identifier used to construct unique prepared statement names */
+} SessionContext;
 
 /* ----------------
  *		global variables
@@ -98,6 +108,8 @@ int			max_stack_depth = 100;
 /* wait N seconds to allow attach from a debugger */
 int			PostAuthDelay = 0;
 
+/* Local socket for redirecting sessions to the backends */ 
+pgsocket    SessionPoolSock = PGINVALID_SOCKET;
 
 
 /* ----------------
@@ -169,6 +181,13 @@ static ProcSignalReason RecoveryConflictReason;
 static MemoryContext row_description_context = NULL;
 static StringInfoData row_description_buf;
 
+static WaitEventSet*   SessionPool;    /* Set of all sessions sockets */
+static int64           SessionCount;   /* Number of sessions */
+static SessionContext* CurrentSession; /* Pointer to the active session */
+static Port*           BackendPort;    /* Reference to the original port of this backend created when this backend was launched.
+										* The session using this port may already be terminated, but since it is allocated in TopMemoryContext,
+										* its content is still valid and is used as a template for the ports of new sessions */
+
 /* ----------------------------------------------------------------
  *		decls for routines only used in this file
  * ----------------------------------------------------------------
@@ -194,6 +213,25 @@ static void log_disconnections(int code, Datum arg);
 static void enable_statement_timeout(void);
 static void disable_statement_timeout(void);
 
+/*
+ * Generate session ID unique within this backend
+ */
+static char* CreateSessionId(void)
+{
+	char buf[64];
+	pg_lltoa(++SessionCount, buf);
+	return pstrdup(buf);
+}
+
+/*
+ * Free all memory associated with session and delete session object itself
+ */
+static void DeleteSession(SessionContext* session)
+{
+	elog(DEBUG1, "Delete session %p, id=%s,  memory context=%p", session, session->id, session->memory);
+	MemoryContextDelete(session->memory);
+	free(session);
+}
 
 /* ----------------------------------------------------------------
  *		routines to obtain user input
@@ -1232,6 +1270,12 @@ exec_parse_message(const char *query_string,	/* string to execute */
 	bool		save_log_statement_stats = log_statement_stats;
 	char		msec_str[32];
 
+	if (CurrentSession && stmt_name[0] != '\0')
+	{
+		/* Make names of prepared statements unique for session in case of using internal session pool */
+		stmt_name = psprintf("%s.%s", CurrentSession->id, stmt_name);
+	}
+
 	/*
 	 * Report query to various monitoring facilities.
 	 */
@@ -1503,6 +1547,12 @@ exec_bind_message(StringInfo input_message)
 	portal_name = pq_getmsgstring(input_message);
 	stmt_name = pq_getmsgstring(input_message);
 
+	if (CurrentSession && stmt_name[0] != '\0')
+	{
+		/* Make names of prepared statements unique for session in case of using internal session pool */
+		stmt_name = psprintf("%s.%s", CurrentSession->id, stmt_name);
+	}
+
 	ereport(DEBUG2,
 			(errmsg("bind %s to %s",
 					*portal_name ? portal_name : "<unnamed>",
@@ -2325,6 +2375,12 @@ exec_describe_statement_message(const char *stmt_name)
 	CachedPlanSource *psrc;
 	int			i;
 
+	if (CurrentSession && stmt_name[0] != '\0')
+	{
+		/* Make names of prepared statements unique for session in case of using internal session pool */
+		stmt_name = psprintf("%s.%s", CurrentSession->id, stmt_name);
+	}
+
 	/*
 	 * Start up a transaction command. (Note that this will normally change
 	 * current memory context.) Nothing happens if we are already in one.
@@ -3603,7 +3659,6 @@ process_postgres_switches(int argc, char *argv[], GucContext ctx,
 #endif
 }
 
-
 /* ----------------------------------------------------------------
  * PostgresMain
  *	   postgres main loop -- all backends, interactive or otherwise start here
@@ -3654,6 +3709,21 @@ PostgresMain(int argc, char *argv[],
 							progname)));
 	}
 
+	/* Assign session for this backend in case of session pooling */
+	if (SessionPoolSize != 0)
+	{
+		MemoryContext oldcontext;
+		CurrentSession = (SessionContext*)malloc(sizeof(SessionContext));
+		CurrentSession->memory = AllocSetContextCreate(TopMemoryContext,
+													   "SessionMemoryContext",
+													   ALLOCSET_DEFAULT_SIZES);
+		oldcontext = MemoryContextSwitchTo(CurrentSession->memory);
+		CurrentSession->id = CreateSessionId();
+		CurrentSession->port = MyProcPort;
+		BackendPort = MyProcPort;
+		MemoryContextSwitchTo(oldcontext);
+	}
+
 	/* Acquire configuration parameters, unless inherited from postmaster */
 	if (!IsUnderPostmaster)
 	{
@@ -3783,7 +3853,7 @@ PostgresMain(int argc, char *argv[],
 	 * ... else we'd need to copy the Port data first.  Also, subsidiary data
 	 * such as the username isn't lost either; see ProcessStartupPacket().
 	 */
-	if (PostmasterContext)
+	if (PostmasterContext && SessionPoolSize == 0)
 	{
 		MemoryContextDelete(PostmasterContext);
 		PostmasterContext = NULL;
@@ -4069,6 +4139,142 @@ PostgresMain(int argc, char *argv[],
 
 			ReadyForQuery(whereToSendOutput);
 			send_ready_for_query = false;
+
+			/*
+			 * Here we perform multiplexing of client sessions if session pooling is enabled.
+			 * Since we perform transaction-level pooling, rescheduling is done only when we are not inside a transaction.
+			 */
+			if (SessionPoolSock != PGINVALID_SOCKET && !IsTransactionState() && pq_available_bytes() == 0)
+			{
+				WaitEvent ready_client;
+				if (SessionPool == NULL)
+				{
+					/* Construct wait event set if not constructed yet */
+					SessionPool = CreateWaitEventSet(TopMemoryContext, MaxSessions);
+					/* Add event to detect postmaster death */
+					AddWaitEventToSet(SessionPool, WL_POSTMASTER_DEATH, PGINVALID_SOCKET, NULL, CurrentSession);
+					/* Add event for backends latch */
+					AddWaitEventToSet(SessionPool, WL_LATCH_SET, PGINVALID_SOCKET, MyLatch, CurrentSession);
+					/* Add event for accepting new sessions */
+					AddWaitEventToSet(SessionPool, WL_SOCKET_READABLE, SessionPoolSock, NULL, CurrentSession);
+					/* Add event for current session */
+					AddWaitEventToSet(SessionPool, WL_SOCKET_READABLE, MyProcPort->sock, NULL, CurrentSession);
+				}
+			  ChooseSession:
+				DoingCommandRead = true;
+				/* Select which client session is ready to send new query */ 
+				if (WaitEventSetWait(SessionPool, -1, &ready_client, 1, PG_WAIT_CLIENT) != 1)
+				{
+					/* TODO: do some error recovery here */
+					elog(FATAL, "Failed to poll client sessions");
+				}
+				CHECK_FOR_INTERRUPTS();
+				DoingCommandRead = false;
+
+				if (ready_client.events & WL_POSTMASTER_DEATH)
+					ereport(FATAL,
+							(errcode(ERRCODE_ADMIN_SHUTDOWN),
+							 errmsg("terminating connection due to unexpected postmaster exit")));
+
+				if (ready_client.events & WL_LATCH_SET)
+				{
+					ResetLatch(MyLatch);
+					ProcessClientReadInterrupt(true);
+					goto ChooseSession;
+				}
+
+				if (ready_client.fd == SessionPoolSock)
+				{
+					/* Here we handle the case of attaching a new session */
+					int		 status;
+					SessionContext* session;
+					StringInfoData buf;
+					Port*    port;
+					pgsocket sock;
+					MemoryContext oldcontext;
+
+					sock = pg_recv_sock(SessionPoolSock);
+					if (sock < 0)
+						elog(FATAL, "Failed to receive session socket: %m");
+
+					session = (SessionContext*)malloc(sizeof(SessionContext));
+					session->memory = AllocSetContextCreate(TopMemoryContext,
+															"SessionMemoryContext",
+															ALLOCSET_DEFAULT_SIZES);
+					oldcontext = MemoryContextSwitchTo(session->memory);
+					port = palloc(sizeof(Port));
+					memcpy(port, BackendPort, sizeof(Port));
+
+					/*
+					 * Receive the startup packet (which might turn out to be a cancel request
+					 * packet).
+					 */
+					port->sock = sock;
+					session->port = port;
+					session->id = CreateSessionId();
+
+					MyProcPort = port;
+					status = ProcessStartupPacket(port, false, session->memory);
+					MemoryContextSwitchTo(oldcontext);
+
+					/*
+					 * TODO: currently we assume that all sessions access the same database under the same user;
+					 * just report an error if that is not true.
+					 */
+					if (strcmp(port->database_name, MyProcPort->database_name) ||
+						strcmp(port->user_name, MyProcPort->user_name))
+					{
+						elog(FATAL, "Failed to open session (dbname=%s user=%s) in backend %d (dbname=%s user=%s)",
+							 port->database_name, port->user_name,
+							 MyProcPid, MyProcPort->database_name, MyProcPort->user_name);
+					}
+					else if (status == STATUS_OK)
+					{
+						elog(DEBUG2, "Start new session %d in backend %d for database %s user %s",
+							 sock, MyProcPid, port->database_name, port->user_name);
+						CurrentSession = session;
+						AddWaitEventToSet(SessionPool, WL_SOCKET_READABLE, sock, NULL, session);
+
+						SetCurrentStatementStartTimestamp();
+						StartTransactionCommand();
+						PerformAuthentication(MyProcPort);
+						CommitTransactionCommand();
+
+						/*
+						 * Send GUC options to the client
+						 */
+						BeginReportingGUCOptions();
+
+						/*
+						 * Send this backend's cancellation info to the frontend.
+						 */
+						pq_beginmessage(&buf, 'K');
+						pq_sendint32(&buf, (int32) MyProcPid);
+						pq_sendint32(&buf, (int32) MyCancelKey);
+						pq_endmessage(&buf);
+
+						/* Need not flush since ReadyForQuery will do it. */
+						send_ready_for_query = true;
+						continue;
+					}
+					else
+					{
+						/* Error while processing the startup packet.
+						 * Reject this session and return to listening for new sessions.
+						 */
+						DeleteSession(session);
+						elog(LOG, "Session startup failed");
+						close(sock);
+						goto ChooseSession;
+					}
+				}
+				else
+				{
+					elog(DEBUG2, "Switch to session %d in backend %d", ready_client.fd, MyProcPid);
+					CurrentSession = (SessionContext*)ready_client.user_data;
+					MyProcPort = CurrentSession->port;
+				}
+			}
 		}
 
 		/*
@@ -4350,6 +4556,39 @@ PostgresMain(int argc, char *argv[],
 				 * it will fail to be called during other backend-shutdown
 				 * scenarios.
 				 */
+
+				if (SessionPool)
+				{
+					/* With session pooling, close the session but do not terminate the backend,
+					 * even if there are no more sessions left in this backend.
+					 * The reason for keeping the backend alive is to avoid redundant process launches when
+					 * some client repeatedly opens/closes a connection to the database.
+					 * The maximum number of launched backends under connection pooling is intended to be
+					 * optimal for the system and workload, so there is no reason to try to reduce this number
+					 * when there are no active sessions.
+					 */
+					DeleteWaitEventFromSet(SessionPool, MyProcPort->sock);
+					elog(DEBUG1, "Close session %d in backend %d", MyProcPort->sock, MyProcPid);
+
+					pq_getmsgend(&input_message);
+					if (pq_is_reading_msg())
+						pq_endmsgread();
+
+					close(MyProcPort->sock);
+					MyProcPort->sock = PGINVALID_SOCKET;
+					MyProcPort = NULL;
+
+					if (CurrentSession)
+					{
+						DropSessionPreparedStatements(CurrentSession->id);
+						DeleteSession(CurrentSession);
+						CurrentSession = NULL;
+					}
+					whereToSendOutput = DestRemote;
+					/* Need to perform rescheduling to some other session or accept new session */
+					goto ChooseSession;
+				}
+				elog(DEBUG1, "Terminate backend %d", MyProcPid);
 				proc_exit(0);
 
 			case 'd':			/* copy data */
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 54fa4a3..b2f43a8 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -120,7 +120,9 @@ int			maintenance_work_mem = 16384;
  * register background workers.
  */
 int			NBuffers = 1000;
+int			SessionPoolSize = 0;
 int			MaxConnections = 90;
+int			MaxSessions = 1000;
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c
index f9b3309..571c80f 100644
--- a/src/backend/utils/init/postinit.c
+++ b/src/backend/utils/init/postinit.c
@@ -65,7 +65,7 @@
 
 static HeapTuple GetDatabaseTuple(const char *dbname);
 static HeapTuple GetDatabaseTupleByOid(Oid dboid);
-static void PerformAuthentication(Port *port);
+void PerformAuthentication(Port *port);
 static void CheckMyDatabase(const char *name, bool am_superuser);
 static void InitCommunication(void);
 static void ShutdownPostgres(int code, Datum arg);
@@ -180,7 +180,7 @@ GetDatabaseTupleByOid(Oid dboid)
  *
  * returns: nothing.  Will not return at all if there's any failure.
  */
-static void
+void
 PerformAuthentication(Port *port)
 {
 	/* This should be set already, but let's make sure */
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 72f6be3..9202728 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -1871,6 +1871,29 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"max_sessions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximum number of client sessions which can be handled by one backend if session pooling is switched on. "
+						 "So the maximum number of client connections is session_pool_size*max_sessions.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"session_pool_size", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends, and the maximum number of backends is determined by this parameter. "
+						 "Launched backends are never terminated, even when there are no active sessions.")
+		},
+		&SessionPoolSize,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
 			NULL
diff --git a/src/include/commands/prepare.h b/src/include/commands/prepare.h
index ffec029..cb5f8d4 100644
--- a/src/include/commands/prepare.h
+++ b/src/include/commands/prepare.h
@@ -56,5 +56,6 @@ extern TupleDesc FetchPreparedStatementResultDesc(PreparedStatement *stmt);
 extern List *FetchPreparedStatementTargetList(PreparedStatement *stmt);
 
 extern void DropAllPreparedStatements(void);
+extern void DropSessionPreparedStatements(char const* sessionId);
 
 #endif							/* PREPARE_H */
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 2e7725d..9169b21 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -71,6 +71,7 @@ extern int	pq_getbyte(void);
 extern int	pq_peekbyte(void);
 extern int	pq_getbyte_if_available(unsigned char *c);
 extern int	pq_putbytes(const char *s, size_t len);
+extern int  pq_available_bytes(void);
 
 /*
  * prototypes for functions in be-secure.c
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 54ee273..a9f9228 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -157,6 +157,8 @@ extern PGDLLIMPORT char *DataDir;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
 extern PGDLLIMPORT int max_worker_processes;
 extern int	max_parallel_workers;
 
@@ -420,6 +422,7 @@ extern void InitializeMaxBackends(void);
 extern void InitPostgres(const char *in_dbname, Oid dboid, const char *username,
 			 Oid useroid, char *out_dbname);
 extern void BaseInit(void);
+extern void PerformAuthentication(struct Port *port);
 
 /* in utils/init/miscinit.c */
 extern bool IgnoreSystemIndexes;
diff --git a/src/include/port.h b/src/include/port.h
index 3e528fa..c14a20d 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index 1877eef..c9527c9 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -62,6 +62,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+struct Port;
+extern int	ProcessStartupPacket(struct Port *port, bool SSLdone, MemoryContext memctx);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index a4bcb48..10f30d1 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -176,6 +176,8 @@ extern int WaitLatch(volatile Latch *latch, int wakeEvents, long timeout,
 extern int WaitLatchOrSocket(volatile Latch *latch, int wakeEvents,
 				  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, pgsocket fd);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/tcop/tcopprot.h b/src/include/tcop/tcopprot.h
index 63b4e48..191eeaa 100644
--- a/src/include/tcop/tcopprot.h
+++ b/src/include/tcop/tcopprot.h
@@ -34,6 +34,7 @@ extern CommandDest whereToSendOutput;
 extern PGDLLIMPORT const char *debug_query_string;
 extern int	max_stack_depth;
 extern int	PostAuthDelay;
+extern pgsocket SessionPoolSock;
 
 /* GUC-configurable parameters */
 
#28Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Konstantin Knizhnik (#27)
Re: Built-in connection pooling

On 01/22/2018 05:05 PM, Konstantin Knizhnik wrote:

On 19.01.2018 20:28, Tomas Vondra wrote:

With pgbouncer you will never be able to use prepared statements, the
lack of which slows down simple queries almost twice (unless my patch
with autoprepared statements is committed).

I don't see why that wouldn't be possible? Perhaps not for prepared
statements with simple protocol, but I'm pretty sure it's doable for
extended protocol (which seems like a reasonable limitation).

That being said, I think it's a mistake to turn this thread into a
pgbouncer vs. the world battle. I could name things that are possible
only with standalone connection pool - e.g. pausing connections and
restarting the database without interrupting the clients.

But that does not mean built-in connection pool is not useful.

regards

Sorry, I do not understand how the extended protocol can help to handle
prepared statements without a shared prepared statement cache or
built-in connection pooling.

The extended protocol makes it easy for pgbouncer (or any other proxy)
to identify prepared statements, so that it can track (a) which prepared
statements a client defined, and (b) what prepared statements are
defined on a connection. And then do something when a client gets
assigned a connection missing some of those.

I do not claim doing this would be trivial, but I don't see why that
would be impossible.
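
To make the bookkeeping concrete, here is a toy sketch of the two sets
described above, with fixed-size arrays standing in for real hash tables
and a printf standing in for actually re-sending a Parse message (none
of this is pgbouncer code):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAX_STMTS 64

typedef struct StmtSet
{
	char		names[MAX_STMTS][64];
	char		queries[MAX_STMTS][256];	/* SQL text remembered from Parse */
	int			count;
} StmtSet;

static bool
stmt_known(const StmtSet *s, const char *name)
{
	for (int i = 0; i < s->count; i++)
		if (strcmp(s->names[i], name) == 0)
			return true;
	return false;
}

static void
stmt_remember(StmtSet *s, const char *name, const char *query)
{
	if (!stmt_known(s, name) && s->count < MAX_STMTS)
	{
		snprintf(s->names[s->count], sizeof(s->names[0]), "%s", name);
		snprintf(s->queries[s->count], sizeof(s->queries[0]), "%s", query);
		s->count++;
	}
}

/*
 * Before forwarding a Bind for "name", make sure the server connection
 * has it; if not, replay the Parse remembered from the client.
 */
static void
ensure_prepared(const StmtSet *client, StmtSet *server, const char *name)
{
	if (stmt_known(server, name))
		return;
	for (int i = 0; i < client->count; i++)
		if (strcmp(client->names[i], name) == 0)
		{
			printf("replaying Parse: %s -> %s\n",
				   client->names[i], client->queries[i]);
			stmt_remember(server, name, client->queries[i]);
			return;
		}
}

int
main(void)
{
	StmtSet		client,
				server;

	memset(&client, 0, sizeof(client));
	memset(&server, 0, sizeof(server));
	stmt_remember(&client, "S_1", "SELECT 1");	/* seen in the client's Parse */
	ensure_prepared(&client, &server, "S_1");	/* replayed before its Bind */
	return 0;
}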

Of course, the built-in pool can handle this in different ways, as it
has access to the internal caches.

The problem is that now in Postgres most caches, including the catalog
cache, relation cache, and prepared statement cache, are private to a backend.

True. I wouldn't say it's a "problem" but it's certainly a challenge for
certain features.

There is certainly one big advantage of such an approach: no need to
synchronize access to the cache. But it seems to be the only advantage,
and there are a lot of drawbacks: inefficient use of memory, a complex
invalidation mechanism, incompatibility with connection pooling...

Perhaps. I personally see the minimal synchronization as a quite
valuable feature.

So there are three possible ways (maybe more, but I know only three):
1. Implement built-in connection pooling which is aware of proper use
of local caches. This is what I have implemented with the proposed
approach.
2. Implicit autoprepare. Clients will not be able to use the standard
Postgres prepare mechanism, but the executor will try to generate a
generic plan for ordinary queries. My implementation of this approach
is at the commit fest.
3. Global caches. This seems to be the best solution but the most
difficult to implement.

Perhaps.

Actually I think that the discussion about the value of built-in
connection pooling is very important.

I agree, and I wasn't speaking against built-in connection pooling.

Yes, external connection pooling is more flexible. It allows pooling to
be performed either at the client side or at the server side (or both
approaches can be combined).
Also, external connection pooling for PostgreSQL is not limited to
pgbouncer/pgpool.
There are many frameworks maintaining their own connection pool, for
example J2EE, jboss, hibernate, ...
I have a feeling that about 70% of enterprise systems working with
databases are written in Java and do connection pooling in their own
way.

True, but that does not really mean we don't need "our" connection
pooling (built-in or not). The connection pools are usually built into
the application servers, so each application server has its own
independent pool. With larger deployments (a couple of application
servers) that quickly causes problems with max_connections.

So maybe embedded connection pooling is not needed for such
applications...

But what I have heard from many people is that Postgres' poor
connection pooling is one of the main drawbacks of Postgres,
complicating its usage in enterprise environments.

Maybe. I'm sure there's room for improvement.

That being said, when enterprise developers tell me PostgreSQL is
missing some feature, 99% of the time it turns out they're doing
something quite stupid.

In any case, please find attached an updated patch with some code
cleanup and more comments added.

OK, will look.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#29Bruce Momjian
bruce@momjian.us
In reply to: Tomas Vondra (#28)
Re: Built-in connection pooling

On Mon, Jan 22, 2018 at 06:51:08PM +0100, Tomas Vondra wrote:

Yes, external connection pooling is more flexible. It allows pooling to
be performed either at the client side or at the server side (or both
approaches can be combined).
Also, external connection pooling for PostgreSQL is not limited to
pgbouncer/pgpool.
There are many frameworks maintaining their own connection pool, for
example J2EE, jboss, hibernate, ...
I have a feeling that about 70% of enterprise systems working with
databases are written in Java and do connection pooling in their
own way.

True, but that does not really mean we don't need "our" connection
pooling (built-in or not). The connection pools are usually built into
the application servers, so each application server has their own
independent pool. With larger deployments (a couple of application
servers) that quickly causes problems with max_connections.

I found this thread and the pthread thread very interesting. Konstantin,
thank you for writing prototypes and giving us very useful benchmarks
for ideas I thought I might never see.

As much as I would like to move forward with coding, I would like to
back up and understand where we need to go with these ideas.

First, it looks like pthreads and a builtin pooler help mostly with
1000+ connections. It seems like you found that pthreads wasn't
sufficient and the builtin pooler was better. Is that correct?

Is there anything we can do differently about allowing long-idle
connections to reduce their resource usage, e.g. free their caches?
Remove from PGPROC? Could we do it conditionally, e.g. only sessions
that don't have open transactions or cursors?

It feels like user and db mismatches are always going to cause pooling
problems. Could we actually exit and restart connections that have
default session state?

Right now, if you hit max_connections, we start rejecting new
connections. Would it make sense to allow an option to exit idle
connections when this happens so new users can connect?

I know we have relied on external connection poolers to solve all the
high connection problems but it seems there might be simple things we
can do to improve matters. FYI, I did write a blog entry comparing
external and internal connection poolers:

https://momjian.us/main/blogs/pgblog/2017.html#April_21_2017

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +
#30Ivan Novick
inovick@pivotal.io
In reply to: Bruce Momjian (#29)
Re: Built-in connection pooling

On Sat, Jan 27, 2018 at 4:40 PM, Bruce Momjian <bruce@momjian.us> wrote:

On Mon, Jan 22, 2018 at 06:51:08PM +0100, Tomas Vondra wrote:
Right now, if you hit max_connections, we start rejecting new
connections. Would it make sense to allow an option to exit idle
connections when this happens so new users can connect?

A lot of users have bash scripts to check the system periodically and cancel
idle connections to prevent other users from getting rejected by
max_connections. They do this on a timer, e.g. if the session appears to
have been idle for more than 10 minutes.

I know we have relied on external connection poolers to solve all the
high connection problems but it seems there might be simple things we
can do to improve matters. FYI, I did write a blog entry comparing
external and internal connection poolers:

Yes, that would be great.

The simplest thing sounds like a GUC that will automatically end a
connection idle for X seconds.

Another option could be, as you suggested, Bruce: if a user would have
been rejected because max_connections was already reached, terminate the
connection that has been idle the longest and allow a new connection to
come in.

These would greatly improve user experience as most folks have to automate
this all themselves anyway.

Cheers,
Ivan

#31Bruce Momjian
bruce@momjian.us
In reply to: Ivan Novick (#30)
Re: Built-in connection pooling

On Sun, Jan 28, 2018 at 02:01:07PM -0800, Ivan Novick wrote:

On Sat, Jan 27, 2018 at 4:40 PM, Bruce Momjian <bruce@momjian.us> wrote:

On Mon, Jan 22, 2018 at 06:51:08PM +0100, Tomas Vondra wrote:
Right now, if you hit max_connections, we start rejecting new
connections. Would it make sense to allow an option to exit idle
connections when this happens so new users can connect?

A lot of users have bash scripts to check the system periodically and cancel
idle connections to prevent other users from getting rejected by
max_connections. They do this on a timer, e.g. if the session appears to
have been idle for more than 10 minutes.

I know we have relied on external connection poolers to solve all the
high connection problems but it seems there might be simple things we
can do to improve matters. FYI, I did write a blog entry comparing
external and internal connection poolers:

Yes, that would be great.

The simplest thing sounds like a GUC that will automatically end a
connection idle for X seconds.

Uh, we already have idle_in_transaction_session_timeout so we would just
need a simpler version.

Another option could be, as you suggested, Bruce: if a user would have been
rejected because max_connections was already reached, terminate the connection
that has been idle the longest and allow a new connection to come in.

These would greatly improve user experience as most folks have to automate this
all themselves anyway.

Plus the ability to auto-free resources like cached system tables if the
backend is idle for a specified duration.
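
For what it's worth, the timeout part would likely be a one-entry
addition to guc.c. A hypothetical sketch following the ConfigureNamesInt
pattern visible in the attached patch; the name idle_session_timeout and
the IdleSessionTimeout variable are invented here, not existing code:

	{
		{"idle_session_timeout", PGC_USERSET, CLIENT_CONN_STATEMENT,
			gettext_noop("Sets the maximum allowed idle time between queries, when not in a transaction."),
			gettext_noop("A value of 0 turns this off."),
			GUC_UNIT_MS
		},
		&IdleSessionTimeout,
		0, 0, INT_MAX,
		NULL, NULL, NULL
	},

The enforcement side could mirror idle_in_transaction_session_timeout:
arm a timer when the backend goes idle, and terminate the session (or
just free caches) when it fires.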

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +
#32Ivan Novick
inovick@pivotal.io
In reply to: Bruce Momjian (#31)
Re: Built-in connection pooling

The simplest thing sounds like a GUC that will automatically end a
connection idle for X seconds.

Uh, we already have idle_in_transaction_session_timeout so we would just
need a simpler version.

Oh, I see it's in 9.6, AWESOME!

Cheers

#33Bruce Momjian
bruce@momjian.us
In reply to: Ivan Novick (#32)
Re: Built-in connection pooling

On Sun, Jan 28, 2018 at 03:11:25PM -0800, Ivan Novick wrote:

The simplest thing sounds like a GUC that will automatically end a
connection idle for X seconds.

Uh, we already have idle_in_transaction_session_timeout so we would just
need a simpler version.

Oh, I see it's in 9.6, AWESOME!

In summary, the good news is that adding an idle-session-timeout GUC, a
GUC to cancel idle connections when the max_connections limit is hit,
and a GUC for idle connections to reduce their resource usage shouldn't
be too hard to implement and would provide useful benefits.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +
#34Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Bruce Momjian (#29)
Re: Built-in connection pooling

On 28.01.2018 03:40, Bruce Momjian wrote:

On Mon, Jan 22, 2018 at 06:51:08PM +0100, Tomas Vondra wrote:

Yes, external connection pooling is more flexible. It allows pooling to
be performed either at the client side or at the server side (or both
approaches can be combined).
Also, external connection pooling for PostgreSQL is not limited to
pgbouncer/pgpool.
There are many frameworks maintaining their own connection pool, for
example J2EE, jboss, hibernate, ...
I have a feeling that about 70% of enterprise systems working with
databases are written in Java and do connection pooling in their
own way.

True, but that does not really mean we don't need "our" connection
pooling (built-in or not). The connection pools are usually built into
the application servers, so each application server has its own
independent pool. With larger deployments (a couple of application
servers) that quickly causes problems with max_connections.

I found this thread and the pthread thread very interesting. Konstantin,
thank you for writing prototypes and giving us very useful benchmarks
for ideas I thought I might never see.

As much as I would like to move forward with coding, I would like to
back up and understand where we need to go with these ideas.

First, it looks like pthreads and a builtin pooler help mostly with
1000+ connections. It seems like you found that pthreads wasn't
sufficient and the builtin pooler was better. Is that correct?

Brief answer is yes.
Pthreads minimize per-connection overhead and make it possible to
obtain better results for a large number of connections.
But there is a fundamental problem: a Postgres connection is a
"heavyweight" object: each connection maintains its own private caches
of the catalog, relations, temporary table pages, prepared
statements, ... So even though pthreads minimize per-connection memory
usage, that saving is negligible compared with all these
connection-private memory resources. It means that we still need to
use connection pooling.

Pthreads provide two main advantages:
1. Simpler interaction between different workers: no need to use shared
memory, with its fixed size limitation and the impossibility of using
normal pointers in dynamic shared memory. Also no need to implement a
specialized memory allocator for shared memory. This makes the
implementation of parallel query execution and built-in connection
pooling much easier.
2. Optimized virtual-to-physical address translation. There is no need
to maintain a separate address space for each backend, so the TLB
(translation lookaside buffer) becomes more efficient.

So it is not entirely correct to consider session pooling as an
alternative to pthreads.
Ideally these two approaches should be combined.

Is there anything we can do differently about allowing long-idle
connections to reduce their resource usage, e.g. free their caches?
Remove from PGPROC? Could we do it conditionally, e.g. only sessions
that don't have open transactions or cursors?

I think that the best approach is to switch to global (shared) caches
for execution plans, catalog, ...
Most of the time these metadata caches are identical for all clients,
so it is just a waste of memory and time to maintain them separately in
each backend.
Certainly a shared cache requires some synchronization, which can be a
point of contention and cause significant performance degradation.
But taking into account that metadata is updated much more rarely than
data, I hope that using copy-on-write and atomic operations can solve
this problem.
And it can give a lot of other advantages: for example, it would become
possible to spend more time in the optimizer to find an optimal
execution plan, and to store plans for future use.
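
As an illustration of the copy-on-write idea (plain C11 atomics, not
PostgreSQL code): readers pin the current version, writers publish a
fresh copy and retire the old one when its last reference drops. A real
implementation would additionally need something like RCU or hazard
pointers to close the window between loading the pointer and bumping
the refcount:

#include <stdatomic.h>
#include <stdlib.h>

typedef struct PlanVersion
{
	atomic_int	refcount;
	/* ... immutable, fully-built plan data ... */
} PlanVersion;

static _Atomic(PlanVersion *) current_plan;

static PlanVersion *
plan_acquire(void)
{
	PlanVersion *p = atomic_load_explicit(&current_plan, memory_order_acquire);

	atomic_fetch_add(&p->refcount, 1);	/* unsafe window here; see note above */
	return p;
}

static void
plan_release(PlanVersion *p)
{
	if (atomic_fetch_sub(&p->refcount, 1) == 1)
		free(p);				/* last reference gone */
}

/* Writers never mutate a published version in place. */
static void
plan_publish(PlanVersion *fresh)
{
	PlanVersion *old;

	atomic_init(&fresh->refcount, 1);	/* the reference held by current_plan */
	old = atomic_exchange(&current_plan, fresh);
	if (old)
		plan_release(old);
}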

It feels like user and db mismatches are always going to cause pooling
problems. Could we actually exit and restart connections that have
default session state?

Well, combining multiuser access and connection pooling is really a
challenging problem.
I do not know the best solution for it now. It would be much simpler to
find a solution with the pthreads model...

Most enterprise systems are using pgbouncer or a similar connection
pooler. With pgbouncer in statement/transaction pooling mode, access to
the database is performed under the same user. So many existing systems
are built on the assumption that the database is accessed in this
manner.

Concerning "default session state": one of the main drawbacks of
pgbouncer and other external poolers is that they do not allow the use
of prepared statements.
And that leads to up to a two-times performance penalty on typical OLTP
queries. One of the main ideas of built-in session pooling was to
eliminate this limitation.
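
For context, this is the client-side pattern at stake; a minimal libpq
example (the connection string is a placeholder). Under pgbouncer's
transaction pooling, the PQexecPrepared() call can land on a server
connection where "get_pid" was never prepared and fail, which is exactly
what a backend-bound session avoids:

#include <libpq-fe.h>
#include <stdio.h>

int
main(void)
{
	PGconn	   *conn = PQconnectdb("dbname=postgres");
	const char *params[1] = {"1"};
	PGresult   *res;

	if (PQstatus(conn) != CONNECTION_OK)
		return 1;

	/* Parse/plan once... */
	PQclear(PQprepare(conn, "get_pid",
					  "SELECT pid FROM pg_stat_activity WHERE pid <> $1::int",
					  1, NULL));

	/* ...execute many times, skipping parsing and planning. */
	res = PQexecPrepared(conn, "get_pid", 1, params, NULL, NULL, 0);
	printf("rows: %d\n", PQntuples(res));

	PQclear(res);
	PQfinish(conn);
	return 0;
}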

Right now, if you hit max_connections, we start rejecting new
connections. Would it make sense to allow an option to exit idle
connections when this happens so new users can connect?

It will require changes in client applications, will it not? They would
have to be ready for the connection to be dropped by the server at any
moment. I do not know whether it is possible to drop an idle connection
and hide this fact from the client. In my implementation each session
keeps the minimal information required for interaction with the client
(the session context). It includes the socket, the struct Port, and a
session memory context which is used instead of TopMemoryContext for
session-specific data.
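
Roughly, the per-session state described above amounts to something like
the following (a sketch with invented field names, inferred from the
description rather than copied from the patch; it assumes the usual
Postgres headers for pgsocket, Port and MemoryContext):

typedef struct SessionContext
{
    pgsocket       sock;     /* client socket received from the postmaster */
    Port          *port;     /* client connection info: user, database, ... */
    MemoryContext  memctx;   /* used instead of TopMemoryContext for
                              * session-lifetime allocations */
    char          *id;       /* session identifier, appended to prepared
                              * statement names to keep them per-session */
} SessionContext;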

I know we have relied on external connection poolers to solve all the
high connection problems but it seems there might be simple things we
can do to improve matters. FYI, I did write a blog entry comparing
external and internal connection poolers:

https://momjian.us/main/blogs/pgblog/2017.html#April_21_2017

I completely agree with your arguments in this post.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#35Bruce Momjian
bruce@momjian.us
In reply to: Konstantin Knizhnik (#34)
Re: Built-in connection pooling

On Mon, Jan 29, 2018 at 11:57:36AM +0300, Konstantin Knizhnik wrote:

Right now, if you hit max_connections, we start rejecting new
connections. Would it make sense to allow an option to exit idle
connections when this happens so new users can connect?

It will require changes in client applications, will it not? They would
have to be ready for the connection to be dropped by the server at any
moment. I do not know whether it is possible to drop an idle connection
and hide this fact from the client. In my implementation each session
keeps the minimal information required for interaction with the client
(the session context). It includes the socket, the struct Port, and a
session memory context which is used instead of TopMemoryContext for
session-specific data.

Yes, it would impact applications and you are right most applications
could not handle that cleanly. It is probably better to look into
freeing resources for idle connections instead and keep the socket open.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +
#36Vladimir Sitnikov
sitnikov.vladimir@gmail.com
In reply to: Bruce Momjian (#35)
Re: Built-in connection pooling

Bruce>Yes, it would impact applications and you are right most applications
could not handle that cleanly.

I would disagree here.
We are discussing applications that produce "lots of idle" connections,
aren't we? That typically comes from an application-level connection pool.
Most of the connection pools have a setting that would "validate"
connection in case it was not used for a certain period of time.

That plays nicely in case server drops "idle, not in a transaction"
connection.

Of course, there are cases when an application just grabs a connection
from a pool and uses it in a non-transacted way (e.g. performs some
action once an hour and commits immediately). However, that kind of
application would already face firewalls, etc. I mean the application
should already be prepared to handle "network issues".

Bruce> It is probably better to look into
Bruce>freeing resources for idle connections instead and keep the socket
open.

The application might expect the session-specific data to be present,
so it might be even worse if the database deallocates everything but
the TCP connection.

For instance, the application might expect the server-prepared statements
to be there. Would you deallocate server-prepared statements for those
"idle" connections? The app would just break. There's no way (currently)
for the application to learn that a statement expired unexpectedly.

Vladimir

#37Bruce Momjian
bruce@momjian.us
In reply to: Vladimir Sitnikov (#36)
Re: Built-in connection pooling

On Mon, Jan 29, 2018 at 04:02:22PM +0000, Vladimir Sitnikov wrote:

Bruce>Yes, it would impact applications and you are right most applications
could not handle that cleanly.

I would disagree here.
We are discussing applications that produce "lots of idle" connections, aren't
we? That typically comes from an application-level connection pool.
Most of the connection pools have a setting that would "validate" connection in
case it was not used for a certain period of time.

That plays nicely in case server drops "idle, not in a transaction" connection.

Well, we could have the connection pooler disconnect those, right?

Of course, there are cases when an application just grabs a connection
from a pool and uses it in a non-transacted way (e.g. performs some
action once an hour and commits immediately). However, that kind of
application would already face firewalls, etc. I mean the application
should already be prepared to handle "network issues".

Bruce> It is probably better to look into
Bruce>freeing resources for idle connections instead and keep the socket open.

The application might expect the session-specific data to be present,
so it might be even worse if the database deallocates everything but
the TCP connection.

For instance, the application might expect the server-prepared statements
to be there. Would you deallocate server-prepared statements for those
"idle" connections? The app would just break. There's no way (currently)
for the application to learn that a statement expired unexpectedly.

I don't know what we would deallocate yet.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +
#38Vladimir Sitnikov
sitnikov.vladimir@gmail.com
In reply to: Bruce Momjian (#37)
Re: Built-in connection pooling

Bruce>Well, we could have the connection pooler disconnect those, right?

I agree. Do you think we could rely on all the applications being
configured in a sane way?
A fallback configuration at DB level could still be useful to ensure the DB
keeps running in case multiple applications access it. It might be
non-trivial to ensure proper configurations across all the apps.

What I do like is that the behaviour of dropping connections should
already be handled by existing applications, so it would fit naturally
with existing apps.

An alternative approach might be to dump the relevant resources of
inactive sessions to disk, so that a session could be recreated if the
connection is used again after a long pause (e.g. by re-preparing all
the statements); however, that sounds scary.

Vladimir

#39Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Konstantin Knizhnik (#3)
Re: Built-in connection pooling

I have obtained more results with the YCSB benchmark and built-in
connection pooling.
An explanation of the benchmark and all results for vanilla Postgres and
Mongo are available in Oleg Bartunov's presentation about JSON (at the
end of the presentation):

http://www.sai.msu.su/~megera/postgres/talks/sqljson-pgconf.eu-2017.pdf

As you can see, Postgres shows a significant slowdown with an increasing
number of connections in the case of conflicting updates.
Built-in connection pooling can largely eliminate this problem:

Workload-B (5% of updates), ops/sec:

Session pool size |  250 clients |  500 clients |  750 clients | 1000 clients
------------------+--------------+--------------+--------------+-------------
                0 |       151511 |        78078 |        48742 |        30186
               32 |       522347 |       543863 |       546971 |       540462
               64 |       736323 |       770010 |       763649 |       763358
              128 |       245167 |       241377 |       243322 |       232482
              256 |       144964 |       146723 |       149317 |       141049

Here the maximum is obtained near 70 backends, which corresponds to the
number of physical cores on the target system.

But for Workload A (50% of updates) the optimum is achieved at a much
smaller number of backends, after which performance degrades very fast:

Session pool size | kops/sec
------------------+---------
               16 |      220
               30 |      353
               32 |      362
               40 |      120
               70 |       53
              256 |       20

Here the maximum is reached at 32 backends, and with 70 backends
performance is 6 times worse.
It means that it is difficult to find the optimal session pool size for
a varying workload:
if we set it too large, we get high contention from conflicting update
queries; if it is too small, we do not utilize all system resources on
read-only or non-conflicting queries.

It looks like we have to do something with the Postgres locking
mechanism, and maybe implement a contention-aware scheduler as described
here:

http://www.vldb.org/pvldb/vol11/p648-tian.pdf

But this is a different story, not related to built-in connection pooling.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#40Vladimir Sitnikov
sitnikov.vladimir@gmail.com
In reply to: Konstantin Knizhnik (#39)
Re: Built-in connection pooling

Konstantin>I have obtained more results with the YCSB benchmark and
built-in connection pooling

Could you provide more information on the benchmark setup you have used?
For instance: benchmark library versions, PostgreSQL client version,
additional/default benchmark parameters.

Konstantin>Postgres shows a significant slowdown with an increasing
number of connections in the case of conflicting updates.
Konstantin>Built-in connection pooling can largely eliminate this problem

Can you please clarify how connection pooling eliminates the slowdown?
Is the case as follows?
1) The application updates multiple rows in a single transaction
2) There are multiple concurrent threads
3) The threads update the same rows at the same time

If that is the case, then the actual workload is different each time you
vary the connection pool size.
For instance, if you use 1 thread, the writes become uncontended.

Of course, you might just use it as a "black box" workload, however I
wonder if that kind of workload ever appears in real-life applications.
I would expect applications to update the same row multiple times, but I
would expect the app to do subsequent updates, not concurrent ones.

On the other hand, as you vary the pool size, the workload varies as
well (the resulting database contents are different), so it looks like
comparing apples to oranges.

Vladimir

#41Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Vladimir Sitnikov (#40)
Re: Built-in connection pooling

On 01.02.2018 15:21, Vladimir Sitnikov wrote:

Konstantin>I have obtained more results with the YCSB benchmark and
built-in connection pooling

Could you provide more information on the benchmark setup you have used?
For instance: benchmark library versions, PostgreSQL client version,
additional/default benchmark parameters.

I am using the latest Postgres sources with the connection pooling patch
applied.
I have not built YCSB myself; I used an existing installation.

To launch the tests I used the following YCSB command lines:

To load data:
YCSB_MAXRUNTIME=60 YCSB_OPS=1000000000 YCSB_DBS="pgjsonb-local"
YCSB_CFG="bt" YCSB_CLIENTS="250" YCSB_WORKLOADS="load_a" ./ycsb.sh

To run the test:
YCSB_MAXRUNTIME=60 YCSB_OPS=1000000000 YCSB_DBS="pgjsonb-local"
YCSB_CFG="bt" YCSB_CLIENTS="250 500 750 1000" YCSB_WORKLOADS="run_a"
./ycsb.sh

$ cat config/pgjsonb-local.dat
db.driver=org.postgresql.Driver
db.url=jdbc:postgresql://localhost:5432/ycsb
db.user=ycsb
db.passwd=ycsb
db.batchsize=100
jdbc.batchupdateapi=true
table=usertable

Konstantin>Postgres shows a significant slowdown with an increasing
number of connections in the case of conflicting updates.
Konstantin>Built-in connection pooling can largely eliminate this problem

Can you please clarify how connection pooling eliminates the slowdown?
Is the case as follows?
1) The application updates multiple rows in a single transaction
2) There are multiple concurrent threads
3) The threads update the same rows at the same time

If that is the case, then the actual workload is different each time
you vary the connection pool size.
For instance, if you use 1 thread, the writes become uncontended.

Of course, you might just use it as a "black box" workload, however I
wonder if that kind of workload ever appears in real-life
applications. I would expect applications to update the same row
multiple times, but I would expect the app to do subsequent updates,
not concurrent ones.

On the other hand, as you vary the pool size, the workload varies as
well (the resulting database contents are different), so it looks like
comparing apples to oranges.

Vladimir

Sorry, I am not sure that I completely understand your question.
The YCSB (Yahoo! Cloud Serving Benchmark) framework is essentially a
multi-client benchmark which assumes a large number of concurrent
requests to the database.
The requests themselves tend to be very simple (the benchmark emulates a
key-value store).
In my tests I perform measurements for 250, 500, 750 and 1000 connections.

One of the main problems of Postgres is a significant degradation of
performance when multiple transactions concurrently write to the same
rows.
This is why the performance of pgbench and of the YCSB benchmark
degrades significantly (more than linearly) with an increasing number of
client connections, especially in the case of a Zipfian distribution
(which significantly increases the probability of conflicts).

Connection pooling makes it possible to fix the number of backends and
serve almost any number of connections with this fixed set of backends.
So the results are almost the same for 250, 500, 750 and 1000 connections.
The problem is choosing the optimal number of backends.

For read-only pgbench the best results are achieved with 300 backends;
for YCSB with 5% of updates, with 70 backends; for YCSB with 50% of
updates, with 30 backends.
So something definitely needs to be changed in the Postgres locking
mechanism.

Connection pooling minimizes contention on resources and the performance
degradation caused by such contention.
But unfortunately it is not a silver bullet that fixes all Postgres
scalability problems.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#42Vladimir Sitnikov
sitnikov.vladimir@gmail.com
In reply to: Konstantin Knizhnik (#41)
Re: Built-in connection pooling

Konstantin>I have not built YCSB myself; I used an existing installation.

Which pgjdbc version was in use?

Konstantin>One of the main problems of Postgres is a significant
degradation of performance when multiple transactions concurrently write
to the same rows.

I would consider that a workload "problem" rather than a PostgreSQL
problem. That is, if an application (e.g. YCSB) is trying to update the
same rows in multiple transactions concurrently, then the outcome of
such updates is likely to be unpredictable. Does that make sense?

At least, I do not see why Mongo would degrade differently there.
Oleg's charts suggest that Mongo does not degrade, so I wonder whether
we are comparing apples to apples in the first place.

Vladimir

#43Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Vladimir Sitnikov (#42)
Re: Built-in connection pooling

On 01.02.2018 16:33, Vladimir Sitnikov wrote:

Konstantin>I have not built YCSB myself; I used an existing installation.

Which pgjdbc version was in use?

postgresql-9.4.1212.jar

Konstantin>One of the main problems of Postgres is a significant
degradation of performance when multiple transactions concurrently
write to the same rows.

I would consider that a workload "problem" rather than a PostgreSQL
problem. That is, if an application (e.g. YCSB) is trying to update
the same rows in multiple transactions concurrently, then the outcome
of such updates is likely to be unpredictable. Does that make sense?

I can't agree with you.
Yes, there are workloads where updates are more or less local: clients
mostly update their own private data.
But there are many systems with "shared" resources which are
concurrently accessed by different users. They may just increment an
access count or perform a deposit/withdrawal...
Just a simple example: consider something like an AppStore where some
popular application is bought by a lot of users.
From the DBMS point of view, a lot of clients perform concurrent updates
of the same record.
So performance on such a workload is also very important. And
unfortunately here Postgres loses the competition with MySQL and most
other DBMSes.

At least, I do not see why Mongo would degrade differently there.
Oleg's charts suggest that Mongo does not degrade, so I wonder whether
we are comparing apples to apples in the first place.

Postgres locks tuples in a very inefficient way in case of high contention.
It first locks the buffer and checks whether the tuple is locked by some
other backend.
Then it tries to take a heavy-weight lock on the tuple's ctid. If
several processes are trying to update this tuple, all of them will be
queued on this heavy-weight tuple lock.
After getting the tuple lock, the backend waits on the xid of the
transaction which updated the tuple.
Once the transaction that updated the tuple completes, Postgres unblocks
the backends waiting for it: it checks the status of the tuple, releases
the tuple lock, and wakes one of the waiting clients.
Since Postgres uses MVCC, it creates a new version of the tuple on each
update,
so the tuple all the clients were waiting for is no longer the latest
version of the tuple.
Depending on the isolation level, they either have to report an error
(in case of repeatable read) or update the snapshot and repeat the
search with the new snapshot...
performing all the checks and locks mentioned above once again.

I hope it is clear from this brief and not very precise explanation that
Postgres has to do a lot of redundant work when several clients are
competing for the same tuple.
There is a well-known rule that pessimistic locking is more efficient
than optimistic locking in case of high contention.
So Postgres could provide better performance on this workload if it were
more pessimistic:
take the lock not on the ctid (the identifier of a particular tuple
version), but on the tuple's PK (primary key), and hold it till the end
of the transaction (because until the transaction completes nobody else
will be able to update this tuple anyway). This trick with locking the
PK really helps to improve performance on this workload (see the
client-side sketch below), but unfortunately it cannot reverse the trend
of performance degradation with an increasing number of competing
transactions.
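
In client terms the PK-locking trick looks like this (a sketch using
plain libpq; the table and column names are illustrative and error
handling is omitted, and this only demonstrates the locking idea, not
the patch itself). The SELECT ... FOR UPDATE takes the row lock on the
logical row up front and holds it to commit, so competing updaters queue
exactly once instead of repeatedly re-checking new tuple versions:

#include <libpq-fe.h>

void
update_with_pk_lock(PGconn *conn, const char *key)
{
    const char *params[1] = { key };

    PQclear(PQexec(conn, "BEGIN"));

    /* Pessimistic lock on the logical row (its primary key), held
     * until commit; later updaters queue here exactly once. */
    PQclear(PQexecParams(conn,
                         "SELECT 1 FROM usertable WHERE pk = $1 FOR UPDATE",
                         1, NULL, params, NULL, NULL, 0));

    PQclear(PQexecParams(conn,
                         "UPDATE usertable SET counter = counter + 1"
                         " WHERE pk = $1",
                         1, NULL, params, NULL, NULL, 0));

    PQclear(PQexec(conn, "COMMIT"));
}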

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#44Vladimir Sitnikov
sitnikov.vladimir@gmail.com
In reply to: Konstantin Knizhnik (#43)
Re: Built-in connection pooling

config/pgjsonb-local.dat

Do you use standard "workload" configuration values?
(e.g. recordcount=1000, maxscanlength=100)

Could you share ycsb output (e.g. for workload a)?
I mean lines like
[TOTAL_GC_TIME], Time(ms), xxx
[TOTAL_GC_TIME_%], Time(%), xxx

postgresql-9.4.1212.jar

Ok, you have relevant performance fixes then.

Konstantin>Just simple example: consider that you have something like
AppStore and there is some popular application which is bought by a lot of
users.
Konstantin>From DBMS point of view a lot of clients perform concurrent
update of the same record.

I thought YCSB updated *multiple rows* per transaction. It turns out all
the default YCSB workloads update just one row per transaction. There is no
batching, etc. Batch-related parameters are used at "DB initial load" time
only.

Konstantin>Postgres locks tuples in very inefficient way in case of high
contention

Thank you for the explanation.

Vladimir

#45Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Vladimir Sitnikov (#44)
Re: Built-in connection pooling

On 01.02.2018 23:28, Vladimir Sitnikov wrote:

config/pgjsonb-local.dat

Do you use standard "workload" configuration values?
(e.g. recordcount=1000, maxscanlength=100)

Yes, I used the default values for the workloads. For example,
Workload A has the following settings:

# Yahoo! Cloud System Benchmark
# Workload A: Update heavy workload
#   Application example: Session store recording recent actions
#
#   Read/update ratio: 50/50
#   Default data size: 1 KB records (10 fields, 100 bytes each, plus key)
#   Request distribution: zipfian

recordcount=1000
operationcount=1000
workload=com.yahoo.ycsb.workloads.CoreWorkload

readallfields=true

readproportion=0.5
updateproportion=0.5
scanproportion=0
insertproportion=0

requestdistribution=zipfian

Could you share ycsb output (e.g. for workload a)?
I mean lines like
[TOTAL_GC_TIME], Time(ms), xxx
[TOTAL_GC_TIME_%], Time(%), xxx

$ cat results/last/run_pgjsonb-local_workloada_70_bt.out
[OVERALL], RunTime(ms), 60099.0
[OVERALL], Throughput(ops/sec), 50444.83269272367
[TOTAL_GCS_PS_Scavenge], Count, 6.0
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 70.0
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.11647448376844872
[TOTAL_GCS_PS_MarkSweep], Count, 0.0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0.0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
[TOTAL_GCs], Count, 6.0
[TOTAL_GC_TIME], Time(ms), 70.0
[TOTAL_GC_TIME_%], Time(%), 0.11647448376844872
[READ], Operations, 1516174.0
[READ], AverageLatency(us), 135.802076146933
[READ], MinLatency(us), 57.0
[READ], MaxLatency(us), 23327.0
[READ], 95thPercentileLatency(us), 382.0
[READ], 99thPercentileLatency(us), 828.0
[READ], Return=OK, 1516174
[CLEANUP], Operations, 70.0
[CLEANUP], AverageLatency(us), 134.21428571428572
[CLEANUP], MinLatency(us), 55.0
[CLEANUP], MaxLatency(us), 753.0
[CLEANUP], 95thPercentileLatency(us), 728.0
[CLEANUP], 99thPercentileLatency(us), 750.0
[UPDATE], Operations, 1515510.0
[UPDATE], AverageLatency(us), 2622.6653258639008
[UPDATE], MinLatency(us), 86.0
[UPDATE], MaxLatency(us), 1059839.0
[UPDATE], 95thPercentileLatency(us), 1261.0
[UPDATE], 99thPercentileLatency(us), 87039.0
[UPDATE], Return=OK, 1515510

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#46Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Konstantin Knizhnik (#45)
1 attachment(s)
Re: Built-in connection pooling

Nikita Glukhov has added the results of the YCSB benchmark with
connection pooling to the common diagram (attached).
As you can see, connection pooling provides stable results for any
number of backends, except for Workload E.
I do not have an explanation for the performance degradation in this
particular workload.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

ycsb-zipf-pool.png (image/png)
;����x6C!�34��Y�����=���`Q*c����*��A#  �h4VVVR�c�`F���~<x&O�l=f���"����3J���1�D2���+^aB����@�G����#Gn��������.�u�p���[�>��S.\������_^y����III�����7�|�?>r��
6��������'�X�z58����o��������������TWWgffN�2E����Q�����7g����Og���s����������
+((�����a���+�(���������C}��v!��E���2n�Z�/rv�����e��W�� O�V1(�����]�#���a�����G�/*��p�h����
&^���^���;���	/O�;2g`���#IA+�<��������RUUu��E�e��Y���;799�����(�}�v�����OQ0��l����qe�7aZ�.%zB�#������@ :"�l��)?�Rk�d2�<y��^�r�u�u�r�����h��_~�:�I�|���;w��[����+�7o���w���� 8���l��%P[[���o[[RQQaE�N{����l~�uc(I�4�p\g����r�
Nh������!u>��c���>�����e����#R������5���k��[�9����1���{�(�c�
�K��
��Ke+���j����Eu�&��X4^�(uP��S����-�m�d�n�0/��_�+V� ��?���og���������?��A/��"�Q���f����������9s��������Y�f����:�+c��.&�
4N�����B�@ �9996=T���r�����:��}���'NPY��1�L�`���d�&����2[0e
���u�V�������e��� |���
���O&��:�������d"i��#��(oW�����,��l6[&����f�`�4����aB;W��AJ,��*��r�f�������c<so�`����E�/�����'
�0O�nva0b�����C�x<��i��rt:���$�B�W�J�G�C"Ux��Y����V��I���nv����]�Q�����j ��3V�4�,i��at	w`�(5H�(K�����Qq��q��$�7h/W(�&��0/��������G���={����/^�x1�y������ggg�����+����f�JOO�z����������4�;H�����h:���Y�@ �=���6�Z��&66���O�{�aPAAA6������md0T�.�86��������{w�=_�D�V`0|>_$�:����f�E"Gd����H5O����q\7�F�1k�
{Xq��N�o���r?��U�P���o��v 
��e���?�H��;V�f�`�m�[jS�E4�
���:��q�N��x�I��f���dz������^IJ��r�n�HIr��d7�Q�S�o���.���j���q����?Q������N#L:����H����kr�W~�bH�����I!�Ge���;
h�{������&L�������.]�t��E�����F��={vJJ���#�l��7>l0�:�;H�V��x�)%�A �#���^H+�6QQQ���xii)�m#�466�z�������=���w���e���+�(�
:����������������EEE�h�����HR���TEEeN��&�|~TTTQQ����������8������t��oJ����d2���kj<�PtF'��'z�I�T����	w�~�{�u���%I��}	�p�]����SQQ��l�\.�W�^j��;g�d2�����W2&&F TWW�w�?��Z��2��t�\��������T�0fB�����3I����!:����aGI������A��P�M��t:��N������p�u�$�������m���#��]�)�w����=q��t��L;Q(B����>�0�{�I�7_���2�_���Ax��)#���K�K$��?������~��w��z��:���@��J�y���S��y�Ga4F��-4�ky�R��B����m����d��%^
M��eD��Q.�k���eK�4��3��e�#�{$j�
=��������Rz`D��
d�f�h���M��@ �
-�������($*��%��
�qp����p�"""�i��K�ILL�x�buu�SO=E�p��3g�,[�l��!'N����}��)sb���c���HO�:O��V��������	�O��5�D����N�p���&�e����fx)q	���r��;3��#E?�!qK�$����7���wspBkN��j����N�6�o7cD�Q*��%�g��!�NB�����f(d��1T�&L�������\����rB?~|�����������	��M���4h�u�k���������o���
�����^3�t��[Lh$�|�=�)Cq��k��Rs3h�rA�('��
e��o�����e!�3=1/���C �+9{���$�����@��O����/8/r�&�������[|mE���
7�Y�����j�P�@��#G�@HH�����|ll��O>i�{���)S�����f�����}{������/
�z�������c��������SSSg��I}�~�����1���q,�}B�(@�1O����=�����E�����znE�a�9w��5V�{ �8�8"r���>��/�h��F�A�G�������"%�'?�k�:���&�8� g�����g[��
�Y�P�@��={��_|a� �~��=z�N��L��k��}�/��B��@FF�Y��0l���/����q�k� :�E��������������}���u��o=�����b�5p��������,j��H(�
����n���hnrd�C&���+z�juV��AW+�eS;T<uz����XG$HB ��e:������>o���{���/���� ]��H%C
V����l$� _`0�����T*6���?��{w���yyyW�\�����k��]s������-2�Laaa�/_�v���***>����g�>|�R�x�D��~�A*u�u�1c�@CC���'�� �_�i�'�S`Y���F���{�P�=��0f[C5G����>u�)O��pBw�������#���?d�g���B?��.Y���b1D�]��n����v>4�m���M��c3����Zv+��n�3���i�	�*�ST�q!�=
��J���c��Q(��I�A���4���Jc���A =�.�=��o�:thXXXXX�_VV��k���J���,,,������������o_���X�`Azz:TW?p��=���}Q�����[*}�7N�:E�g�T/�H�����4o���� M�Q�;��!�������=
��"����C�I����v������Sz}b]T.�h���U�^-�'N��D��R��:]�e�������G�9��_at���L��wI\_���>�
��p��L����?�KkD�D�Vcfw���+W�\���d2��r���������o���2�L�T���eee�F����4����������������t��3grrr@.�CQ�M� |���LQQ��/_�B�����������D���z$�J#0�*�J_g"	'3:U������C�RM�4:t�{w3^�������vC�������.�M��IA+B��f�,�m����DW������r��_r�����s���91���&LM��	�h���m�:���<~��'t�$n�����V!]��d�f��tV�b���!I����W�^���N�6-!!���t��/^�x��y�\.������2����p�e.\�0`���+W���o


K�,����o�>��y��Q���\=M�3-���@(�
}��c���;�\;��K5���x#]u�����;�
���y(����~���$�.�����7����"�*��L�q����b��}���@�Z��&LM8�%q
!��pk�(�����YAc-O��&�%������*�K#��!��{�x�����A ��v�Z�Bq������]/��2���������y���~n��k���_����+W�P	�333���C����6o�����oU������..a������k
ah*���%�$F�x|3�%p:��R��s�����m����d��'����+��n�#��%4��y�LQ<G2��\����g���I��^��*'rY~�����Rv���`X��o���;�����`�Fde���i{���Q���[�R���c��M�4���1��~��}��Wo��6�[�N�T��L����)�����/|��7"��*�����l�2j������[j����_'�Tb���n�*��z<��5�����_�j�%Q��|pw�K���k�L�g��4��P�z���<f��1�%
;/��Mgr��%��p]y�@d��7&
�-3j+�2����)3j�
�r#�is�'���)�:Y��{������!���-r��05\���v!]������Em�e���_�MJJJ������>h����L0�LT�W��o�����K���}Q6n����3}����{��sg����`6�$���s���U�V���%`�Bh3����K6�e�C?��G5��M��~e�a���.e�������:1�
�<u���	�^����v��C �(�����/,����qla�����?��T�eFm��0���cb4�es����,�I���"��7;�`4z��m��I���6�:�����]�)���l�,�:�Kk��dffFGG���k?�pTT�P(�w�^vv��m������ �%�e���K�.]j���g�}�I�b��[7	v=�B�!��;��^t
��0,b��7�����m���T���b����b�%/)�.�m/��s�/6J|h!�������/&''����S�N����55'�
6,Z������w��=�����^�z��#$�������m��y���7G���S�\�Gc�.�����SH��h.-�)i�X���Y�8�����k��4eFM�A[f�!��:Qja�(���1%��R����������r���/���^�KxB��M�&���Omeb�x���/�A �SQQ���o��
�Gh�(s����{�����������b�4X�@�6F��_��)c���l��u�#_k6L:������}K�I��L���}_��/:fy�"�2=��������$�M���|��G��/������/'%%�������5j�����1����T*m����F����.�Jkkk����N�:u�����W^y��g����t"��������0�!L^(��rX�
8LNS2����_
h�I�HG�Q��$��w�T��
�0��J�$I�8ij�?���~=���$~�����Y��u����r��~����h�Ol�]"�"-�g#e���R�uAm��f|�i]S�O-B �}� �h4���{o��5����:s����,�8������&�����%���o��&s��F�C�4�HWd���A�����)2�El�����4���#$s�K������B �`��a��/'b���?��#A���/�|���������F������R���k�\.w���R�t������p>|��c��-[�k����,O���@������@�pp2}����)M�
�����_��
�i�rS��L�����x�L^(������A��f����;�P-�����!�(/�����&�����.�C|�=#�"EM�= IDAT$�s�5MH\��	3a4�.�E9�K����zgpI�XZ{���F��0N#�$������^}�)�����)D�%p�3�4�g���Y�!����)���wr|��H�����9�����T�c��p�6�2c�����3f�x���^y��+V�����������?��~u ���������y�i��y���,sS��M����u�>���4j�2�+{>9�c&]h����''��Y�������f���*�@��s�=�}�������Z�d����G�����999',,����HKK���z���)o��g��Z���O?}���=$�P��}��A��p����g�6���O-$M��RQ�K�	P�J\��7u���2x!Ln���6x!L���Q���f
|Yqq��~0�C�����
~�!����X����Vl�t�1��C-��f%u���*�&���L��Dqv����'�A��Y2J�Q�]��E�@�O��nkjj6m��i�&�@0e���{l��yK�.mll�����{333�]�{&���-4*�
|�)Mw���>	��j2E�a��[������	DA��{����/��O��IA+�%�g�,6X�E � !!N�:e�i0���bbb
����a��[��g����{�n�����w����>�(�F��g"Wb���!n�����y��	A����uE�L���������6�"qR[n���N��0n�gQj��/��u���]'�����L�fE�0
s7H�<M��6��ss���S;d	EeG}m�hj�:??�W�#gDg��G�j�z���;w�d�X���{���f��9�|�V����w�����w��]:[d��$0�*�����
���V�{O�6����,j�������������EH�	�����	�������q��RH@�2�6�����>}���36��������;w���r�<//������7�����'NX������(::�J4��S�H,!*AZ���Y�	� ��,Ug�����a ��
M��~��6�Do�S��|S�N��Y��5t6Fc�����0�=t:��~�hC`�#0��o�s�9�M��S� m��Y��B:��M�B z�R�m{�/���v�^��:Tz	��b�L&�Z���Q�
Dg��~��!33333s�����
{���������p?q����{���{���~��������?l����\)_��'4����.4�*��W�KfH��x����1:���x���_���m{v��)i�Y�<:(��8���.\�3q��y);}f�+�����5j�#�<��jO�<	111��K/q��U�f���j��-[�,]��r0		��j�d�����������E��j-��|����������|�
����=ah��K�h^���-�D*��t����[����[��]�$((�����w���`[�q����?��!�u�������e6�e�8{�W�1hlg_�0�cSi@��Ub0,������d}��H�r6�%1����������`8�}������^��S�0H ����q�',�x�3����BRn2�\�9sBD �???WnZ<��3gPl�S�a��A�9s���3��/OJJ����������:����{?����Zbbb&M���k'��+�K<Z������
�J��a��M���������j2E!��|R��Y�Z�ex���%w���?>�*w���U�@�9�,�h�f����h4��5k������I��-:t���'N����.\X^^��[oaF�2-�|(�����k�����7w�\s3   11\�zp��8&PK���y��H���d���e��|����F��d�_^�6#����7~w����Vps��L������?N�s��X'�T��e\�d���& Y8`�"�I�v�j�Cw}�T�X���E����mP�&���@ ��K�gL$I2/////�������HKK{�����Y�Q����0a���k��{�q�|��K�0�<s�X����+���e�����	�������3��k���[��O��������W'��a�P ]�QQQ}���9s`��������s�����={��?����uk~~���g�x���������r���m�O�����oll��a��9t�����s�0���H��s��L�� 
����r��=j��� MPv��P��$�u:]������r���\��(�Z�`��LVWW�^�%���H^�.cd���e-���LHH��'[�2�/
<q�,���*�?�](�����;����q�V����|~pp��`(--���z����~}���y��Bn;�7g�<@_����x��EII��u���['�u���-[���7�Fs��J�O�8��-~�B#��)�u��x��AL��AL�3�Q�9i����?&Bs�l����a��}
�p4��H�S�}�N�N��W��}6�}������o��3���m[�)���'O�3fLrr��jkke2�X,�q�����aM6���\��z	�G�2999��#��d5@�d:��i���c��U'U�~w�.L\\��~�^K.�k��I��!0KNN�t�R�D�������L��������oo�4�2c����x��������/  �k�)Q�����Q"""t:���iBCC����Zm�_�`;%�[�����R�*���i~yQ5+�@t	:F�������_tttLL�F�)(((,,<w��J��t�������ghTq��\)_��',�3g����/Qh�/U��0x�;T�){���]|k���m�>x399rU���v<�x�D45h.�ir�5���Sj�/��H������1�����W_����\|����1c�$$$8p���J&�I$����������,?K(F��3�F��<^��RVv��)~�=����M�����)�z�A���=��i|����@b�8r����e��} �Rg���@ �NN{E�!C�|��'�G�n�K�R}������^�w���������e\)_��'8��!�����2Pq���������?h~�����{������v�:G�0h|�`�\0�jjwk��k��k���k.��7\��.�K)2g����_�r��]����t0�l/Tfee%TTT$%%���Y*�0�L���s����(#1��R�f��2��j���^��pt���?��q�����[���ya@�+��������8�X3%����L�V2\�@ DG����6<�����fEF��������SM�P��[o�8qB$��L�qR����]�60
,KT3���W_��$L�{�#qK*��������$��ht��x�^�9)a�<�;�/S��	�*�������{�x��#F������n��@@@��h���d����aXJJ
������`����c�.���9�T*=d?Oby��!
u�t.-h���YvPI�:�2���W6O�v������+����U�o������m�?a�)t�B����{�y�e|�@ ��}O���{o���F����M�6}����o���\���SW�^����C�~����>��*?��/���kS��Hb�b�J{�����{���`@t2����=t:�����r���K~�G/��|���~�^{��x6�
�
�N
F6��a�b���*��=��1��)/�S�7��L�h"��k�4�	�|>�7��a&�V%�Z(111**��#H$�2n{x��`���TF��TUU]�x199y��ek��5���;799�����(�}������O>��G}D��`�l�2��qc;�tGc�������":�"��e��%��)�T����g�X�G=>���d�Z���J�Ze�0��h0i]�Q[a�U��&]�QSa�U�g�f�u��@ ���J��o��b�p�6m���G�wi����w������C�����gV�^}���v[�
4���%��7n\\\���O������ m��l+ ��9{D�6���������:n4�2���	��O���ws�N����;�����w��M�M5��$����;�)"��,�Ph������n?��5�b��2P��#�@��wo8w����s�=�{��+Vdff~������L&s���/��"A/��"�����l��y��E���[�fM}}��O?=k������;w��N'0�a�� 
�k(	�fyY7u�oNBX�Tz�p����4���M��yP��o
C |B�n2�����G{<� �-�f�����>G'�(�?�@ <��������/��Qd������/UUU4-55����w{-�q���%��9q�����Nu>���F���e��-�������%8��	r�.BG����ls�����&������N���~"���Z3p�@&�y��M�������0�+�Z�x<^RR��/^t8J4�	�<Z���"b<��Z��L~�j(�
$����*�b������


�k�	���aaa����o���rt:=44����k@�>}AQQQ]���m��d�v3���c}��d���Gg���~����/^���u������ggg����+����f�JOo�U�����������.La�9�y��(�K��{��#7��*:~{���y�0F�U�0����.>��@�*5��`��~�Cz��X�������Y�?�)h�!��'�����6�q}�I_g24PM���d�3�z7�D_�<��@�@����Q%,�p�E*������?jkk/_������D-��J��6�8��t����.-�H�r�����4q�u�������>��a�+�9�0X�^����%�-!����cv������d�������cEW			���v�st8�$))� ^���-.3���"����R�4��i0���|zs�mcS
��[�9W��[�>e�=����f����T��}T\��d��|��QQQ���������#�+��M^^��P{������&L�������.]�t��EmN����=;%%e���l6����v�aX�mi����D�����e�he���_��~r'Fk��V�����I�o�B�@t{�CF����p����o�ozH���0N��t4��D�kL�zS�dC�5����d�7�Md���Sr�h��Mt/�B���)�,�H�;��R�/��(S__�j^*�B�p>��p�|�GK�`,�����Se�����G�>3�a�7~�i�~s����U]�������b?`C��-���F��S��>���21y r 2�$����k�Jng5h.������N�N�������p2�$�����������	�%}�D��� �9��O���u��4�H����Q��5��2c-�L�9� zbY�y;Xm:�k�����7�":/��
eq���a��~�|���G�
0*q}���"�5F}�)�	i���4�n��e��Z��HYC <���FYYY�?�����:�h���U[���3n/�&<���:y����Ls�M�W���RG�o"�<E���mI��w-���K���3�V�������a]�45j�7j��n&],��F��R��,���/��9�bNb��|0��F��juV��tmS���
�$DO��gI��H�;���{�����t��v�X�E��`v
&"]�� �,_�2����
bF����[�~@��B�+n�F
3��������0Et��U[�����0����g6����t�5��~�������`8�q��7�_:$&�C���"��4�?��OF��~����4�@�z�'B���qp�iT?��M�UL&�|�whd���fpI�����8�4"e
����2k��MKK{��W:d7���d�����`?~���+�0�
�R���%N0+QFh TR�����2���������|H5<Yx����g���.�WT(�V(�s-	��T��?/���
s�
L�P.)���Zc%��kr�498��44���7<�hwo�Y��A�N��-t����/}m�K0��[��Y
�H��-�	���<��D8�{:1��-g�$������R[�`Kl9������*��X����,pv���D)R^��`j��"�� ����>��&��rD��rd�I�;J�����0��&f}�Dn@c��9���G������
��p_�9w����K��_��o�m����������������O�2e������EEE.lg*M�q�|�GK��X"���D�JC]gw��<�V�0M1�j���!�T��}kU�A�/R������&�����
�()g8��|.�!�!�il�^�V�����kr���a=��#���d����@��P��UL�$C�-w��!�������$���C�O���1mu���-g���<��]uqy*~:�e��Fm��e?C@���,	�-g������g��t�����Y�=�6�M0v���L���MJ��:~8��=���"3H�At'�ev��������K�,Y�d	I����"��� �H$:q�D����mv�tW��x��	�ie�FR�W�:qN�fH�d��}�~��lv�
�����Fe�o��~�u�:��TWVU�T*��L�9sl�aL)/E�K��Zce�&�N�[��[��2�^�T���p%���d�
V��vEHx�,a�i�/6��n*=�t7��h��R�K,��*@���j#z�������]��m�o�o
��Q�����N1�	�Zw`��l����_��382�g\E�������������g����C����F��J�9r���?���o9�F�=����>�l��}�tz^^�����m��b��>���_MKK��� �.��2�?��M�a-�,�d2�L��*����qT����%�+q�`	��B��)
�8��]����������t�_���+�6��Vu{��-��
OO�-���H_��p��N��A�A�	`TUh�r5����,u�i�������
K��xWXqXRF�Y7��Sj�
7C}K`7n�h}�k��Qi�W�y���KK��ct`��_�0����$���N�G����~�>���C��hK���B�g�����4��A|>_���8Nc������k��
|�~MN�S>�r�1ow����E���_O�7������-Z�p��~����|>��_�0a��g��a��
�0a��[�:�D��}Q��w��@;:W��x��	�iqv�r��x�@��O���D�����i���]��[�zN�����'sm��u��J�&�F���P�f�3��_��p��5m�f
��	��	���
���4���YC��_}�����{�v�\��T
AAA��Ha�����:��d����Mt�E���������h���1�MT<����b��8�������h�O��:��p�\�����_:�.y-��q:� 8�I� ��e2YcccEE�!1��O2�"1G����1���E/�FF�co
5����1�����`�d??��R�����0���?���W��;!���iii$I~�����c2�������X,�?�p��m����7o���������d��!��/_�`���w}w�.���������vt�l���H���.�)@���N|�
���O�i_��~7(���8�<���K�
M@eNx��1�����F}eF�"��=:��`t��0~�a#��������3Mw�i�s�
��h4�;w����z|����555��)%���J�����"�����t���^�(r�0>�\XX���6qs$�?m�y|����BPPI�UU�xN#�J��a�U�=t�qqqeee������0J�q�3�~X,VLL���^r�?~�������������Q3N�]��xT�v�1�t��LTT��`�:u��c������<xp���l6��O?3f�?n��9s����~��oP�{���={����qq�uU�1��2�LK>-6��h&}}e��EeG����gT��G��T��#��<�G<�-�/�
���F	�Q�L6��`����@�&�>�\��Q{���(D��3�<vu�
�V�L�bQ IDAT�������4?lFw��2E!~I���fI������,u�iMY�����#<
I��A���*b
������Z�%B��P����g�<�@��jW�l�E�����
��D����Y__���p����F��I��w/x��2I�F��;g��c����/P/�����������L�������#G�l��i����G�����D�%K�@mm��o�m�@UTT�Y����������J��2�EGG755����:r��`ZBO����	CW���>��_b�0j,��N�%?S��?�Z���D4U����YT��,����#����\cP�Nq��`��Mj�N�7�V7�6��qk��<0E�$�H�A��:	��0;vlHHHnn����fh,>/x/4E1R5��p2�18���D���`TUj�r�L4��xCt"Xi�U
��C@��k�����4\����M������Q�Lg~;�Y����n��n���K�@���)Q&%%�n��������������r��@ ��^Q��b=����<�LRR��G������#k�����k�]����A���J���(��L��W�ALa�~���Ye�Zf����$�$`��&'�.�������"s��������vs8��mK]�3��!�@��%Y����3��)
��D������D#�d�!pRy'��YuIVS�i�D��-2QJ=���V[����:Sl��6]B ��In����_�!I�n����B�^�f������`DFF���z�����w�����3i�(���1`�������,X���O����_~�e{V�R�tL�FW��H�_k�
w�2��kz��Fg�"g�������9[0�����g��L����
���V��	��v��l�M
��u��zMn����P��^F�o��'0�O����O}7>�*��Mu������
y;�A'A�� ��D�F��b�����N�Tv���tSY����s���6�a�TV�C��3#�������u��|`��TVV�
#���$>>>&&"""��w��q����D��/��X�C�������6n����_ZZ������RSS�|�I&���_��������l�����dU8c(��(���'L�M����	��|���o�B�#�TI��RUr��.3$@0���$��cl's4>�-�jj���\��S��4N���������+��3\*o���{�����"�)���1�]��(&U�
$��ko5��"'��	B=�n�����3>�jm�3��g�	i�J1H���k�(�\y�+"\r��E�*�
ERDJ� @�$��J��&��l�����n��f�;i�������9g����g�y���&�v�A���'�L;���r� �R,�J��$�LG��O,��F��8�y�����[o���{������w�}�|��C�}����v�����2��fa4�*���%E����9����,���A/���n����0�������$xn��ft��+�M�y���~�����w����9����/�2�Y��5{�e�M���VdP����k����D�=�����e]��]m�H�:��N4zu���{�hr������u*b���Dd��{���2D�3pJ��$���GY�uxW$������7������������j�������S 99y���
�o����#�dddt��e��G�u�����PZ�4HJ�y�3(PW�S����I��/)��$`N�V	9��2��c��"������$�C�%}���5M2�-��d0�����������=r����Gk��U59zS��k�Ze2���-��&���^�=�
��E��I�������N4W���	9�A�fc���3.��!���;�����Z���D/�Y~+++���CCC����Zm�BUQ��xP&&&~��'��_�p����QQQqqq�!(c"�������K��g�t��$�6�~Y+���WR�Y������v���Q#�O�=P)��T�`��@'��l��)���[k:�,�U]���uam�~����7,;�H�>,P����F'���F-�*B� n��V�j�j��K�&''�s��x<��e�@���>}�^��/�,Z�())�G�/^�<��E���0�H���r�����YvvvTT�B�p��v�L>��$zy������i)���ev��L��V�	��P�����eJQOzF'�!�������P�g�\}�Z����V'y0e���3r��ATO�N4L&�fu�}Evu��������k�c'd'�0�T�Q6�!B9(&&f�����MS�T������LOh����j4z�u����7O,���w���t\� ��������6&S{�D����LYY���o�����=��6A���,�ulfB�,K
T���]������o����j=v��<&Mi������J�����q�mwq<��u?f#[���T�������8Y�A�L.y/f�E��-
����;�H���Md���
�C�w��h�/V���=Q���F�Q�0�`��&B�HYY�F�3f���w��;��r{��E��MKK{��w�-�������i����������/��������������f������9��A��'O>��������}{uu����~�i:7Rjj����X���)�X�^�Z(;��~P0.�Yi
�wT��@�z���l��lR��3��U��8���sc����1e��KJ\��5e6����O�G/	(
H�D6��d�����{A��u������.���
��?$������T*��a�6m�4l��~��1�7m��`���o���m�v�������������S����w���)))PZZ�r�Q{�xPf��
�>�h��=w���`����/[�+f���~�zHMM�r�JKk�pI9�,�3��UM����x������������"G�J�l������!Iu*/�@�����i,[pY�)���'>����sr0>�l2���2IA��
�,��e�"���L'�gw������Dc�{���#y�s]um����hNkq���N�i{���j����#F���{��r������?����op����n�����#11������������`P5��A����o��i�������<r���k����e2Y��]L��Q��/����d�	I>��^b�J��`�����~q@�8 �o����,��=��{5���L������5{=��s���0[��PDh*j����1Q
��x&���u�*�{,���3�^�5L(���2��������l��W�p���'��/��Zv�a2�����s�2l��:$��J\ip&�� �����s���;w��6&L����������6���


�����u,�e`���������*AC�:t��7n�x���/]���R�������`�RS���a��=�{�B�Mz������{U��k0H�yYe��Rwq�{�����<���x~��)��)`�U����'�^9U����&���lg�,�����f\��f?�k���>Zs�o(������^7:1<�������*��N��:e~�.]d2Y�;�S��B�*���p��`����-d!@�����D�ZP����P�egA_��
�dw;I�R
�-�2����W��G��w}Uz����y��������u��z�j�Ju�������-X�������C��V�P;�����h|���6l�0}�����������J�������������/uu������`�F���-o��lt�iG S�<��y(sMA���n����E6�@���h�n<[�C�V�	���=n��t�`:1
�T^^�����fb��~��'������UW������D�aaa���UUl�������5�=_E���g��/��l����d��������,�P(���n�������I/��>����Q��E�j~)������'x���'��)��;Q��Z[p�����]5�t������a���������ZrZZ]�w*|�{����*G�C/��Zq�����B�������/_�P(:�t����������s����K�v�Z�Z���D�L��2�[�n�S�7� �N2|��}��
�=�os#2C��������#�1���nP]�������8�����{�`�������������Tj{_:[p���Y��({�`d���e�����*��k�&���E��fv����#v~-G�L&v�2�/)�_&;��IU�S���)�s{�����{Sn��lw�����b�p�y�u5�E��g�s�kn�e��&��%�l��Nr�� �������^"����V�B������2eJJJJ���8F��������._��5���'��2\.�hl:�����o��!��������e,'W�<y�H$���?��x#�Bi�����nQ��nA
�+�����G�g�F]u�q��=UW~���sm�Q;a���jN����_�8
OI���aq��d���{���A&e��F-EP5��F���Ze6��h��o�P!	H�v}X�/��0G�fc_�/�3�x'.���N\[p��N�;k�����S�'P&�2�������8��+�:4$$D&���s���S[�l��{wg��#�9���e���>k#.#�H�-[��+�0���H"�DFFM����2�����(��]Y;�������@������G^*�h3mI�
�A���AH�P6R6���O����A�Q(;T������1"������"""����4���l~��U	�[
��<C/!D� J��l��L<u�4��N����V����rT�Y~Q�c���Y�����=#���F;E;����:������'�>=��0��\��U�:c
�B-�b��+V4w��������>�sr<\���Or����z�~�� �I�&�[��K�.`OD�����-..�gx||������V�:�n�����������WY�o��K.�{����2S�f���C2L�}�"rB�6%�$�=i�-������^���&����������nnn���&���$��{���B����\����������|:�}�����I\R������� ���f4k>>M���L<��|�^Q�"P_��j��;������:����6���������s������3���'���x�1������%�B��A����Op��i��Yf�
		Y�~��������'������9E��g��hK
��N�Qgg<�e�v%����/&�_L&H��K�"r�[�D�w�XW�Ag�����S�k������*��B8e��t�$I���I�I=��������Q:�Am���[�7��&�0(���^�x'�H H��d~ ���P|�2����N�������a� ��r?+��r(?�A�B��A�g�yf��-�'O��c��)S�z�@ X�h�[o�%
�����W^���:��:��8�2fA���wr
�l��=^�{���R�{�,l�[�Dy�h�b�*Ar�'�c�3�r��'���7`"�,���&2�Y����.�(3hn`P�K�;��|{�I����(E�����
�4h�j��OF�n6�E��������
��3We:y�8�B��8�����?�0q��_~�e��u�|�I����l6���{��7U*������<_������N��A��l}���3H�D:�-r�"r"O����\E0W����F��g����k�
�"�����h��Q�<==O�<����J5j�x��tW�����kr1�/� Lz5�<�}��J��K���{7��F9A9,;�T��W�v�J7b�����������\�����q*�k�u���B���E)xw��a0~��������$zeZZ��y���;����'$�oYiu���`��Q]�����yb��n��'��j0e#����>�>�-<���������s�9�_!�"��d:�Zs�g���@��"[_�}��L��S<�+r��!Rh+�s;�����_�*F�Z|m��#P�c�j�*++���Z�t��yP���y�v��5y���;w
��<��/���I���p�^�/5DN+U�����gk�Z�w�G����TtK"��6&H���o�������U�v�n�f��f���{&��Y��s�8��Lz
��F��]��U$v�1�e7�)����m����;N���R�3����D��f�E��3�j���k�P[�������7q��]�v�D��={��������\#�T�rmm����*�����3H�H4������qi���xR���z�~�l�V��P]�Suyg�*��:#�~�"fYb�f�E��e'�/�����&�?W�icG��
������	/��N4����;a64��$~%u�
�,�<<���������6a,�B��2p���q�����w���A��7�v�q����2����Q���:���JM���rE�E�DiPb�0I�H6R6���Ot�W���V]�S��
T�;�j��A�L.������79�S3�U[f�����A�)�%�}	���6:�D/�d�tP:��n}��
@yz-��#��H����nb��@ �^x��g��
�H�

H�X��C��FsQuy+V�c�g�(>�>W�)�����G�hpc�w��w���%��2M���k{���2�:Q�i��D�j��('%,�,�Y1�+3���	8�8��4(Q����K�����V�h�`g����	�eyp�B��A�Db��"����:_�,K�����V�Lc�-�/�	�+��_9�-j��+����bOe�e��l��?���������W�,��Q�g�R������.!�8�^��:��:D��{'���o���W�~�Y^*�� �oeBu8����5�u��8����F��������(���^^p`��=T9Q9A�uH��	�+
(
0�=}E���������(S��5�+)�+f��D��K���8e��e'�/���K��B�p%�M���'��8~�k����k��Z�J!���5#(��w����A� ���/3�b]:}Evi������R6R>R5�'�ipc�{�w���&m�:���?T�~5h�Y�3Bm��c����%!T_aP!G��j�G��k���h���C�v�1�2IAy�C����B���9�~���=�F}9eXb�V����?E~��"'*�O�7�1G��?������=���� �~7^1����e�}j�w��1�/BN�t���AA#�^��5i��8v	!�P��A�1�,f3�:�)�>�l�-8[[p���r���<|�[�Dy�(�+��1Ar��t��v�]����E�u�-��+u�;���!�a��z������B��t� ��h0(�4��/���orZS]eN��
eg6�|�<b���xE�q<�o��^��V+�WR�)s�z�I�1�5��j����S��4�Q��O����bfYj���j[�2u<vv���L�*�BfYB�jr�t���!�PG�A��YL1K��IN[a��������Ny�("'("'J��C�l��e��5�L�J���l���Fm�IWi6h)������P���Jcm9FsP[P-|0Y�P�p�%���*�0x�`�J�o����R��)�>�L����p�B���2Nc�1�V�	j������r���yrE������#I���=�!�BR����5k/�Ak��&�����:m�Y_m�������������z�I�1i+�-�iLz�IW�'@N�q���`P!�)�e��-����=�A!�aP�i��D���>�y��X���A���eg�*;����{c7H����@ ��@�����:��K���l��
:��^��:��5����.� m���mp2��k���%Z�������Y~Bm��x���-Tt��"��u��B�e���WH�[�ZcIk���ll�v�2	���RH��B@s����:�L]��=KMz�I�2?H��2�T����^e���es]�xY1�w��s;�2�-�sH��2�e�f�EY


]�l���JeZZ4��� IDATZrr��-[(�;l�e�Pz��J�7�U^0�M,�!�be�C&
������be���I�H���o$O�)�B%G�$�B�'�����+R\!�q�Jz
I�)�bO��k�������.�vm�~t��9������y�N�c6h��^4�=�8�nu�x�-@�����=���Y:B��4hPJJ���{YYYyy��q�����C����,��Yr�����iOB-TVV�������N��;v<����x��?�����]�v]�n����Je^^^��]�/_���o���[�y���� j53(���OE���r����]�]Wr�f���
Z�AkP6��'�C6t4����������p��;\�W[��C6�W�\�ct9�)���$&m�V�7�$f�Sg��s���Bm�H$��������-[�j�*�����x������������R���e����!��}��W#G���U(��u*0(��|v��"�+��C��o��J���c
��;�^��������������

�j����6$WHrE���H�)9B%���.�C�#TA�8,H>��)��bB+���l�R����h+�Fm�?�]Qw�i#�:�+)����g���a#���<yrPP����W�XA?�:y��o��f�������Z���>��+l�B�[������k�8p���:�/_n����������S�b1��;w����WWW���V����t�%�G����+�C��1���$G����H9Y������r�6���#�q�
R �d����8BG(��e�Yy�]��	�m��L[��IJ��7��je9�K��#���q����?�l�����^�f����I�4��HA�L<���j�����
��*�P�&���/�������y����7�j�P������5k������M����1c��=�+��;�'��}s��K��.�Lw6aV��
�������T�>�/���89)�q�RR ��89G %2_��q�2��d\�)���0+Km9.���:I�Q��m�-l�BG��\������J�Avu��������+m��G9����������=J'X���'��={vBBBqq��c�>������;PnBB�+�����{���w�y���k��t��u����z����.//�|�����������3�}��s����>�h�����������\��k�>�������~)))�&=?~����xo��f�9eF����s��Auuu[�n}���.]�!!!999|d���tC���O���O>����z{{�.��w>�
Edh��������0j+��#��|G �����'v	��-,�����7|)G ��%R_F�t��h����Nc��=��eB������V�KKKCCC���Xh�o��r� #���X���7����Jett��!C�F�oO�:5!!�^vww����>}zRR�_���r�,Y�r�J�C��������6m�SO=���?�+�O���g�I�R��L&����������Q�FZg����MKKc�OHH��m������k�U7�?���kI�^F�E����q�m�9(r:���n��Q�P��f�y���			R�T����D��|�&CK�Y�!������5���y<%�+;w��L�|o���/H��f;��p������g����7��,K6vl9�R�����5=�|�����L&s�'������Gv�������v)�@ �H�)�UWW�BAr��d2���6�i��d2
�-9-�J���	��[���J��***������K-,E�P$$$��
k<����G����
4����B�
���>}����r�u��BY�G�����
c�8�H$~~~&3����~~~�|:�$%	�4��������;v___���2l��QLD���K���C������KBB��l�������'L�@�H}��������}��}�����}���?>11q�������>y�d~~~������k�d�~���={����N���r�����Y���OZv��1:���_�������3x���;�}���h�:u*I�555t��W�^��_o��111������k���<yr��!�G�����v��A�hf��_���KKt�������	

j�]�dh�+��_7�D;	�B�X����%����s��T*I���Yo	�D$I��e�����f777�������O��pd2Y3�����<��`us�57���5�tOOO�7v��nu����N}p������7l*���S�4�m�T*��U8���K���e'���ZY���@�'ZrZ2��\�$I�'^������9��5j����z9&&���;55
O����:��r�	IG�XC���!�$�f����X,�o��A_���%�^^^�_������Brr�3�<c4%��hc�i���Ss���+55U$���O�:��~��\&"�n���A�:t�L&�4i���~�����'N�3�n������������kO�<iyX�Z��O��7o@bbbjj*��_s��-�e�����V����������7N�����={���D��C��r����	bbb��������[�&/ }g�S6[�b-�@���c�86q8�����kW��OOOv�4���l�����-�);�P(X����V���j��
��~����u��-����<����d_��$�B����v����U[[[�_��xzz�����S���_,��{W�R�P���D����1����t�J�&�I�ecB�1~]]]^�u��~��'L&SYY����B��������O��t��Af����+--�J��@S��m7�E���mB�P�T����=��C���+����K�TRUUU�BYR�4**�`0ddd�P��������+�����]v�zp�\??���|�����X|����wRv
�����#>>(�z����F#���������gc{�8q��,5##�����7o�;���I���?��sz�������YYYP[[K��!C��^�u���������;v,2�*(���BGd���S������6vyVA������������W^y����v�G�eM&S��^���b������Kg�������F<k�,�\N�t�������4�]3�$)����RI�Z��MW��xA��������r���j.�Ph0��g+
�bq]]�@�H$���\�G�Z����\.�����O��?Wv��]w�c����g����������eDC��\DDDYY;_^^^*�������x�����?���4C���yyy7n��l)777ooov���A��/��X�t����
����j�-9-�I;URR����T*��`��`N	X�t����oQ�l����i�Lf6�]t��z����p���[r�i���@��


X(���3**�d2��e
���\v��X����V���t�@(��M��������b��@F���+����������S�N���*r��	:(c�����@z4��h������?�`���?O�(*--�r�S�N�A����w�`E���:���~�f���'###�����^��U8����/((����+���7s��M�69\��\q�SUUu���&5fRz�]��N"����dff������E"QZZ�e��:�����
e)���c����PDEE�����w����=z���}�Yx<^\\�9;r�8��Q�<==/^���+v�2,��4�8� �o�4V�����(&&&""";;�Y��������E�4V�R!��1c���c��z�V�X��������;^^��������0�����eJ������fa�UWW[�-++��LF��z�k�J��^��2]z��[��V�
x���D7>�+Y��a�o���B1���L�yZbb�\.OMMe�Y�k���B��a"�]�vefAooo�+��T4z�h�V���o�
��2=�H$�S�)����,���H$�J�J/��in����r����O�r�]�tqE���h^P�����sxzz������O<����N+������}���y>^: ����[�kp��G��>��?���������7����?�giA��0���+�+S9_��Q�;�:T�C�-++���"��&O����>}���&O��d='b�����C***�����I�&1���__TTTTT�t�R�V�t��9s&��H$�6m�|��;Kl���r���r����~��L��h�O]�n������/�������o���c�����*w>!�����?��������~qq1���B5�����~���9s~����+WVTT��9����������Z�v��C��}��7tF�����������3f��6�R({��Y�hQuu������z�z�_|a��~�����
k��),,LMM4h��oZ��]��'�����KJJ~��'??�O?���8S\\���?:��m�(j��M,�?�P�V�:uj��a���wEq��h^P&##�I
3c��V��YYY�����d��f�;�P��:�v���_���x���RRR�57n��<y2;�!�:�>�`�������-99�Yo6�I���&����>}������W7+
��+�}�Y__�#G�X��u���G��g�}6}���zH |��7�|���f.t�(�U�V��5K&��d2�)��F#��=;�������0?�3f���+�_��;�P���:�������'����8p�@ �r�������d��N%'''))i���L����h��7�|�Il���?3fL\\�������|�A��U�T			7n5j��l6����/�����!C����{/��������s��9|�p�Jl�������0s�Z��3�}�]f
�x��m����)���|Bm�� ����������[�"����caaa�{����IIIjjj~~>=n������o����O�>�����/,,t�����1c�����������9s��L����?�{����G����������K�.Y�����/������hp���]]�/_�|�r�����qqq			���EEE'N����_�f
�.^:vH�ef����z8�� ��&��A!�b������4;����KMMMMMma�Eeff6�!���;w��a-;*����B�RRR>��czePPP`` ����6e'b��h�}�����[�D���8��h4�,T� �8��@@D]]��lf�8�$	�0�L,�E�@ �(���e��r�&����BYtq�����Y;-�|>I����S���xyyI$�V�U�9s�������t:��+��p�f3;'<����xf���@<��c����'|�A�$;e}
���Y;U(�b������W���k��e� �;O=�����o�_(^V9Q�;���_V�|Z��W���J$OOO�V�C�:th������Z����q��C������?���g���7l���O���o�{����#r>��2��Z9k�6���
��!��UTTXuXm�bbb��Z�P�CQ��+WX������{HHHk�!�*����|�l�2e��)S��L��?���,��a����y����1h����j�k�\���L���]rB!�B����x2���JW�T��u�L&[�p����������

n���y��m����m
���XIcc��|~PP�=�l6o��]����4!�B!�X�����g��*=55�����JG�>�����6�I���w����O�����p)!�B!�B��(���l��]��������[rr�+
B!�B!��H$�'�fG~~>kI�������'W�^}��G���\ZB!�B!�P����EQ'O������ �B!�B����A�D�T*]]B!�B!�P;��������1c 77��!�B!�B�/���n�]��|�rwww����=�pA!�B!�B��A��7o������o���pA!�B!�B�k�/�t�O?���_ti)!�B!�B���=ez��m{�^��M���p!�B!���$���;��?�t�Rgs���o��v\\��l6l�����)�a�e222�X�B!�Bu�&M=z4EQ�:`\\��={x<�R �SnK8��$�BaXX������^zi����r�>}L&Sc[6l��5�����/]��F����m[TT�����;���d2���G��7j�:77�������*���u�p�fsiiiqqqkW�	"""$Ik�5CGm�8Nzz�=|��O�>���uh���EEE555�]���Daaa\.^�3������������222f�����uP&���������+��
�L���o:��III<�l6O�2���������Q������


����y���[�N�>��h�r�������:��\.g��t�b������Y������rm�a����Kc�5v�x<��zh����V�j����`0��{��k����u������y@///<i����KDk��9|}}E"Qk�5OGm�����9���I������v-P����1b��%K���S'rww�s�������r[��A�����>�h��A���h4�����wt:]KA�C"�����/����;wZ�.!�B!�7AAA�O����ui)�c�����e�|���[����)��z}nn�T*����L�����1b���j��	�E��k����zY.�w��m���
��|�K/��������������k��m����"6+�B6�={v��-
����"D�PC�i����b�QU���Y�TJ������^����y�??������������$I����?�|dd��d�~�����y�f��lu���G��3g��^^^*�*33������
Cc����K��O�������q����7m�T��$g����s����r8�����?�|��-K�,y��ww��5y�d���,)))�&M����������������+W��D�dYY�R�LIIy��G����d2f@�H$b�	-�|����~�^���]����S�N����s�$�����M�H����7n���O����SS$�7n��e=z�����'�|��s�9��uv��������������k��>>>q����� B�t:��
!��,�F��U[F�p���1?_K����S��0"2<<|���t����~��'G������GM�8���o��}���������of%�����/-�y===�:t��_|1))����~�"�h���#F�`�$&&&&&�7��'��yH$�]�vYn�������1�������X}����I�&���X��<�^2d���=z�P*��o�>zMVV�T*e68r���.S��
�B*����R���su�-�xPf���|>�d2M�0��?��|K��������������>k��e���x��m��=z����r&�R���i�|�����W���>�(�����������A<���C��������������N�����3����y�����=�6�������SSS��\.�������J�Z�x����v����#�x{{��������u+%%%??�jK�L6u����Dww����c��m���F5�WRR����`����#0(�P���j��
!d?l���B4���*"��>]a���*kiR���d�$�O�~��!77�7�x��������p��H$��}��5k��������zzz.^�x��5Lj��+W���/~��g���		y��'�N���G��{�����~��u���1b���?��c^^^���?�������{��G��k����~KGdRRR�m���������k�M�>������������a��1A�RG/���W,����/�j����Y�f��5�����%K��;��W��,����F��u��YEd�a��i%%%$I�9����s�,���?��n��������|EDD0X.w���^��������e2���K��gzMttttt��	�y���������%K<<<��$�g��%�EC]]�;��c��'� IDAT�Y5�7o^RR�R �d����Q�F�^����c�[�V��C��������GEE9kn�������B��T�*��l�h��X!����
����L�%�55@,�����Y�Ta���t1u]hc�B��G��faa��3����w�.�6o�<s�L��Jrr2������9NLL��#G ,,l���p���I�&UWW@zz��;233�������k������U�c���={������|���7D"��?�e�
6e�X�f����J�$--m������e;@s�>�����v�Zdd������?���A���������>|�~��\�t����9>��NW�|�
*++����s��������rdd$�`�F����d2��������$_{�5��j�g&�����/gv�4s�L����9s��G�(��>����Y54h��ECuu����0[�$�����h`DGG����t:�188��	P�T��+�P��&�
��W�b{%��l���r��@�i��'ot�/c�G��N�be�������h<z�(��t�R��DL�&e������������w�}�����`6�.|����k���s��9���3=�{YY��o�iY�����+W�?�����o����
c����?���]yfXI���&z��{�TTT����OAd���R�.��o�V�Tt����������CZxx�����������5����z��I/|���������KHH�)S�|��7V�FGG�T��/j4��63f��q���/��"--��iVM����x��e���B�p��t�)BCC/\�@oDo|���/�����|���s��qb����Jz� �T�w�����UHH�'�|��[�w������{@�:l��l���������?}�t�����IOO�ZSQQ���Vi����������#G�\�|��-����W_�Z�*&&� ��8v��]?[p�i��3s���Z����������"����8\;����_�����_����]���=z����O�>�����8��2�A��
�?�������}56uE����{��@}������0E���`q"�����~;++������U�V1�a�
V���7o��;w���_|�E��4111/��"����?��mVM(��z����W��'�]�Nw��Y�hL��N6j�z���999�f���?��C�U����&��xd�:l��Bc���W�^!�0�6�\o)<<��D}t"^�HT2����&��r�v����^�g�+��>v:z�(��l%	}���1z<W�~���
ttF�Vc4�6��2�W�6�L�������x�����>|��y@�d&,,������I$8t��^����
L����,�+G`` �p��i:Q6�d21I�
E�N����L���,YB�������c�=��Y5Y�r��%K�,Yr��A�H;m����z�������p�t�B�9x��e�]�����~l|-ur�X�zc���W�^ux������?��5*�*�6y���i�BBB���)L�$44������&��r�"k�~���N:���?��������s������������L&�@��_?��9x�`K&���t���������o������~��g7n����Axxx$%%-]�4:::;;{�����B��0������1��+W�deeEGG�W�������x�x��	�,��~~~��A�����\g���)S�4�m�������?>>>��2�������buX�Z]YYY8�c��Pe9�!d	���������K�,i�.!K�^5vX��WG�Y�fM�vANA]�.4���������0~��-�}�%T*�L&k0C��T���q���yn9������]Z�o��6n���C�2Yc��j�����}��:t��c���p�R��)�c����G�������s�^�xQ����O7o�
r����#��q�G@mEEE��I_%�L�����7o�������e":zA$1i��?��l/���cf�o��t/1�#�<�d'����g������<y2s�PZZZ>2�X�=L����\��#��`c��X!����`{��[�E�o�6�rD��B��G�[�n���NL%+��Z��>4�����gll,�P�~��D3d����Xwww`e2��������?����[�g�t�dxzz�5��rQ[�\
t����|�����:���B�!:�����h�_S///�cZ&�o�O��7������\.w��Y��oVM�\�����������6l����O��=�*�:���0�V�%�~A�y����,_W�\q�1����j��
!�,�^a{��|\j������W�������������AA,��;b�(..v,VXYY���
��r
`�>4����1c����{����Lbb��1c ##��5������N�h�'O���h���J��n`�8�x�"3uEQw����3���7n�h���O�>�r�DV��X����*==�>Hbbb\\��K���Y5���cf:��q��'����WUU�t::�U||���{�������-���=m�4���C�Z~L�,{�������,m��a��9�������x�	�ehh��e��T*�������l��cB[+h��
!�\�^�W�|�i����9��������=������36l�`�Vxx�?��z����/�,Z�())�G�/^�|k��E���������/\�P(3�nN�8a4�B����.������+�X�1\�|�`0�x<f/}�@�b�R)�<�y�C�z�*�k���?y���;w��{��A�{O��s<�#��l����?������J�����e71&M��I���IQTZZ���O�>}���'��DM>_j�X,fRs�������'2���K�p�`�6z�h�i#tn|�Zma���=h����ww�������q���7���z���]]��
+v+�Ph#��V���g!T�W��W9l���7o����X�n]uu��m���qqq�w��p8F�q�����u�����{��?~<�!b����Th��\]�o��6�|����c�������3g�$&&�oa�m{��<�����+W�����YCw��(���[�z�b��;w�r���w'%%��|�9sfTT��+W���CNNNv�V�o�>z�(��:b��a�kb�ov��9~~~Z�6>>>..��aw��IW^��S�N���'$$86]dd�'�|��[z����>�nm�P(1b�������������zww���D�������}��e�V�2�L����Z�p��;�?���wp�X��X���4�X����-BM�����
!����=������d[�n}���322BBB����3��W^����Iyyys����iS```FF����sss������s��I�F3z�h&���9rD����S&��Vbb"TUU�:u��u���A�A�Oi�j5���t�������b��JKK��_�����7k��Q�F1�������t�k�u�V����g����������\�|�^)�H�M�6s����8���<����f��$9`���'�������d>���h�~�m��l�����f��=QQQ
n 
o��e#%��������?�b�
z��'O����`�������+�P�`{��j���93h����O@``��	bbb�(((�6m����[x�m��
8����A����?���g���&L�s0YN6�B}�����������24:�������JdC�z�1n�����&��g�nIA�����j�@d���g��I/_�t������6e�z������'''���,Z��*Uxee�_|������[��---�j�,����������]�v=��c��������g�UVV2[���3�������_���c�����������[���`��	�6m�?�[o�UXX���O���`�^�_�r�SO=	uuuV�S����h4j4�����'O�����fq�U����y������AwAo��q����������?�Y�f���$IZ�76�T6V.j����&�>���j[�
NS^^�X�A�l�\�^�x<fY��a��B�q����j�����K�.]���z����..\�S,%&&zzz�����������v
t������u���G���D����J�2���V������T��<x���2t�Pg�������m	{���x<^rr���>jW1v���5����d�\��b>������r"I����t:��\N�duu����ADDDDFF��r�F����������>�#��(��SM�$""B.����6V� bbb����j��+WJJJlR*�������g����vq�P(
������
A�B�n�R)�����mr���p��~����������>}��9s����V�~'N�����~��g��5j���Wff����>�ls���A����;##�r������P//���2f��Y�


��I�r����Z���#�F#;j�@$�F&C������HTSS�BcE��Z�V������+OO�~���d���
+777P��6�R8��7���<���x��d2�$h���GQ�����X�V1v�X;�^��*�N�������x�]V)
� X��������������Sp�\�Tj6��ju�w]�^	�B�D���������*�$k�U2����������'$$�sY���e9"�e�����:��	"##���~��G�����������{��_~��U��Z��'�?�<���t��������U�P(����������s��*&�HBBB233Y(&O�,���9c9^�E8�����p���]](���c�����~M�9���1.^�XPP��������������455���x<^\\���v�5j���gfffNN�cG���IIIqiP�1aaa�L�	�&Mz���g������?����N���'Ji�"
�����x�����(^���������"�r� ��d�r�(��M����������4���i��-|�/^�f���|��6��g���
6��7��W`_	�|�Q__����/��x��|�A���$�������(
��n�G�QG�T��b�������=�|��F���x<�b�466��$ r�Ujkk�o�f4�Zm��"DLL444��0�!�R)EQ�Ip�H$R����t��r��9l���R�,,,�Ni��aiii����/|��W�^EEEQ����^.�{����|����N�8�p�yT*�m����mK�o�����������`]�������555��<�x�����3�c���
����S!�����~���]r[uu��O�5k��L����h����g�P��M��[�����������3�@\5�����={��?�HQ�m����;���5�����_�0�eZ��co�333�"j,�����;�!���X,���)����qqq��a�z��������u��R&����@� v���|�r�Lv����K��)H���{��'�z�)x���B{��T*���*����y}��};��2�����s�!E�@ :��a�Xv���{���_>|��g�������� ��\`���$�n�:����C�i�N�$�����2���p�<���V�m6[��]�T�����D�$$$�B�N�������rAt��f9���<�)`>�b1EQ�I
�D���.����"��D!#/:p8�J���~����(�<�$����l����!Q��D�D��X���]L�$�0�(���"�sv$IF�J�v�ry���j����������O�6m��]�
b��<��8�w�}�|�����z���{�C��L�������(�����B\e0y:�����@\�l������#G�����g�^�W�T2�, X�������������;�P(dE�����H	���z�>�I|s��j�����<�
<����)�r�-���������(L'��5Mt��fQ�����QRRRGt����������U���8��p���ZA�0RU��>�C�p8�~GG��\.7:3����3�`�aX�.&�a\.7:Z{%C��y�h^I��*��M~������E��5*--M"�TUU9rd����w�����w���}{�v)))	m����};��2%%%���IIIQ(2�@ ����7�|s�~����SWW�R�
E��������i�c`�1��@�����j�����Z�����d
������{I$.����Bf��!��MIII�D��LNNEQQ��b����6:z�L&KJJ�X,�9;�'�J�v%���D"�N�9
������6�j�K�,	�1

�V���tg����l=��S�����a�t�����aI��6�e����������p8l�K�E�9�@ z�+�@ =��E���~���/���C=��WH�AD
�F�v�kkk}1�������������>|�T*=t�Ptx�=� �n��B D���K�f����_|���O?��w�]�x�h4����?���D�A�e��������^�`��5k|���O<x���'YQf��M�W��1c��U����,X�������9�@�8�cP�%�@ �?e~���+m���n���6�E�4�n����F�.��(�?��O?��r����{�������z��O>�$M�O>�����N��g���=����[�bEcc��?<e�����m��u���Q�@tC�|�@ ��H��~����+y<'6�� �
B� �
B�"��U���]jr����8��2�i��v(��2<F;�p0N�����8
c������|j����?�<u���>�h��9s��a/^���C�w�[�paLL��)Sv����O�<�j�F��V�s�.�@ �?��
������_~���f�(���F Z���D"���A}r$Cw�a�Vg��\�W�q
B�&cH��������9�������Y�a�=F;�]���rie#��lf��v��y��;������+�
~��7?����1c����p8N�8q��q���oc���N����7b��WTT�o�>�	qW
�A��
�J�]5{aa��������t@��N�9D�es��Q�Q��Ud��TjHU�T�J��7��b���]�8/�mK���3�d�m�j��L��J�,����=&v�BY
��B[���B[�����Xi�����t�����4�������w���{��a�0L~~~~~~�����t���@ �	(q	��Yp�\�R�U��US#����&,�V�<�� d����1r\�����pBJH�D�!il�M@������j(���!������)�����
�5n����ae�+z��=� ��� #�;�DDP|SyY��+�2`1��
oa#Y��J��C*cIu��CXp���Q�i�{�Hnn�~y�j1!�)."1.�b!�"v5
>����%Z�wt2.e1QMM��BY�83���4�#ebs,���1�bv��)B���yw=� �.������.���N�EQ:�������������f�9���E�O��J�Ws�a��R��U.Q��4��K�����g�����44x�n���h�/UH
��~y�smG�	���p����q�����Ip���p���J		k�W����G���HeJ)��Xh��27Q+c#�S�Ra����&�b������8������P��H��s�UR��m	]�����;����|����/_n�_��/z(�*o'�M�Q�nh��x��Q�il�:zX��Z�P�?��vo��=4����C1#YYY���f���H	.��R1.�"1!R26G����TB��M2B"%$��#%$2B����?�8\���2�����=�V�t��N��y��2[(��q�0�(��3*��$�]�[=�e��$w�FK!�}Y�DC��;���v��3yHeF D7%tQ��^�q|����z���^QQQYY)
SRR�����
�*�*==���=��s}���2e
����m�E����c���s� ��jO�����M�B�W�D�PB�^��� IDAT���
��"!g�1.��2	.���j$j�A�v����1�\A�C���Z�q,Qx��kK����Yg��]��@ ����T
�K�j�����@ ��(�j��
6�������+V������[����{���s�����&M2j�z���/������'N��g��p���8!�$8W��`��l�Y��X�����|�f
���Uc�Z�hKHH ��R�9��fl��F�u�������s�r������oP���_����|�/���
R��x\p�2)Sr>��2�>���������^}5O����:�� DKZ��,�^D��@ �-��23g�|������n���V����O>�dYY���+7l�p�]w�t�W^yE�T��7���B�L��]E��^��A�n���=�-K� �v�a���wm�v$0B�Kd�D�\�XNHE�H��{%!��T������uv��XFH$���"^V�D�V�?Kx�q��Z�����@ -i��
b%�4���-���@ ��c��y�`���-�W�^�h��w����w��1�������7l����Et+��OD���C��o��prbr��o������)��q�&V
R�&X�	������i�"]&��sY��S}5O#Q�@t����f���R~���q/�_��E~!���2��������
�0���&L>|8+��9s����������2q	=#��A;�S�ia_����#��!�����_�Z���d�v9Q6Q�K�8K��5���/R��8S����������	Ja^��X��@ �	��M������Yjj*���r�\�$Qc�An��6�R�,
�&���S����N"��(��]w�Y�1j�v
����;���e�L���p
�@ �s.�2N��L����mLv�s]��@t�q������5��F�q��A���D��~�i���Q�
	B��)((>|�����s��+�R8p`�����!o���n�	JK{�kU��|�P(l��$I�p8�w>��aX�����i�	q�od���?��F��8��Q������</jWH���t�� rs������9R�T;v����<xA�F�<<?Q��h/�4�JQLcG2������C���;���� ;�:���b��>j�p���*��79���;�����={�������Z5��������:O�<y�������7o�j_���2�����>e����z���_������l5����?��
+����o��9����������v-�R)���`�;��$IFn��������c�?T�/��"4�?���H$��\�c9�a��������gc�"�R��X���0,�W��4
�b?�����ztu�����42��X��O����}���"�������)�x�
k���z�8��?���e�v���w�q���iz��illl%�_"��������3L�<y��Y{��E�LKBe>����������-z������������*�X���r����?<�}����x�O��n�:�L�p8��]�S�,6�M���k)�J�N�����D�����5O�������rW����2�����t��
�����FKl�B��wo�������������$�egg������������������
�UUUeee���
���GuM����\v���$� �.�G����_��K�X]e�_��<��)���5����|D �v`_�?~|���-�����3���������k�H�(CQ��w�����O�:U�V/^�����j�3g���G 99Y&����i����v�/"�\'�~+�o�U7���dN\F�P�q-�����xe�b�?�&��]V��7�,���9�!%p��)���o��<�8���]���g��?H*����AD�F'q�"����HAQT�Hrr���%� "J��~�j�N�6m���[�n�p�����m�x���o���_x���g/^����o���}���8�7���W�X����@ �n./�P�.�6lt�����~*�n!�|��K,y+0;�Vw�{�p����8��[_U��K�����J����M2�_1�S��_~�e�a�o�7�p���[kjj�Ngii��u������38����<x�h4655>|x����-]��a�]�v��W�,_��a�����0�0��
c�r8�Dbm&B����m����j�����~���'�|�W��G���'�|�0��Y�`����y%%%E��J��2>8p�� I2%%� �Ng0���2�n�')���.�R�f��C��]�O�y0��tk���f���q4G�xh���
Y�eW��UK���.��@ ���e�����'�������4NT/��s�s��Xn}���.`\���3J��U��*�g������`WSSSg��=k���3gn����R$}���c����6l��ac��9w�\g|0�����2�L,�����zp�\���/��b�����z�-�tf�H�����}�R�-�J5j��Q�F��;��;���i�6/��f0����J�@ p:�z�Z���T�L��������-D�`���{�|��������� ������F�����X�!�|�c������9�'0R��e��5�Ka�}����"z�!�6�A���Ol��"��s
9�m��V������G����[w�q�]w�����R��r����!C����g�"�k�����:��������9s���s;s)��]�����������������?��c��+XE����O<����_?}���[�@nn���{[��y��,Y�������f�K��F��za��A\\/�*i�o�A;�R����������Wq ��y�`�r���N���j^e������dw��}�sk��w��@ �yb���/P�}#Az;��!$}5O���{T�B � v�:����s$�9m�����������4��5a�6�v�������������]����of�G�=m�4X�v���>�v�>z����;��������H����d����s��w[,�����}{AA�k��6p������>��+���h��E����G�}�|K�'\�R_��/���B�D7�e6����s�}����K~O�akT�C �6�[�{	���_��w�x$;��Yx��m�Z�DX����?���,������O?�7o���#{��}��x��'@���EU|�Z�v����y8|���80�|V�aa��7�|��G���}��Q��n��	]�y��W�7F�Lw�������}#[����0#��5���@6L��|U�#�Qp��!��#L�;����6}ovK��rn}5O!Q��A�8>g������_�~$I�;wn���~����Kzz��e�n��F�Bq���-[�l����i$H����)����KXA������2\R���s�nmT�C =6i(���{o��y�����yyy��_����������>zd������ZXX�����_���7�����0,��=���h��E��?0.����������������,�[�l�S��|���%s�u�?]�t[W��@tG�����e���uyIo����
A�������a�~���I�������i�����
���{G��k�9r��]�v)�J�^���0a��	&<x����Cc9�Vd|��~h����5��.)�����t
�@��i�F���N'��KOO�$SSS�Uc��YQQ��o�;z��
g��mu+{�� ..��L��
�.��2���*8���������9sfFF�w��,�H3T��z��U��^���j
0W^���@���6�q���,�pqW
5B�c��R.�0oeo!��c���\8��y�1��z�8$.n^������	� *d-�{0�9R>��*�8���
�q��A �zxu���^��7����z���r�@ �����'M�T]]=v�X�F<--���~1b�SO=��{��@ ��i�R�\�l�o�AQ����8�`�����<x0H�.�
�)�]us�����3�����u�@X�_Xy���9�"tN���&�z����@c�6����:��V������>}�ddd@JJ
I�PUU��q*++{�(�������n���`����E� ��]"R����3g�<��#�O�^�v���~:e��"�%����������%'mf����d�N���[�C������a$�4�sIB���#qa��u�|B�
�����g��n��Yfq�y�w�[�e6W���M�@\5�IM��d|�2.�X����j6;�����]���z�@ :��3`��E�W�����������o��VV��<yrrr���_�u6d����/�����k����
.��D�c/a01�/y`��5%��k����o����h/�������/�?eO�=D m@c&�Lm�X��T"�3��&'������-��Ri�����f_�g�����A?�t3L&�D"�H$�n���rV��<���h��v_z��wg��}�=�dee�9s&�s!:
�g)��p{�F67����� wOQL��_�p!'Q�IT����0v���,��4VW��Unu�#�quP��e�6��?��2$.���u�����@ :KVV�<y��M���O�0v�����c���k�N�8�q���������q������2�EuknL��]��R��6n��{�3���$Z%�?�s]v+n��`�����D����>�-�@ HHH�����N�c�����5RRR:�IWq�����$69�%l
4����uC���3s��a@���!/�-�K>��Zh?;���-��d������HS>0 ��a)��d��+�����{s�o��������"S5'^:N��C`��v�L����`?Uo��2���),�\D����{rrr�����:�x�"��>`�����6%%%B�P�Tiy0���e�9#e��-�2�������#�p�A�\/�c~`l0	���2����X������U�`�*
CII	�z��-�{�����g�����7��a3f����7fb	��!�%��2��V(���!n�[���U+m�^�����$3�q�y��t!�M���e���U�\��]`�Ey�����y����Urh����qw4���_g1'A�IP����3vw-����]4H�[�eVW9e�j
"�!>��]�uj�7��^F������@����"�2��U(��v��o���x�~�mv�}�Y__�o}}}zzz||�^���7��pT*U\\\���7�b�8c����(#K����U���q�������9~
��$�������r�5�<
��a���0�L&��v1%�Z��R:Ixaga��Fa:�#��v%9��r����j5�V9rx�RQ�#/]�t��-����p�-[MMM��;��7�,^���;����=}����/��c�a����>�hBB��Y���[���w��l�����[��uC�MdE�����o����#:�ChH����I��O^�sE��Jg�D���|�����oe(�]�.�� 4��4W��K:�8)��gS���,Q��������� ���j7��d� E��\� �����"n���*��
8�����x�x�dP��9�v��-X��i��XS��Dw�WY�����T-�x_�U?�D�� ������f��q�~��w�0�\C�.���_PP��MFF���eFFFII�F�	�����+�������|�L�N��)*������V�����@�<����	��'��'��B����	��=zt����VD�����>:\)S&B����]� �D������w��/1�L�e�6x��>�}I����O>��P(��w���Y]��%K����mt4����������������b��y3;����{�n� <��5kZ��u��S���.��}�������Z�|�R�d���c!O�/8�������yX���E��*�s����}�����0�.S���e�`<�,��a�!|!0�c�1L�P�?��vo��N�wp�+�&Ix�bn��� �����^���Z ��/���E��]���,ir���Z���]nr�[t�B �K�a2�.C3��Or�_ee��>fGq$D ����;v��������_����{���pg�UZ6�p�\ ������u�o����"�e|Kx<��q�\�q��p�b���24.0��30��'��o~�]��������I���D"�a6�-:w�~1�6�� �@�0L@A����r=Ot�$���r�n����3a�0�S��@ �p8B�U��en�z}SS��������?N�������G�����������={��
���N�<YXXX^^>d�������755�7�Wz��r�x��H$�/��r��U'O�LKK���?+�-Z���?��+��F�
�e��^��+���=:#�_���/�(.F����W���^zw�������|�	����|,�v�H����3S����%NC������u�k�����j���'B3.����,	��5N��/��wR��
�0O)�Gb
"��!0y+Z����v<�q^����po77,S5�X���u�@���>u�T�r���/��v����o_�����U*�L&����P__OQT�6���������S�z5l����������-�%'o���2�;��,s�Po<�;����sO�$�bW-e�����W/����r���r����=:�$III�TWWGa.�Ju�m�9����/{����.//F��<������G���t</+++�w�?~�B�����R_�vIMM��:vL&����7l�0z���C���7l�0�|���
�7o�p���_���6`�6�h���3g���k����#����7r���?�x���IIIIII�xuu��E���u�n�6m�;w.���>}:����H����%����'�,Y���;���7&&f��a-7���/[���oT(G���e���q����c�d�+���-�uz�cl).���2S5����[Z)���?��(��y��@����u���2���e,s�]�2����\�Y9��5)��y�J.$Ix�"n�/"�C)�(78�xk-��N�����e�nm�ag��>v5C����W}�����r�^~��3f���2���#uuu*�J�P.l�^�V��D>�����#��5��6`�C[��>���Jn�1�����{RYY9f��A��x��R�T������W���=��O�������q�����|P���iQf���K�.m�l��Q���UN�:5|���������T*��\PPp���6��������;q�D�TZ[[���v�{.��2�
j���t�?>������q�����4r��]�v)�J�^���0a��	&<x���a������7�}L�IO�/9�8)�i�'�'/�x34f0��Ng&�H�9�xQr`V�/��V�F���SNsg�����5<MVu]��l�6"k�(�T@*@�@��Ybwkm���&gI������CD������x�����#��Xj\�e����ff���8�s8�`|�<l�>�����
���x�y1�V%I2:g��p�v%}p����HD4�$`�����~����+��r���+V���4}��������X�j�������l����ya�`l"
���	�.�aa�eq���r���,�"�-���\\
���D ��?~����m�L�4�_�~[�n
0V��l]��D�E�aN�>P���]��n,������?;��UK��L$B��|��1c��Y��X 6m��T*�-[��oP5|��,X�`�����������9�������q��}��������n��#�tR�i�+E�x��NC�������lX��XF����=���i;�F�Sj�$��g#k�Xs ������5�}�������X������������q�q
�����v��D�\�p��`i������i��a�a�>*� IDAT�����w��TO`5��f��H$����A��6a�n�?�����o��F�>��W/`������c��?���~�>\*�:t�}��M���j��i%,T]���<������fY������n:�S��"�(��O�5k��L�������?���:��t�{�J�[b��
|��6�M�<999�������:{q����^zi������g��p�\}���t����i{��U�3Us'��R��K���5^�)x;��H�H8�x�2��L�)���E; ER�
�Ixi���MZWS������3�8.PS�\	�|F�u�V��b�7��#


�h�B�033N�:�v�L�d�!�����XTk���g������D_���[vf����qqqMMM����0l����O��N��L&������)�&��5M�J���EQQQt
C���8����(��������p8:�k�OPee%�h��������|�w�qG�������6mZ�z��3V�ZU[[�-X����~����4|R���x}��S?���+n��]Q�[}�O��]��k�b�@��c������d�,]��M�����O<��SO�{�����J��������;�f���Et���2�������[��.�pSu��I6]M�V�u�]-
&L�;v�����c���k'N���8M��������[%#�����6Q�G��vS����Ki3A��a�@=�tvO?�	n����[-'�l�0���%�Q�!
B��k�C�e�Wk.Rc�t�,�p��^-�1/:Q`����2��Z����*Ab�����0�p�����svV!���|5�P�-`����A�+yd��W�T-�
w�nQ�Y	�_���a���^��q��
����������s��e��u:�g�}6{�����n�����?���)S����m�~����4<R�����i��4�c������p�]l����I���,�@ �Dyy��i�v��5h� 6<����������������W�~���:t�A�u��RW����e�\���?��#�dgg�=Y�Z������YSPP���y�va��!��2)))�����VTT�������b�e��=�$n�����K�,3������[Gb`�u���2:�W�]h:���������.�b%��qV��H�JM����`;@��5l��+�?.�;�t�q�*���.'kT]������#k�1,��?�Pp	���?`�k�n-3e���XCoH���2l�L)0��V>��#��e(�~��3_F@���F��JM#�aF�u����6mR�T+W�|��g�}������{����o��3[�paLL��)S��"P\\<y�d�����D�����*���UQ
kW�V=�c�J�F W?��Szz��E�F����&�H����9�q����w������������_�+d�j^Dg��(����w����k%Y7>>~���3f�X�x�?�������-I���u}}}zzz||�^���o������&''����/�J�1�<<O&��;��_���n~k
{h�^�������\��@���m����'>�Q.Yr2�p>66V�i�
x��s��`��	���1�PCH�	i�P�	�$���0<��R����O�X�PN�ZG5U��
�\A�+����������9��r���^��F*������&�����c@r�.��|��, �yxB��E��E�����T���pP��NU8�r�*������R��@wT*U������j�q��F)J�^�z����������+�O�a��;�^���D7��`�������������b�Z��={�����&��6u�����#F�x����}���c�����ku��UQ��Uz�L��n��B ���_���;��V�m��pAAA��	a$:��v�W"tQ�������lc�������WTT�D�����c���1�������555;v������bJ��j��y|||AAAXlT*[s����O��o��f����p�g	�;p��+���l0�p��0R�����	��]��%''',n�����``0�C*�\����p�n���Qy8*7G	!�P�!M&���xi�UqX�Dy���C:��a{RD
�Jue��`��X�()x��[n
���sk�R� LF2!����idp��8��8u@��<2Q���<b��|�L���D�
��
^V\�K1�f�&���-E�����C�l���{���g�������j3u!F�q��-m�0����6|��M�`�ZI����7D@��@ �H����9sXE��W^Y�r����~����?_�|������y��o��6�����8��h8���M���\.��[����/��#�<"�JY��mD"�����v�����a�N�m�u���I�1�U+`��4f�.��������G��0 �D�P`f6�;�s��aX�K6�9����#4)��/��S�8J�pc(~���/��~�pk9i+'l��������������n��:�`�
�.��#e���nI��$F�pZIPq�SqOPq�	M�J��D�]�����(��"�4QK�Z���	-����4�(��S���v{B_��9j��������Lo�����|u�Cw&`�s��D����@� tL�;�z��������&��2��O�-[��X����.�u�]'O������7����6�CP���U*�L&ra�@����e�?h4��9s���v=����{��]ZZz���P���Im���&n�z#�7�z������n 8%����z���c���:��'OG�m��o�IHH �����;b8��&r�)<E*W��]��pe�0��b
M�iY�G��q[�����"G]����QW��wX��������N�������\�V{���M�%b[T8]*����:��5p	:������h(�aw�49K���W����5xbcc�.�<Rn��gH��~��}K���������w=������y^�k!�@\Kt�0��Y�Mwe�-�����
m��\����&L�a�)h����2����m��6iN�:u���������N�2PWW�R�
E���T*@��������8_��S�J�� �C���S��'��4�w���<�_���)���e������(x�t�"��5�����\Q����c9�XI����n��5+5���?��h�X�>�(C��X��p8�E����F�8I"^���*���x��E���������d���#��d�K����j;}{�sO
��4������Z���u�%��D?#�!�%�=�K#*�fr���WzI�{sj�,t`�a$�!]��f+//������D�!tQF*�@YYY�f%%%YYY2��m�`�j���������9[��S�e��y����|�E��>"]`Z$�4���r�{]���2��`�>f�>v�(�������@!N!N�T������:�
�����"{]�����
�cq�Y\e�l���a�A	�	bn�w��4z�	;����/RF���C�m{�yl9*�������yY����]��w����@\5`���+���/���'5���r��M��n6H���VU��W>\�����*2�'�@D��YW�eo1�SC���E�^7h��c��]���A�@Xr^�����c��?�?Oa���R����Cl2H�lz
��.`�'����!���x�k�G�'#���z��
'7�5���"�n�r$�A�#��H��c}#�4��T���U�d�?�k!��/���d�@�a�����y��������U���W�.��R�AO8�k� gR��]��5����/�!�r����������bG.���1�����
�T5}k!�@�P���7::t�f��Be>|�=�<���_���j�>��l�������i����g���j����Z�0l���~����������J��Q(o��|N>��;���}/�����^��^�@M6_�_����h���iZF�d8���B�Lc�=��F�XOw&H�&S5���������h�A��Ft�EC���v�w)P�~Xw��`��`��	!EG� :e0���]���~x�eFx����lD���\5fm�O�)������G���:����?�D�7��~��>u�X����yQ��@ �8��2������{����o��v�������[y<��Y������C����:�)�N�����f����w��X��������2eJYY��m��k��`o�0���xP����N1x�8��}��{7���tvO����V�@_s{	S(���w����oo���,�-�}#-d�I\���|y�L�Mc�>f�=E9�B;���%�G�p	9�r�������Q�#n���=���?"�7-��C_�n��5�:}m��j��j�^����C\eH��nL�\%���\s�I�A����vk��K��D���D���O��g�-�c��*����OfW{�/�[cq�F����s�Wj�@ �k��E��q��
�f����[

~����g�VVVJ$�����o����1��s��
W�����L�2e��]�Hqq�����Vk�m�9�����)����a�����K�����^�����8��U{���
�B��ibs8�� �v��4�E��BG}�i��<R�	����T�J�PP*���K�.��
�6r�)��Y�
�;�92��dw&��`m'�M{�L{��t���U`���]00aE@���]s�|6�O�|��&������*�n��K�(4�@�[k�������DBd<9��U����|��C��y%I~8�';n������/��2�������?�����5j��Q���<���f��M~~>���Le���N����7b��WTT�o�>��	����)����O�
�LTx����	���.k�������x�Qv��"�2M��42�M{�ZK@P[n��6�Ju���v+�#B����\&���
�,+���r���5���������2A�4aJ� �pPN�+w]�io�io�y����~�=	/cx�g������/�W���2$����k���2*
U.��VF9����XNf<����_�~c���PF{A���d���jF�#���,��o.~�]`�t(X�@ �e:%�x<���{n��u3g�0`@FFFZZ��d�x���������7��]�`&??????
6����"�
��S�,��T9 g1L��M�?=��Z�i�]?���`����0.�i�<�i��-(���g�`�P�;{�� i���n�`�(#J�^�r���`hYf|Nl�������[��2��4~kv�
�� ��T����6���G����OVwv�[�|R�hE������Snb����"��������e���G;�}���_�LN�K��������cvO���@ ��)Q������6CED�����!STg�@�N�PZ`�k[�����X*�Pe.(���7WW��|D���#E����+��J�<<R�_����f(w��P�x�����^D����_�l�q���8��j��x����gI�iO�iw]�o4s��-AYl�H#��KY�(�0^m�{�|������M���r�(C1T�G���>Q�A�������lK�'7���n��b�V��(������*�[evG��V������ D0�.�x<X�|����>A�������f������5h�����u7��v����I�*����iJ<F*�|/���h����Va�����V�/�2J�K��F�������������L�^o0��%���q���dw�9����y��4��i��<������+M�:�ua8=��(�
M��K(�]��D��������kp��T���0��^�M^���n����J�7��-���W�<���e���F��^<T�p�&0�3���e&Ow�z�L5��&e��63�d��,�w�D5Y/_v�=#'�j��"\�.�444h4�&��DXh��,@0������?O9�]�"��i���&K� ����f��YO��<�T�kT���M����������Q_d��A��-aE��o�j�s��Cb�?�'V�0#���qA��s�9��2������)��w&�&)�y�sIe�bZ�b�
�?
��U�=U��-��$5�(X��������0^c��H����X���J#D�*W
�l����|�C
��m�!����q�i��~���4;��
[R����i�jWv2�2��2Vr�X�-c$���|� ���:h��2�i��q<&�2�0P�-<��8��U�d����H��lCC&��\�~2t���@t��E�/�������w��h�"6jZ
�a��	��G��l��O��2�����������4���l��� v_����<F��8������
�������Pb�+p�����i��q��"�����*��kp�
�d\�e}�����(�����4�]U
~X���*b�0���t�r175^:.I~g��6��4�0B)�S
�r�_�8K�M���{��}���%�7�t�D��(�������tS��UKP�LW�'����m�r����c��?���V�1��>Z�`W�`��E�����s��j��(�c	F������;�[~����FKn%����>����m ~�]��D[�3}�DY���J[-��BY����B�X��N���"Q�L�����v��F	]�y���8z���k�"]&:\�|���/H��`���e(3�kn��a<yjs�S�@�������`��/&��	�����8Ku��"6���;�P=�=���I
4	.��d����Y�(���W��[1���o�[s`yh�Z\e��������0N2&Q>)I6I�Ih�X�K����T�������Tws�B,�|R}C�G��<>�����?fuUt�W�s4h��z*}��eh���X���~_	���������d���t��|�����8l��|i���������@���8��?���f!fD������ �
B��n�m��F��B[��"�o����2&?\?5`/&O�Q0�]v�N;��yg��-+� ��/��Bs�D
UBe�R�#�<��k�=��3���������=[VV��:���?�DtV�i#L�g�B�E�i(uJM��{|b8O��g�	}G4zL��d�a�G��J�U}��>�lo�"��8.4�.p�9��H���a���"e<fojF����0�]�����i��_�9�Q���5�,]��mU��U��GW�K�&��T�oh�������I��I-7��DD�I�M��^����h�i���j�0@w�ch�.��[�)�(#�������$:X�����~.������:�������/Xf��������5Z2�V������������N���p���	��	��	Y'�.8�A�������yj�����j���$�����DY�jA`�L���#�����n`3������P�c�x�YE������T�3n>��q[)�l�m!��PPdS�3n��W_}5''�����G�8q���H$��s��'�.]}#M\\�V����G����!�������5{���.\�?2t���|��Q			V����sG�y��7Cp#HBe�����o��}��a�a����l:���`�
��/X��K�R#����%t��������3���il�`��0_��s���i����~��}Q�Png�{]����-!�h�ls
D���*�v��W��X�.�������`�����eH�2f�L���
h6���v����(��$�'C�A��"��K*o��_m���;T���Y�%^!���Qc@��TS�����XQ����+�},�������=ai~������Gnr^,i�������^X��������c#bFKnJ������(_|�M������e��L��E�PD��L��E�P��d�4�F�G}y�`;DS�*+�\]@3��jb�]��J������j��h����aB� IDAT��]�fc+ms5�
�(3��@e��8N"��,���;��z���9���I���J)[
�% ����p �W��W��+(/.��! ����a���ZFiK[��6�����8!M�6M�4
�}�|�:�o�|�ONJ�����<>M��dll���
����$Qw�

�1g!��(����`�:T�XmTs���SSSI�k��)0[�t�?��Os�����������5k��9;v����]��!Pq��e$�� ��$�����\���<��q�=��q�B
����EO4[���L���A_{����h*n�+�4�Y���~Pa3D���559�A���y��1#X�7�j*���K�&���������5���T~w��;�Q���=�e�~J�A8�����?r���FBs�����H57��-3e�z�A�e��e&�4g�]�m�k��l
5�����k�c��k%�J��1&`���7�z�1�p���P�5�=^�x}�����,���~��d<x��[�
D�9��!x����
�x��<�k>��<�+�	�p>��:vb��py4S�q�� ���Y�����%'DMj��~��
#47Vladimir Sitnikov
sitnikov.vladimir@gmail.com
In reply to: Konstantin Knizhnik (#46)
Re: Built-in connection pooling

Konstantin> I do not have an explanation of the performance degradation
for this particular workload.

A) The Mongo Java client uses a connection pool of 100 connections by default.
That is, it does not follow "connection per client" (in YCSB terms) but is
capped at 100 connections. I think this can be adjusted by appending
?maxPoolSize=100500 (or ?maxpoolsize=100500) to the Mongo URL.

I wonder if you could try varying that parameter to see if it changes the
Mongo results.
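
For instance, a URL along these lines (host, database, and the value 400 are
purely illustrative) would raise the cap to 400 connections:

    mongodb://localhost:27017/ycsb?maxPoolSize=400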

B) There's a bug in the JDBC client of YCSB (it might affect the PostgreSQL
results, though I'm not sure the impact would be noticeable). The default
configuration is readallfields=true, yet the JDBC client just discards the
result rows instead of accessing the columns. I've filed
https://github.com/brianfrankcooper/YCSB/issues/1087 for that.

C) I might be missing something, but my local (MacBook) benchmarks show that
PostgreSQL 9.6 somehow uses Limit->Sort->BitmapScan style plans.
I picked a "bad" userid value via auto_explain.
The JDBC client uses prepared statements, so a single bad bind value might
spoil the whole thing, causing bad plans for all the values afterwards.
Does it make sense to disable bitmap scans somehow?

For instance:

explain (analyze, buffers) select * From usertable
  where YCSB_KEY >= 'user884845140610037639' order by YCSB_KEY limit 100;
                                                              QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=320.99..321.24 rows=100 width=1033) (actual time=1.408..1.429 rows=100 loops=1)
   Buffers: shared hit=140
   ->  Sort  (cost=320.99..321.33 rows=135 width=1033) (actual time=1.407..1.419 rows=100 loops=1)
         Sort Key: ycsb_key
         Sort Method: quicksort  Memory: 361kB
         Buffers: shared hit=140
         ->  Bitmap Heap Scan on usertable  (cost=9.33..316.22 rows=135 width=1033) (actual time=0.186..0.285 rows=167 loops=1)
               Recheck Cond: ((ycsb_key)::text >= 'user884845140610037639'::text)
               Heap Blocks: exact=137
               Buffers: shared hit=140
               ->  Bitmap Index Scan on usertable_pkey  (cost=0.00..9.29 rows=135 width=0) (actual time=0.172..0.172 rows=167 loops=1)
                     Index Cond: ((ycsb_key)::text >= 'user884845140610037639'::text)
                     Buffers: shared hit=3
 Planning time: 0.099 ms
 Execution time: 1.460 ms

vs

explain (analyze, buffers) select * From usertable
  where YCSB_KEY >= 'user184845140610037639' order by YCSB_KEY limit 100;
                                                              QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.28..89.12 rows=100 width=1033) (actual time=0.174..0.257 rows=100 loops=1)
   Buffers: shared hit=102
   ->  Index Scan using usertable_pkey on usertable  (cost=0.28..2154.59 rows=2425 width=1033) (actual time=0.173..0.246 rows=100 loops=1)
         Index Cond: ((ycsb_key)::text >= 'user184845140610037639'::text)
         Buffers: shared hit=102
 Planning time: 0.105 ms
 Execution time: 0.277 ms
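
If the bitmap plan really is what hurts here, one quick way to test (a
sketch; enable_bitmapscan is an ordinary planner GUC and can be set per
session) would be:

-- Sketch: steer the planner away from bitmap scans for this session only,
-- then re-check the plan chosen for the "bad" bind value.
SET enable_bitmapscan = off;
EXPLAIN (ANALYZE, BUFFERS)
  SELECT * FROM usertable
  WHERE ycsb_key >= 'user884845140610037639'
  ORDER BY ycsb_key LIMIT 100;
RESET enable_bitmapscan;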

Vladimir

#48Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Konstantin Knizhnik (#27)
1 attachment(s)
Re: Built-in connection pooling

Attached please find a new version of the built-in connection pooling patch,
now supporting temporary tables and session GUCs.
Win32 support was also added.
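
For illustration, each pooled session now gets its own temp namespace whose
name combines the backend id with the session id, so temp tables of sessions
multiplexed onto one backend do not collide (the session ids below are made
up):

-- Sessions "a1b2" and "c3d4" are both scheduled onto backend 5.
-- The same statement then creates two distinct relations:
--   session a1b2:  pg_temp_5_a1b2.t1
--   session c3d4:  pg_temp_5_c3d4.t1
CREATE TEMP TABLE t1(x int);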

--

Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

session_pool-5.patch (text/x-patch)
diff --git a/src/backend/catalog/namespace.c b/src/backend/catalog/namespace.c
index 93c4bbf..dfc072c 100644
--- a/src/backend/catalog/namespace.c
+++ b/src/backend/catalog/namespace.c
@@ -194,6 +194,7 @@ char	   *namespace_search_path = NULL;
 /* Local functions */
 static void recomputeNamespacePath(void);
 static void InitTempTableNamespace(void);
+static Oid  GetTempTableNamespace(void);
 static void RemoveTempRelations(Oid tempNamespaceId);
 static void RemoveTempRelationsCallback(int code, Datum arg);
 static void NamespaceCallback(Datum arg, int cacheid, uint32 hashvalue);
@@ -441,9 +442,7 @@ RangeVarGetCreationNamespace(const RangeVar *newRelation)
 		if (strcmp(newRelation->schemaname, "pg_temp") == 0)
 		{
 			/* Initialize temp namespace if first time through */
-			if (!OidIsValid(myTempNamespace))
-				InitTempTableNamespace();
-			return myTempNamespace;
+			return GetTempTableNamespace();
 		}
 		/* use exact schema given */
 		namespaceId = get_namespace_oid(newRelation->schemaname, false);
@@ -452,9 +451,7 @@ RangeVarGetCreationNamespace(const RangeVar *newRelation)
 	else if (newRelation->relpersistence == RELPERSISTENCE_TEMP)
 	{
 		/* Initialize temp namespace if first time through */
-		if (!OidIsValid(myTempNamespace))
-			InitTempTableNamespace();
-		return myTempNamespace;
+		return GetTempTableNamespace();
 	}
 	else
 	{
@@ -463,8 +460,7 @@ RangeVarGetCreationNamespace(const RangeVar *newRelation)
 		if (activeTempCreationPending)
 		{
 			/* Need to initialize temp namespace */
-			InitTempTableNamespace();
-			return myTempNamespace;
+			return GetTempTableNamespace();
 		}
 		namespaceId = activeCreationNamespace;
 		if (!OidIsValid(namespaceId))
@@ -2902,9 +2898,7 @@ LookupCreationNamespace(const char *nspname)
 	if (strcmp(nspname, "pg_temp") == 0)
 	{
 		/* Initialize temp namespace if first time through */
-		if (!OidIsValid(myTempNamespace))
-			InitTempTableNamespace();
-		return myTempNamespace;
+		return GetTempTableNamespace();
 	}
 
 	namespaceId = get_namespace_oid(nspname, false);
@@ -2967,9 +2961,7 @@ QualifiedNameGetCreationNamespace(List *names, char **objname_p)
 		if (strcmp(schemaname, "pg_temp") == 0)
 		{
 			/* Initialize temp namespace if first time through */
-			if (!OidIsValid(myTempNamespace))
-				InitTempTableNamespace();
-			return myTempNamespace;
+			return GetTempTableNamespace();
 		}
 		/* use exact schema given */
 		namespaceId = get_namespace_oid(schemaname, false);
@@ -2982,8 +2974,7 @@ QualifiedNameGetCreationNamespace(List *names, char **objname_p)
 		if (activeTempCreationPending)
 		{
 			/* Need to initialize temp namespace */
-			InitTempTableNamespace();
-			return myTempNamespace;
+			return GetTempTableNamespace();
 		}
 		namespaceId = activeCreationNamespace;
 		if (!OidIsValid(namespaceId))
@@ -3250,8 +3241,11 @@ void
 SetTempNamespaceState(Oid tempNamespaceId, Oid tempToastNamespaceId)
 {
 	/* Worker should not have created its own namespaces ... */
-	Assert(myTempNamespace == InvalidOid);
-	Assert(myTempToastNamespace == InvalidOid);
+	if (!ActiveSession)
+	{
+		Assert(myTempNamespace == InvalidOid);
+		Assert(myTempToastNamespace == InvalidOid);
+	}
 	Assert(myTempNamespaceSubID == InvalidSubTransactionId);
 
 	/* Assign same namespace OIDs that leader has */
@@ -3771,6 +3765,22 @@ recomputeNamespacePath(void)
 	list_free(oidlist);
 }
 
+static Oid
+GetTempTableNamespace(void)
+{
+	if (ActiveSession)
+	{
+		if (!OidIsValid(ActiveSession->tempNamespace))
+			InitTempTableNamespace();
+	}
+	else
+	{
+		if (!OidIsValid(myTempNamespace))
+			InitTempTableNamespace();
+	}
+	return myTempNamespace;
+}
+
 /*
  * InitTempTableNamespace
  *		Initialize temp table namespace on first use in a particular backend
@@ -3782,8 +3792,6 @@ InitTempTableNamespace(void)
 	Oid			namespaceId;
 	Oid			toastspaceId;
 
-	Assert(!OidIsValid(myTempNamespace));
-
 	/*
 	 * First, do permission check to see if we are authorized to make temp
 	 * tables.  We use a nonstandard error message here since "databasename:
@@ -3822,7 +3830,10 @@ InitTempTableNamespace(void)
 				(errcode(ERRCODE_READ_ONLY_SQL_TRANSACTION),
 				 errmsg("cannot create temporary tables during a parallel operation")));
 
-	snprintf(namespaceName, sizeof(namespaceName), "pg_temp_%d", MyBackendId);
+	if (ActiveSession)
+		snprintf(namespaceName, sizeof(namespaceName), "pg_temp_%d_%s", MyBackendId, ActiveSession->id);
+	else
+		snprintf(namespaceName, sizeof(namespaceName), "pg_temp_%d", MyBackendId);
 
 	namespaceId = get_namespace_oid(namespaceName, true);
 	if (!OidIsValid(namespaceId))
@@ -3854,8 +3865,10 @@ InitTempTableNamespace(void)
 	 * it. (We assume there is no need to clean it out if it does exist, since
 	 * dropping a parent table should make its toast table go away.)
 	 */
-	snprintf(namespaceName, sizeof(namespaceName), "pg_toast_temp_%d",
-			 MyBackendId);
+	if (ActiveSession)
+		snprintf(namespaceName, sizeof(namespaceName), "pg_toast_temp_%d_%s", MyBackendId, ActiveSession->id);
+	else
+		snprintf(namespaceName, sizeof(namespaceName), "pg_toast_temp_%d", MyBackendId);
 
 	toastspaceId = get_namespace_oid(namespaceName, true);
 	if (!OidIsValid(toastspaceId))
@@ -3873,7 +3886,11 @@ InitTempTableNamespace(void)
 	 */
 	myTempNamespace = namespaceId;
 	myTempToastNamespace = toastspaceId;
-
+	if (ActiveSession)
+	{
+		ActiveSession->tempNamespace = namespaceId;
+		ActiveSession->tempToastNamespace = toastspaceId;
+	}
 	/* It should not be done already. */
 	AssertState(myTempNamespaceSubID == InvalidSubTransactionId);
 	myTempNamespaceSubID = GetCurrentSubTransactionId();
diff --git a/src/backend/catalog/storage.c b/src/backend/catalog/storage.c
index cff49ba..b728ab1 100644
--- a/src/backend/catalog/storage.c
+++ b/src/backend/catalog/storage.c
@@ -25,6 +25,7 @@
 #include "access/xloginsert.h"
 #include "access/xlogutils.h"
 #include "catalog/catalog.h"
+#include "catalog/namespace.h"
 #include "catalog/storage.h"
 #include "catalog/storage_xlog.h"
 #include "storage/freespace.h"
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index b945b15..8e8a737 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -813,3 +813,32 @@ build_regtype_array(Oid *param_types, int num_params)
 	result = construct_array(tmp_ary, num_params, REGTYPEOID, 4, true, 'i');
 	return PointerGetDatum(result);
 }
+
+/*
+ * Drop all statements prepared in the specified session.
+ */
+void
+DropSessionPreparedStatements(char const* sessionId)
+{
+	HASH_SEQ_STATUS seq;
+	PreparedStatement *entry;
+	size_t idLen = strlen(sessionId);
+
+	/* nothing cached */
+	if (!prepared_queries)
+		return;
+
+	/* walk over cache */
+	hash_seq_init(&seq, prepared_queries);
+	while ((entry = hash_seq_search(&seq)) != NULL)
+	{
+		if (strncmp(entry->stmt_name, sessionId, idLen) == 0 && entry->stmt_name[idLen] == '.')
+		{
+			/* Release the plancache entry */
+			DropCachedPlan(entry->plansource);
+
+			/* Now we can remove the hash table entry */
+			hash_search(prepared_queries, entry->stmt_name, HASH_REMOVE, NULL);
+		}
+	}
+}
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index a4f6d4d..7f40edb 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -1029,6 +1029,17 @@ pq_peekbyte(void)
 }
 
 /* --------------------------------
+ *		pq_available_bytes	- get number of buffered bytes available for reading.
+ *
+ * --------------------------------
+ */
+int
+pq_available_bytes(void)
+{
+	return PqRecvLength - PqRecvPointer;
+}
+
+/* --------------------------------
  *		pq_getbyte_if_available - get a single byte from connection,
  *			if available
  *
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index aba1e92..56ec998 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o dynloader.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o dynloader.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..fa4b4a9
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,143 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, &dst, sizeof(dst), 0); /* pass the duplicated-socket info through the channel */
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	char		buf[CMSG_SPACE(sizeof(sock))];
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	if (sendmsg(chan, &msg, 0) < 0)
+	{
+		return -1;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, &src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen if handle inheritance actually
+	 * worked.)
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	struct msghdr msg = {0};
+	char		c_buffer[256];
+	char		m_buffer[256];
+	struct iovec io;
+	struct cmsghdr *cmsg;
+	pgsocket	sock;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	if (recvmsg(chan, &msg, 0) < 0)
+	{
+		return -1;
+	}
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+
+	return sock;
+#endif
+}
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index f4356fe..7fd901f 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -726,3 +726,65 @@ pgwin32_socket_strerror(int err)
 	}
 	return wserrbuf;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+    union {
+       struct sockaddr_in inaddr;
+       struct sockaddr addr;
+    } a;
+    SOCKET listener;
+    int e;
+    socklen_t addrlen = sizeof(a.inaddr);
+    DWORD flags = 0;
+    int reuse = 1;
+
+    socks[0] = socks[1] = -1;
+
+    listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+    if (listener == -1)
+        return SOCKET_ERROR;
+
+    memset(&a, 0, sizeof(a));
+    a.inaddr.sin_family = AF_INET;
+    a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+    a.inaddr.sin_port = 0;
+
+    for (;;) {
+        if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+               (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+            break;
+        if  (bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+            break;
+
+        memset(&a, 0, sizeof(a));
+        if  (getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+            break;
+        a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+        a.inaddr.sin_family = AF_INET;
+
+        if (listen(listener, 1) == SOCKET_ERROR)
+            break;
+
+        socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+        if (socks[0] == -1)
+            break;
+        if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+            break;
+
+        socks[1] = accept(listener, NULL, NULL);
+        if (socks[1] == -1)
+            break;
+
+        closesocket(listener);
+        return 0;
+    }
+
+    e = WSAGetLastError();
+    closesocket(listener);
+    closesocket(socks[0]);
+    closesocket(socks[1]);
+    WSASetLastError(e);
+    socks[0] = socks[1] = -1;
+    return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index f3ddf82..473a1d4 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -169,6 +169,7 @@ typedef struct bkend
 	pid_t		pid;			/* process id of backend */
 	int32		cancel_key;		/* cancel key for cancels for this backend */
 	int			child_slot;		/* PMChildSlot for this backend, if any */
+	pgsocket    session_send_sock;  /* Write end of the socket pair used to pass session socket descriptors to this backend process */
 
 	/*
 	 * Flavor of backend or auxiliary process.  Note that BACKEND_TYPE_WALSND
@@ -182,6 +183,15 @@ typedef struct bkend
 } Backend;
 
 static dlist_head BackendList = DLIST_STATIC_INIT(BackendList);
+/*
+ * Pointer into the backend list used to implement round-robin distribution of sessions across backends.
+ * This variable is either NULL or points to a normal backend.
+ */
+static Backend*   BackendListClockPtr;
+/*
+ * Number of active normal backends
+ */
+static int        nNormalBackends;
 
 #ifdef EXEC_BACKEND
 static Backend *ShmemBackendArray;
@@ -412,7 +422,6 @@ static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
 static int	BackendStartup(Port *port);
-static int	ProcessStartupPacket(Port *port, bool SSLdone);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
 static int	initMasks(fd_set *rmask);
@@ -485,6 +494,7 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket sessionsocket;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -568,6 +578,22 @@ HANDLE		PostmasterHandle;
 #endif
 
 /*
+ * Move current backend pointer to the next normal backend.
+ * This function is called either when a new session is started (to implement the round-robin policy) or when the backend pointed to by BackendListClockPtr is terminated.
+ */
+static void AdvanceBackendListClockPtr(void)
+{
+	Backend* b = BackendListClockPtr;
+	do {
+		dlist_node* node = &b->elem;
+		node = node->next ? node->next : BackendList.head.next;
+		b = dlist_container(Backend, elem, node);
+	} while (b->bkend_type != BACKEND_TYPE_NORMAL && b != BackendListClockPtr);
+
+	BackendListClockPtr = b;
+}
+
+/*
  * Postmaster main entry point
  */
 void
@@ -1944,8 +1970,8 @@ initMasks(fd_set *rmask)
  * send anything to the client, which would typically be appropriate
  * if we detect a communications failure.)
  */
-static int
-ProcessStartupPacket(Port *port, bool SSLdone)
+int
+ProcessStartupPacket(Port *port, bool SSLdone, MemoryContext memctx)
 {
 	int32		len;
 	void	   *buf;
@@ -2043,7 +2069,7 @@ retry1:
 #endif
 		/* regular startup packet, cancel, etc packet should follow... */
 		/* but not another SSL negotiation request */
-		return ProcessStartupPacket(port, true);
+		return ProcessStartupPacket(port, true, memctx);
 	}
 
 	/* Could add additional special packet types here */
@@ -2073,7 +2099,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2449,7 +2475,7 @@ ConnCreate(int serverFd)
 		ConnFree(port);
 		return NULL;
 	}
-
+	SessionPoolSock = PGINVALID_SOCKET;
 	/*
 	 * Allocate GSSAPI specific state struct
 	 */
@@ -3236,6 +3262,24 @@ CleanupBackgroundWorker(int pid,
 }
 
 /*
+ * Unlink backend from backend's list and free memory
+ */
+static void UnlinkBackend(Backend* bp)
+{
+	if (bp->bkend_type == BACKEND_TYPE_NORMAL)
+	{
+		if (bp == BackendListClockPtr)
+			AdvanceBackendListClockPtr();
+		if (bp->session_send_sock != PGINVALID_SOCKET)
+			close(bp->session_send_sock);
+		elog(DEBUG2, "Cleanup backend %d", bp->pid);
+		nNormalBackends -= 1;
+	}
+	dlist_delete(&bp->elem);
+	free(bp);
+}
+
+/*
  * CleanupBackend -- cleanup after terminated backend.
  *
  * Remove all local state associated with backend.
@@ -3312,8 +3356,7 @@ CleanupBackend(int pid,
 				 */
 				BackgroundWorkerStopNotifications(bp->pid);
 			}
-			dlist_delete(iter.cur);
-			free(bp);
+			UnlinkBackend(bp);
 			break;
 		}
 	}
@@ -3415,8 +3458,7 @@ HandleChildCrash(int pid, int exitstatus, const char *procname)
 				ShmemBackendArrayRemove(bp);
 #endif
 			}
-			dlist_delete(iter.cur);
-			free(bp);
+			UnlinkBackend(bp);
 			/* Keep looping so we can signal remaining backends */
 		}
 		else
@@ -4017,6 +4059,20 @@ BackendStartup(Port *port)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
+	int         session_pipe[2];
+
+	if (SessionPoolSize != 0 && nNormalBackends >= SessionPoolSize)
+	{
+		/* With session pooling, instead of spawning a new backend, open the new session in one of the existing backends. */
+		Assert(BackendListClockPtr && BackendListClockPtr->session_send_sock != PGINVALID_SOCKET);
+		elog(DEBUG2, "Start new session for socket %d at backend %d total %d", port->sock, BackendListClockPtr->pid, nNormalBackends);
+		/* Send connection socket to the backend pointed by BackendListClockPtr */
+		if (pg_send_sock(BackendListClockPtr->session_send_sock, port->sock, BackendListClockPtr->pid) < 0)
+			elog(FATAL, "Failed to send session socket: %m");
+		AdvanceBackendListClockPtr(); /* round-robin backends */
+		return STATUS_OK;
+	}
+
 
 	/*
 	 * Create backend data structure.  Better before the fork() so we can
@@ -4030,7 +4086,6 @@ BackendStartup(Port *port)
 				 errmsg("out of memory")));
 		return STATUS_ERROR;
 	}
-
 	/*
 	 * Compute the cancel key that will be assigned to this backend. The
 	 * backend will have its own copy in the forked-off process' value of
@@ -4063,12 +4118,28 @@ BackendStartup(Port *port)
 	/* Hasn't asked to be notified about any bgworkers yet */
 	bn->bgworker_notify = false;
 
+	/* Create socket pair for sending session sockets to the backend */
+	if (SessionPoolSize != 0)
+	{
+		if (socketpair(AF_UNIX, SOCK_DGRAM, 0, session_pipe) < 0)
+			ereport(FATAL,
+					(errcode_for_file_access(),
+					 errmsg_internal("could not create socket pair for launching sessions: %m")));
+#ifdef WIN32
+		SessionPoolSock = session_pipe[0];
+#endif
+	}
 #ifdef EXEC_BACKEND
 	pid = backend_forkexec(port);
 #else							/* !EXEC_BACKEND */
 	pid = fork_process();
 	if (pid == 0)				/* child */
 	{
+		if (SessionPoolSize != 0)
+		{
+			SessionPoolSock = session_pipe[0]; /* Use this socket for receiving client session socket descriptor */
+			close(session_pipe[1]); /* Close unused end of the pipe */
+		}
 		free(bn);
 
 		/* Detangle from postmaster */
@@ -4110,9 +4181,19 @@ BackendStartup(Port *port)
 	 * of backends.
 	 */
 	bn->pid = pid;
+	if (SessionPoolSize != 0)
+	{
+		bn->session_send_sock = session_pipe[1]; /* Use this socket for sending client session socket descriptor */
+		close(session_pipe[0]); /* Close unused end of the pipe */
+	}
+	else
+		bn->session_send_sock = PGINVALID_SOCKET;
 	bn->bkend_type = BACKEND_TYPE_NORMAL;	/* Can change later to WALSND */
 	dlist_push_head(&BackendList, &bn->elem);
-
+	if (BackendListClockPtr == NULL)
+		BackendListClockPtr = bn;
+	nNormalBackends += 1;
+	elog(DEBUG2, "Start backend %d total %d", pid, nNormalBackends);
 #ifdef EXEC_BACKEND
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
@@ -4299,7 +4380,7 @@ BackendInitialize(Port *port)
 	 * Receive the startup packet (which might turn out to be a cancel request
 	 * packet).
 	 */
-	status = ProcessStartupPacket(port, false);
+	status = ProcessStartupPacket(port, false, TopMemoryContext);
 
 	/*
 	 * Stop here if it was bad or a cancel packet.  ProcessStartupPacket
@@ -6033,6 +6114,9 @@ save_backend_variables(BackendParameters *param, Port *port,
 	if (!write_inheritable_socket(&param->portsocket, port->sock, childPid))
 		return false;
 
+	if (!write_inheritable_socket(&param->sessionsocket, SessionPoolSock, childPid))
+		return false;
+
 	strlcpy(param->DataDir, DataDir, MAXPGPATH);
 
 	memcpy(&param->ListenSocket, &ListenSocket, sizeof(ListenSocket));
@@ -6265,6 +6349,7 @@ restore_backend_variables(BackendParameters *param, Port *port)
 {
 	memcpy(port, &param->port, sizeof(Port));
 	read_inheritable_socket(&port->sock, &param->portsocket);
+	read_inheritable_socket(&SessionPoolSock, &param->sessionsocket);
 
 	SetDataDir(param->DataDir);
 
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index e6706f7..13d26fc 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -76,6 +76,7 @@ struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* head of singly-linked list of free events, linked by "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -129,9 +130,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -562,6 +563,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -667,6 +669,7 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
 	Assert(set->nevents < set->nevents_space);
@@ -690,8 +693,19 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->nevents += 1;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -718,15 +732,38 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
 /*
+ * Remove event with specified socket descriptor
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, pgsocket fd)
+{
+	int i, n = set->nevents;
+	for (i = 0; i < n; i++)
+	{
+		WaitEvent  *event = &set->events[i];
+		if (event->fd == fd)
+		{
+#if defined(WAIT_USE_EPOLL)
+			WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+			WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+			WaitEventAdjustWin32(set, event, true);
+#endif
+			break;
+		}
+	}
+}
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -774,9 +811,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -827,14 +864,33 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
 				 errmsg("epoll_ctl() failed: %m")));
+
+	if (action == EPOLL_CTL_DEL)
+	{
+		int pos = event->pos;
+		event->fd = PGINVALID_SOCKET;
+		set->nevents -= 1;
+		event->pos = set->free_events;
+		set->free_events = pos;
+	}
 }
 #endif
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	int pos = event->pos;
+	struct pollfd *pollfd = &set->pollfds[pos];
+
+	if (remove)
+	{
+		set->nevents -= 1;
+		*pollfd = set->pollfds[set->nevents];
+		set->events[pos] = set->events[set->nevents];
+		event->pos = pos;
+		return;
+	}
 
 	pollfd->revents = 0;
 	pollfd->fd = event->fd;
@@ -865,9 +921,24 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	int pos = event->pos;
+	HANDLE	   *handle = &set->handles[pos + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		set->nevents -= 1;
+		set->events[pos] = set->events[set->nevents];
+		*handle = set->handles[set->nevents + 1];
+		event->pos = pos;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -880,7 +951,7 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 	}
 	else
 	{
-		int			flags = FD_CLOSE;	/* always check for errors/EOF */
+		int flags = FD_CLOSE;	/* always check for errors/EOF */
 
 		if (event->events & WL_SOCKET_READABLE)
 			flags |= FD_READ;
@@ -1296,7 +1367,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	{
 		if (cur_event->reset)
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index ddc3ec8..bf704d5 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -40,6 +40,7 @@
 #include "access/printtup.h"
 #include "access/xact.h"
 #include "catalog/pg_type.h"
+#include "catalog/namespace.h"
 #include "commands/async.h"
 #include "commands/prepare.h"
 #include "libpq/libpq.h"
@@ -75,9 +76,9 @@
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
 #include "utils/timestamp.h"
+#include "utils/builtins.h"
 #include "mb/pg_wchar.h"
 
-
 /* ----------------
  *		global variables
  * ----------------
@@ -98,6 +99,10 @@ int			max_stack_depth = 100;
 /* wait N seconds to allow attach from a debugger */
 int			PostAuthDelay = 0;
 
+/* Local socket for redirecting sessions to the backends */
+pgsocket    SessionPoolSock = PGINVALID_SOCKET;
+/* Pointer to the active session */
+SessionContext* ActiveSession;
 
 
 /* ----------------
@@ -169,6 +174,12 @@ static ProcSignalReason RecoveryConflictReason;
 static MemoryContext row_description_context = NULL;
 static StringInfoData row_description_buf;
 
+static WaitEventSet*   SessionPool;    /* Set of all sessions sockets */
+static int64           SessionCount;   /* Number of sessions */
+static Port*           BackendPort;    /* Reference to the original port of this backend created when this backend was launched.
+										* The session using this port may already be terminated, but since the port is allocated in TopMemoryContext,
+										* its contents remain valid and are used as a template for the ports of new sessions */
+
 /* ----------------------------------------------------------------
  *		decls for routines only used in this file
  * ----------------------------------------------------------------
@@ -194,6 +205,27 @@ static void log_disconnections(int code, Datum arg);
 static void enable_statement_timeout(void);
 static void disable_statement_timeout(void);
 
+/*
+ * Generate session ID unique within this backend
+ */
+static char* CreateSessionId(void)
+{
+	char buf[64];
+	pg_lltoa(++SessionCount, buf);
+	return pstrdup(buf);
+}
+
+/*
+ * Free all memory associated with session and delete session object itself
+ */
+static void DeleteSession(SessionContext* session)
+{
+	elog(DEBUG1, "Delete session %p, id=%s,  memory context=%p", session, session->id, session->memory);
+	RestoreSessionGUCs(session);
+	ReleaseSessionGUCs(session);
+	MemoryContextDelete(session->memory);
+	free(session);
+}
 
 /* ----------------------------------------------------------------
  *		routines to obtain user input
@@ -1232,6 +1264,12 @@ exec_parse_message(const char *query_string,	/* string to execute */
 	bool		save_log_statement_stats = log_statement_stats;
 	char		msec_str[32];
 
+	if (ActiveSession && stmt_name[0] != '\0')
+	{
+		/* Make names of prepared statements unique for session in case of using internal session pool */
+		stmt_name = psprintf("%s.%s", ActiveSession->id, stmt_name);
+	}
+
 	/*
 	 * Report query to various monitoring facilities.
 	 */
@@ -1503,6 +1541,12 @@ exec_bind_message(StringInfo input_message)
 	portal_name = pq_getmsgstring(input_message);
 	stmt_name = pq_getmsgstring(input_message);
 
+	if (ActiveSession && stmt_name[0] != '\0')
+	{
+		/* Make names of prepared statements unique for session in case of using internal session pool */
+		stmt_name = psprintf("%s.%s", ActiveSession->id, stmt_name);
+	}
+
 	ereport(DEBUG2,
 			(errmsg("bind %s to %s",
 					*portal_name ? portal_name : "<unnamed>",
@@ -2325,6 +2369,12 @@ exec_describe_statement_message(const char *stmt_name)
 	CachedPlanSource *psrc;
 	int			i;
 
+	if (ActiveSession && stmt_name[0] != '\0')
+	{
+		/* Make names of prepared statements unique for session in case of using internal session pool */
+		stmt_name = psprintf("%s.%s", ActiveSession->id, stmt_name);
+	}
+
 	/*
 	 * Start up a transaction command. (Note that this will normally change
 	 * current memory context.) Nothing happens if we are already in one.
@@ -3603,7 +3653,6 @@ process_postgres_switches(int argc, char *argv[], GucContext ctx,
 #endif
 }
 
-
 /* ----------------------------------------------------------------
  * PostgresMain
  *	   postgres main loop -- all backends, interactive or otherwise start here
@@ -3654,6 +3703,21 @@ PostgresMain(int argc, char *argv[],
 							progname)));
 	}
 
+	/* Assign session for this backend in case of session pooling */
+	if (SessionPoolSize != 0)
+	{
+		MemoryContext oldcontext;
+		ActiveSession = (SessionContext*)calloc(1, sizeof(SessionContext));
+		ActiveSession->memory = AllocSetContextCreate(TopMemoryContext,
+													   "SessionMemoryContext",
+													   ALLOCSET_DEFAULT_SIZES);
+		oldcontext = MemoryContextSwitchTo(ActiveSession->memory);
+		ActiveSession->id = CreateSessionId();
+		ActiveSession->port = MyProcPort;
+		BackendPort = MyProcPort;
+		MemoryContextSwitchTo(oldcontext);
+	}
+
 	/* Acquire configuration parameters, unless inherited from postmaster */
 	if (!IsUnderPostmaster)
 	{
@@ -3783,7 +3847,7 @@ PostgresMain(int argc, char *argv[],
 	 * ... else we'd need to copy the Port data first.  Also, subsidiary data
 	 * such as the username isn't lost either; see ProcessStartupPacket().
 	 */
-	if (PostmasterContext)
+	if (PostmasterContext && SessionPoolSize == 0)
 	{
 		MemoryContextDelete(PostmasterContext);
 		PostmasterContext = NULL;
@@ -4069,6 +4133,150 @@ PostgresMain(int argc, char *argv[],
 
 			ReadyForQuery(whereToSendOutput);
 			send_ready_for_query = false;
+
+			/*
+			 * Here we perform multiplexing of client sessions if session pooling is enabled.
+			 * Since we perform transaction-level pooling, rescheduling is done only when we are not inside a transaction.
+			 */
+			if (SessionPoolSock != PGINVALID_SOCKET && !IsTransactionState() && pq_available_bytes() == 0)
+			{
+				WaitEvent ready_client;
+				if (SessionPool == NULL)
+				{
+					/* Construct wait event set if not constructed yet */
+					SessionPool = CreateWaitEventSet(TopMemoryContext, MaxSessions);
+					/* Add event to detect postmaster death */
+					AddWaitEventToSet(SessionPool, WL_POSTMASTER_DEATH, PGINVALID_SOCKET, NULL, ActiveSession);
+					/* Add event for backends latch */
+					AddWaitEventToSet(SessionPool, WL_LATCH_SET, PGINVALID_SOCKET, MyLatch, ActiveSession);
+					/* Add event for accepting new sessions */
+					AddWaitEventToSet(SessionPool, WL_SOCKET_READABLE, SessionPoolSock, NULL, ActiveSession);
+					/* Add event for current session */
+					AddWaitEventToSet(SessionPool, WL_SOCKET_READABLE, MyProcPort->sock, NULL, ActiveSession);
+				}
+			  ChooseSession:
+				DoingCommandRead = true;
+				/* Select a client session that is ready to send a new query */
+				if (WaitEventSetWait(SessionPool, -1, &ready_client, 1, PG_WAIT_CLIENT) != 1)
+				{
+					/* TODO: do some error recovery here */
+					elog(FATAL, "Failed to poll client sessions");
+				}
+				CHECK_FOR_INTERRUPTS();
+				DoingCommandRead = false;
+
+				if (ready_client.events & WL_POSTMASTER_DEATH)
+					ereport(FATAL,
+							(errcode(ERRCODE_ADMIN_SHUTDOWN),
+							 errmsg("terminating connection due to unexpected postmaster exit")));
+
+				if (ready_client.events & WL_LATCH_SET)
+				{
+					ResetLatch(MyLatch);
+					ProcessClientReadInterrupt(true);
+					goto ChooseSession;
+				}
+
+				if (ready_client.fd == SessionPoolSock)
+				{
+					/* Here we handle the case of attaching a new session */
+					int		 status;
+					SessionContext* session;
+					StringInfoData buf;
+					Port*    port;
+					pgsocket sock;
+					MemoryContext oldcontext;
+
+					sock = pg_recv_sock(SessionPoolSock);
+					if (sock < 0)
+						elog(FATAL, "Failed to receive session socket: %m");
+
+					session = (SessionContext*)calloc(1, sizeof(SessionContext));
+					session->memory = AllocSetContextCreate(TopMemoryContext,
+															"SessionMemoryContext",
+															ALLOCSET_DEFAULT_SIZES);
+					oldcontext = MemoryContextSwitchTo(session->memory);
+					port = palloc(sizeof(Port));
+					memcpy(port, BackendPort, sizeof(Port));
+
+					/*
+					 * Receive the startup packet (which might turn out to be a cancel request
+					 * packet).
+					 */
+					port->sock = sock;
+					session->port = port;
+					session->id = CreateSessionId();
+
+					MyProcPort = port;
+					status = ProcessStartupPacket(port, false, session->memory);
+					MemoryContextSwitchTo(oldcontext);
+
+					/*
+					 * TODO: Currently we assume that all sessions are accessing the same database under the same user.
+					 * Just report an error if it is not true.
+					 */
+					if (strcmp(port->database_name, BackendPort->database_name) != 0 ||
+						strcmp(port->user_name, BackendPort->user_name) != 0)
+					{
+						elog(FATAL, "Failed to open session (dbname=%s user=%s) in backend %d (dbname=%s user=%s)",
+							 port->database_name, port->user_name,
+							 MyProcPid, BackendPort->database_name, BackendPort->user_name);
+					}
+					else if (status == STATUS_OK)
+					{
+						elog(DEBUG2, "Start new session %d in backend %d for database %s user %s",
+							 sock, MyProcPid, port->database_name, port->user_name);
+						RestoreSessionGUCs(ActiveSession);
+						ActiveSession = session;
+						AddWaitEventToSet(SessionPool, WL_SOCKET_READABLE, sock, NULL, session);
+
+						SetCurrentStatementStartTimestamp();
+						StartTransactionCommand();
+						PerformAuthentication(MyProcPort);
+						CommitTransactionCommand();
+
+						/*
+						 * Send GUC options to the client
+						 */
+						BeginReportingGUCOptions();
+
+						/*
+						 * Send this backend's cancellation info to the frontend.
+						 */
+						pq_beginmessage(&buf, 'K');
+						pq_sendint32(&buf, (int32) MyProcPid);
+						pq_sendint32(&buf, (int32) MyCancelKey);
+						pq_endmessage(&buf);
+
+						/* Need not flush since ReadyForQuery will do it. */
+						send_ready_for_query = true;
+						continue;
+					}
+					else
+					{
+						/* Error while processing the startup packet.
+						 * Reject this session and go back to waiting on the session sockets.
+						 */
+						DeleteSession(session);
+						elog(LOG, "Session startup failed");
+						close(sock);
+						goto ChooseSession;
+					}
+				}
+				else
+				{
+					SessionContext* newSession = (SessionContext*)ready_client.user_data;
+					if (ActiveSession != newSession)
+					{
+						elog(DEBUG2, "Switch to session %d in backend %d", ready_client.fd, MyProcPid);
+						RestoreSessionGUCs(ActiveSession);
+						ActiveSession = newSession;
+						RestoreSessionGUCs(ActiveSession);
+						MyProcPort = ActiveSession->port;
+						SetTempNamespaceState(ActiveSession->tempNamespace, ActiveSession->tempToastNamespace);
+					}
+				}
+			}
 		}
 
 		/*
@@ -4350,6 +4558,39 @@ PostgresMain(int argc, char *argv[],
 				 * it will fail to be called during other backend-shutdown
 				 * scenarios.
 				 */
+
+				if (SessionPool)
+				{
+					/* With session pooling, close the session but do not terminate the backend,
+					 * even if there are no more sessions in this backend.
+					 * The reason for keeping the backend alive is to prevent redundant process launches when
+					 * some client repeatedly opens and closes connections to the database.
+					 * The maximal number of launched backends under connection pooling is intended to be
+					 * optimal for this system and workload, so there is no reason to try to reduce this number
+					 * when there are no active sessions.
+					 */
+					DeleteWaitEventFromSet(SessionPool, MyProcPort->sock);
+					elog(DEBUG1, "Close session %d in backend %d", MyProcPort->sock, MyProcPid);
+
+					pq_getmsgend(&input_message);
+					if (pq_is_reading_msg())
+						pq_endmsgread();
+
+					close(MyProcPort->sock);
+					MyProcPort->sock = PGINVALID_SOCKET;
+					MyProcPort = NULL;
+
+					if (ActiveSession)
+					{
+						DropSessionPreparedStatements(ActiveSession->id);
+						DeleteSession(ActiveSession);
+						ActiveSession = NULL;
+					}
+					whereToSendOutput = DestRemote;
+					/* Need to perform rescheduling to some other session or accept new session */
+					goto ChooseSession;
+				}
+				elog(DEBUG1, "Terminate backend %d", MyProcPid);
 				proc_exit(0);
 
 			case 'd':			/* copy data */
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 54fa4a3..b2f43a8 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -120,7 +120,9 @@ int			maintenance_work_mem = 16384;
  * register background workers.
  */
 int			NBuffers = 1000;
+int			SessionPoolSize = 0;
 int			MaxConnections = 90;
+int			MaxSessions = 1000;
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c
index f9b3309..571c80f 100644
--- a/src/backend/utils/init/postinit.c
+++ b/src/backend/utils/init/postinit.c
@@ -65,7 +65,7 @@
 
 static HeapTuple GetDatabaseTuple(const char *dbname);
 static HeapTuple GetDatabaseTupleByOid(Oid dboid);
-static void PerformAuthentication(Port *port);
+void PerformAuthentication(Port *port);
 static void CheckMyDatabase(const char *name, bool am_superuser);
 static void InitCommunication(void);
 static void ShutdownPostgres(int code, Datum arg);
@@ -180,7 +180,7 @@ GetDatabaseTupleByOid(Oid dboid)
  *
  * returns: nothing.  Will not return at all if there's any failure.
  */
-static void
+void
 PerformAuthentication(Port *port)
 {
 	/* This should be set already, but let's make sure */
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 72f6be3..58258b4 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -1871,6 +1871,29 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"max_sessions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the maximum number of client session."),
+			gettext_noop("Maximal number of client sessions which can be handled by one backend if session pooling is switched on. "
+						 "So maximal number of client connections is session_pool_size*max_sessions")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"session_pool_size", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends, and the maximal number of backends is determined by this parameter. "
+						 "Launched backends are never terminated, even when there are no active sessions.")
+		},
+		&SessionPoolSize,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
 			NULL
@@ -5104,6 +5127,95 @@ NewGUCNestLevel(void)
 }
 
 /*
+ * Set GUCs for this session
+ */
+void
+RestoreSessionGUCs(SessionContext* session)
+{
+	SessionGUC* sg;
+	if (session == NULL)
+		return;
+	for (sg = session->gucs; sg != NULL; sg = sg->next)
+	{
+		void* old_extra = sg->var->extra;
+		sg->var->extra = sg->val.extra;
+		switch (sg->var->vartype)
+		{
+		  case PGC_BOOL:
+		  {
+			  struct config_bool *conf = (struct config_bool*)sg->var;
+			  bool oldval = *conf->variable;
+			  *conf->variable = sg->val.val.boolval;
+			  if (conf->assign_hook)
+				  conf->assign_hook(sg->val.val.boolval, sg->val.extra);
+			  sg->val.val.boolval = oldval;
+			  break;
+		  }
+		  case PGC_INT:
+		  {
+			  struct config_int *conf = (struct config_int*)sg->var;
+			  int oldval = *conf->variable;
+			  *conf->variable = sg->val.val.intval;
+			  if (conf->assign_hook)
+				  conf->assign_hook(sg->val.val.intval, sg->val.extra);
+			  sg->val.val.intval = oldval;
+			  break;
+		  }
+		  case PGC_REAL:
+		  {
+			  struct config_real *conf = (struct config_real*)sg->var;
+			  double oldval = *conf->variable;
+			  *conf->variable = sg->val.val.realval;
+			  if (conf->assign_hook)
+				  conf->assign_hook(sg->val.val.realval, sg->val.extra);
+			  sg->val.val.realval = oldval;
+			  break;
+		  }
+		  case PGC_STRING:
+		  {
+			  struct config_string *conf = (struct config_string*)sg->var;
+			  char* oldval = *conf->variable;
+			  *conf->variable = sg->val.val.stringval;
+			  if (conf->assign_hook)
+				  conf->assign_hook(sg->val.val.stringval, sg->val.extra);
+			  sg->val.val.stringval = oldval;
+			  break;
+		  }
+		  case PGC_ENUM:
+		  {
+			  struct config_enum *conf = (struct config_enum*)sg->var;
+			  int oldval = *conf->variable;
+			  *conf->variable = sg->val.val.enumval;
+			  if (conf->assign_hook)
+				  conf->assign_hook(sg->val.val.enumval, sg->val.extra);
+			  sg->val.val.enumval = oldval;
+			  break;
+		  }
+		}
+		sg->val.extra = old_extra;
+	}
+}
+
+/*
+ * Deallocate memory for session GUCs
+ */
+void
+ReleaseSessionGUCs(SessionContext* session)
+{
+	SessionGUC* sg;
+	for (sg = session->gucs; sg != NULL; sg = sg->next)
+	{
+		if (sg->val.extra)
+			set_extra_field(sg->var, &sg->val.extra, NULL);
+		if (sg->var->vartype == PGC_STRING)
+		{
+			struct config_string* conf = (struct config_string*)sg->var;
+			set_string_field(conf, &sg->val.val.stringval, NULL);
+		}
+	}
+}
+
+/*
  * Do GUC processing at transaction or subtransaction commit or abort, or
  * when exiting a function that has proconfig settings, or when undoing a
  * transient assignment to some GUC variables.  (The name is thus a bit of
@@ -5172,7 +5284,42 @@ AtEOXact_GUC(bool isCommit, int nestLevel)
 				else if (stack->state == GUC_SET)
 				{
 					/* we keep the current active value */
-					discard_stack_value(gconf, &stack->prior);
+					if (ActiveSession)
+					{
+						SessionGUC* sg;
+						for (sg = ActiveSession->gucs; sg != NULL && sg->var != gconf; sg = sg->next);
+						if (sg == NULL)
+						{
+							sg = MemoryContextAllocZero(ActiveSession->memory,
+														sizeof(SessionGUC));
+							sg->var = gconf;
+							sg->next = ActiveSession->gucs;
+							ActiveSession->gucs = sg;
+						}
+						switch (gconf->vartype)
+						{
+						  case PGC_BOOL:
+							sg->val.val.boolval = stack->prior.val.boolval;
+							break;
+						  case PGC_INT:
+							sg->val.val.intval = stack->prior.val.intval;
+							break;
+						  case PGC_REAL:
+							sg->val.val.realval = stack->prior.val.realval;
+							break;
+						  case PGC_STRING:
+							sg->val.val.stringval = stack->prior.val.stringval;
+							break;
+						  case PGC_ENUM:
+							sg->val.val.enumval = stack->prior.val.enumval;
+							break;
+						}
+						sg->val.extra = stack->prior.extra;
+					}
+					else
+					{
+						discard_stack_value(gconf, &stack->prior);
+					}
 				}
 				else			/* must be GUC_LOCAL */
 					restorePrior = true;
@@ -5197,8 +5344,8 @@ AtEOXact_GUC(bool isCommit, int nestLevel)
 
 					case GUC_SET:
 						/* next level always becomes SET */
-						discard_stack_value(gconf, &stack->prior);
-						if (prev->state == GUC_SET_LOCAL)
+					    discard_stack_value(gconf, &stack->prior);
+					    if (prev->state == GUC_SET_LOCAL)
 							discard_stack_value(gconf, &prev->masked);
 						prev->state = GUC_SET;
 						break;
diff --git a/src/include/commands/prepare.h b/src/include/commands/prepare.h
index ffec029..cb5f8d4 100644
--- a/src/include/commands/prepare.h
+++ b/src/include/commands/prepare.h
@@ -56,5 +56,6 @@ extern TupleDesc FetchPreparedStatementResultDesc(PreparedStatement *stmt);
 extern List *FetchPreparedStatementTargetList(PreparedStatement *stmt);
 
 extern void DropAllPreparedStatements(void);
+extern void DropSessionPreparedStatements(char const* sessionId);
 
 #endif							/* PREPARE_H */
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 2e7725d..9169b21 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -71,6 +71,7 @@ extern int	pq_getbyte(void);
 extern int	pq_peekbyte(void);
 extern int	pq_getbyte_if_available(unsigned char *c);
 extern int	pq_putbytes(const char *s, size_t len);
+extern int  pq_available_bytes(void);
 
 /*
  * prototypes for functions in be-secure.c
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 54ee273..a9f9228 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -157,6 +157,8 @@ extern PGDLLIMPORT char *DataDir;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
 extern PGDLLIMPORT int max_worker_processes;
 extern int	max_parallel_workers;
 
@@ -420,6 +422,7 @@ extern void InitializeMaxBackends(void);
 extern void InitPostgres(const char *in_dbname, Oid dboid, const char *username,
 			 Oid useroid, char *out_dbname);
 extern void BaseInit(void);
+extern void PerformAuthentication(struct Port *port);
 
 /* in utils/init/miscinit.c */
 extern bool IgnoreSystemIndexes;
diff --git a/src/include/port.h b/src/include/port.h
index 3e528fa..8a0ac98 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index d31c28f..e667434 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -447,6 +447,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -456,6 +457,7 @@ int			pgwin32_connect(SOCKET s, const struct sockaddr *name, int namelen);
 int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptfds, const struct timeval *timeout);
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 const char *pgwin32_socket_strerror(int err);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index 1877eef..c9527c9 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -62,6 +62,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+struct Port;
+extern int	ProcessStartupPacket(struct Port *port, bool SSLdone, MemoryContext memctx);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index a4bcb48..10f30d1 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -176,6 +176,8 @@ extern int WaitLatch(volatile Latch *latch, int wakeEvents, long timeout,
 extern int WaitLatchOrSocket(volatile Latch *latch, int wakeEvents,
 				  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, pgsocket fd);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index 5c19a61..11eded3 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -21,6 +21,7 @@
 #include "storage/lock.h"
 #include "storage/pg_sema.h"
 #include "storage/proclist_types.h"
+#include "utils/guc_tables.h"
 
 /*
  * Each backend advertises up to PGPROC_MAX_CACHED_SUBXIDS TransactionIds
@@ -273,6 +274,29 @@ extern PGDLLIMPORT PROC_HDR *ProcGlobal;
 
 extern PGPROC *PreparedXactProcs;
 
+typedef struct SessionGUC
+{
+	struct SessionGUC* next;
+	config_var_value   val;
+	struct config_generic *var;
+} SessionGUC;
+
+/*
+ * Information associated with client session
+ */
+typedef struct SessionContext
+{
+	MemoryContext memory; /* memory context used for global session data (replacement of TopMemoryContext) */
+	struct Port* port;           /* connection port */
+	char*        id;             /* session identifier used to construct unique prepared statement names */
+	Oid          tempNamespace;  /* temporary namespace */
+	Oid          tempToastNamespace;  /* temporary toast namespace */
+	SessionGUC*  gucs;
+} SessionContext;
+
+
+extern PGDLLIMPORT SessionContext *ActiveSession; 
+
 /* Accessor for PGPROC given a pgprocno. */
 #define GetPGProcByNumber(n) (&ProcGlobal->allProcs[(n)])
 
diff --git a/src/include/tcop/tcopprot.h b/src/include/tcop/tcopprot.h
index 63b4e48..191eeaa 100644
--- a/src/include/tcop/tcopprot.h
+++ b/src/include/tcop/tcopprot.h
@@ -34,6 +34,7 @@ extern CommandDest whereToSendOutput;
 extern PGDLLIMPORT const char *debug_query_string;
 extern int	max_stack_depth;
 extern int	PostAuthDelay;
+extern pgsocket SessionPoolSock;
 
 /* GUC-configurable parameters */
 
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index 77daa5a..86e89e8 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -394,6 +394,12 @@ extern Size EstimateGUCStateSpace(void);
 extern void SerializeGUCState(Size maxsize, char *start_address);
 extern void RestoreGUCState(void *gucstate);
 
+/* Session pooling support functions */
+struct SessionContext;
+extern void RestoreSessionGUCs(struct SessionContext* session);
+extern void ReleaseSessionGUCs(struct SessionContext* session);
+
+
 /* Support for messages reported from GUC check hooks */
 
 extern PGDLLIMPORT char *GUC_check_errmsg_string;
#49Shay Rojansky
roji@roji.org
In reply to: Konstantin Knizhnik (#48)
Re: Built-in connection pooling

Am a bit late to this thread, sorry if I'm slightly rehashing things. I'd
like to go back to basics on this.

Unless I'm mistaken, at least in the Java and .NET world, clients are
almost always expected to have their own connection pooling, either
implemented inside the driver (ADO.NET model) or as a separate modular
component (JDBC). This approach has a few performance advantages:

1. "Opening" a new pooled connection is virtually free - no TCP connection
needs to be opened, no I/O, no startup packet, nothing (only a tiny bit of
synchronization).
2. Important client state can be associated to physical connections. For
example, prepared statements can be tracked on the physical connection, and
persisted when the connection is returned to the pool. The next time the
physical connection is returned from the pool, if the user tries to
server-prepare a statement, we can check on the connection if it has
already been prepared in a "previous lifetime", and if so, no need to
prepare again. This is vital for scenarios with short-lived (pooled)
connections, such as web. Npgsql does this.
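
A minimal sketch of this idea in C with libpq (illustrative only: Npgsql
itself is C#, and PooledConn/prepare_cached are hypothetical names, not a
real API):

    /* Per-physical-connection statement cache: each SQL string is prepared
     * once, then reused across the logical connections that borrow this
     * physical connection from the pool. */
    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    #define STMT_CACHE_SIZE 16

    typedef struct PooledConn
    {
        PGconn     *conn;
        const char *sql[STMT_CACHE_SIZE];      /* SQL text of prepared statements */
        char        name[STMT_CACHE_SIZE][16]; /* server-side statement names */
        int         used;
    } PooledConn;

    /* Return a statement name for "query", preparing it only on first use.
     * Assumes "query" outlives the cache (e.g. a string literal). */
    static const char *
    prepare_cached(PooledConn *pc, const char *query, int nParams)
    {
        int i;

        for (i = 0; i < pc->used; i++)
            if (strcmp(pc->sql[i], query) == 0)
                return pc->name[i];     /* prepared in a "previous lifetime" */

        if (pc->used == STMT_CACHE_SIZE)
            return NULL;                /* a real pool would evict here */

        snprintf(pc->name[pc->used], sizeof(pc->name[0]), "s%d", pc->used);
        /* real code would check PQresultStatus of the result here */
        PQclear(PQprepare(pc->conn, pc->name[pc->used], query, nParams, NULL));
        pc->sql[pc->used] = query;
        return pc->name[pc->used++];
    }

Because the cache lives with the physical connection, returning it to the
pool and borrowing it again costs nothing, and a repeated "prepare" of the
same SQL becomes a local lookup instead of a server round trip.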

Regarding the problem of idle connections being kept open by clients, I'd
argue it's a client-side problem. If the client is using a connection pool,
the pool should be configurable to close idle connections after a certain
time (I think this is relatively standard behavior). If the client isn't
using a pool, it seems to be the application's responsibility to release
connections when they're no longer needed.

The one drawback is that the pooling is application-specific, so it can't
be shared by multiple applications/hosts. So in some scenarios it may make
sense to use both client pooling and proxy/server pooling.

To sum it up, I would argue that connection pooling should first and
foremost be considered as a client feature, rather than a proxy feature
(pgpool) or server feature (the PostgreSQL pooling being discussed here).
This isn't to say server-side pooling has no value though.

#50Ryan Pedela
rpedela@datalanche.com
In reply to: Shay Rojansky (#49)
Re: Built-in connection pooling

Recently, I did a large amount of parallel data processing where the
results were stored in PG. I had about 1000 workers each with their own PG
connection. As you pointed out, application pooling doesn't make sense in
this scenario. I tried pgpool and pgbouncer, and both ended up as the
bottleneck. Overall throughput was not great but it was highest without a
pooler. That aligns with Konstantin's benchmarks too. As far as I know,
server pooling is the only solution to increase throughput, without
upgrading hardware, for this use case.

I hope this PR gets accepted!

#51Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Konstantin Knizhnik (#48)
1 attachment(s)
Re: Built-in connection pooling

Attached please find a new version of the patch with several bug fixes plus
support for more than one session pool associated with different ports.
Now it is possible to make the postmaster listen on several ports for accepting
pooled connections, while leaving the main Postgres port for dedicated backends.
Each session pool is intended to be used for a particular database/user
combination.
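
For reference, a configuration fragment using the two GUCs defined in the
patch might look like this (values are illustrative; the new setting for
the extra listen ports added in this version is not quoted above, so it is
omitted here):

    # postgresql.conf (sketch)
    session_pool_size = 10   # number of pooled backends; 0 disables pooling
    max_sessions = 1000      # sessions per backend, so up to
                             # session_pool_size * max_sessions = 10000
                             # pooled client connections in total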

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

session_pool-7.patchtext/x-patch; name=session_pool-7.patchDownload
diff --git a/src/backend/catalog/namespace.c b/src/backend/catalog/namespace.c
index 93c4bbf..dfc072c 100644
--- a/src/backend/catalog/namespace.c
+++ b/src/backend/catalog/namespace.c
@@ -194,6 +194,7 @@ char	   *namespace_search_path = NULL;
 /* Local functions */
 static void recomputeNamespacePath(void);
 static void InitTempTableNamespace(void);
+static Oid  GetTempTableNamespace(void);
 static void RemoveTempRelations(Oid tempNamespaceId);
 static void RemoveTempRelationsCallback(int code, Datum arg);
 static void NamespaceCallback(Datum arg, int cacheid, uint32 hashvalue);
@@ -441,9 +442,7 @@ RangeVarGetCreationNamespace(const RangeVar *newRelation)
 		if (strcmp(newRelation->schemaname, "pg_temp") == 0)
 		{
 			/* Initialize temp namespace if first time through */
-			if (!OidIsValid(myTempNamespace))
-				InitTempTableNamespace();
-			return myTempNamespace;
+			return GetTempTableNamespace();
 		}
 		/* use exact schema given */
 		namespaceId = get_namespace_oid(newRelation->schemaname, false);
@@ -452,9 +451,7 @@ RangeVarGetCreationNamespace(const RangeVar *newRelation)
 	else if (newRelation->relpersistence == RELPERSISTENCE_TEMP)
 	{
 		/* Initialize temp namespace if first time through */
-		if (!OidIsValid(myTempNamespace))
-			InitTempTableNamespace();
-		return myTempNamespace;
+		return GetTempTableNamespace();
 	}
 	else
 	{
@@ -463,8 +460,7 @@ RangeVarGetCreationNamespace(const RangeVar *newRelation)
 		if (activeTempCreationPending)
 		{
 			/* Need to initialize temp namespace */
-			InitTempTableNamespace();
-			return myTempNamespace;
+			return GetTempTableNamespace();
 		}
 		namespaceId = activeCreationNamespace;
 		if (!OidIsValid(namespaceId))
@@ -2902,9 +2898,7 @@ LookupCreationNamespace(const char *nspname)
 	if (strcmp(nspname, "pg_temp") == 0)
 	{
 		/* Initialize temp namespace if first time through */
-		if (!OidIsValid(myTempNamespace))
-			InitTempTableNamespace();
-		return myTempNamespace;
+		return GetTempTableNamespace();
 	}
 
 	namespaceId = get_namespace_oid(nspname, false);
@@ -2967,9 +2961,7 @@ QualifiedNameGetCreationNamespace(List *names, char **objname_p)
 		if (strcmp(schemaname, "pg_temp") == 0)
 		{
 			/* Initialize temp namespace if first time through */
-			if (!OidIsValid(myTempNamespace))
-				InitTempTableNamespace();
-			return myTempNamespace;
+			return GetTempTableNamespace();
 		}
 		/* use exact schema given */
 		namespaceId = get_namespace_oid(schemaname, false);
@@ -2982,8 +2974,7 @@ QualifiedNameGetCreationNamespace(List *names, char **objname_p)
 		if (activeTempCreationPending)
 		{
 			/* Need to initialize temp namespace */
-			InitTempTableNamespace();
-			return myTempNamespace;
+			return GetTempTableNamespace();
 		}
 		namespaceId = activeCreationNamespace;
 		if (!OidIsValid(namespaceId))
@@ -3250,8 +3241,11 @@ void
 SetTempNamespaceState(Oid tempNamespaceId, Oid tempToastNamespaceId)
 {
 	/* Worker should not have created its own namespaces ... */
-	Assert(myTempNamespace == InvalidOid);
-	Assert(myTempToastNamespace == InvalidOid);
+	if (!ActiveSession)
+	{
+		Assert(myTempNamespace == InvalidOid);
+		Assert(myTempToastNamespace == InvalidOid);
+	}
 	Assert(myTempNamespaceSubID == InvalidSubTransactionId);
 
 	/* Assign same namespace OIDs that leader has */
@@ -3771,6 +3765,22 @@ recomputeNamespacePath(void)
 	list_free(oidlist);
 }
 
+static Oid
+GetTempTableNamespace(void)
+{
+	if (ActiveSession)
+	{
+		if (!OidIsValid(ActiveSession->tempNamespace))
+			InitTempTableNamespace();
+	}
+	else
+	{
+		if (!OidIsValid(myTempNamespace))
+			InitTempTableNamespace();
+	}
+	return myTempNamespace;
+}
+
 /*
  * InitTempTableNamespace
  *		Initialize temp table namespace on first use in a particular backend
@@ -3782,8 +3792,6 @@ InitTempTableNamespace(void)
 	Oid			namespaceId;
 	Oid			toastspaceId;
 
-	Assert(!OidIsValid(myTempNamespace));
-
 	/*
 	 * First, do permission check to see if we are authorized to make temp
 	 * tables.  We use a nonstandard error message here since "databasename:
@@ -3822,7 +3830,10 @@ InitTempTableNamespace(void)
 				(errcode(ERRCODE_READ_ONLY_SQL_TRANSACTION),
 				 errmsg("cannot create temporary tables during a parallel operation")));
 
-	snprintf(namespaceName, sizeof(namespaceName), "pg_temp_%d", MyBackendId);
+	if (ActiveSession)
+		snprintf(namespaceName, sizeof(namespaceName), "pg_temp_%d_%s", MyBackendId, ActiveSession->id);
+	else
+		snprintf(namespaceName, sizeof(namespaceName), "pg_temp_%d", MyBackendId);
 
 	namespaceId = get_namespace_oid(namespaceName, true);
 	if (!OidIsValid(namespaceId))
@@ -3854,8 +3865,10 @@ InitTempTableNamespace(void)
 	 * it. (We assume there is no need to clean it out if it does exist, since
 	 * dropping a parent table should make its toast table go away.)
 	 */
-	snprintf(namespaceName, sizeof(namespaceName), "pg_toast_temp_%d",
-			 MyBackendId);
+	if (ActiveSession)
+		snprintf(namespaceName, sizeof(namespaceName), "pg_toast_temp_%d_%s", MyBackendId, ActiveSession->id);
+	else
+		snprintf(namespaceName, sizeof(namespaceName), "pg_toast_temp_%d", MyBackendId);
 
 	toastspaceId = get_namespace_oid(namespaceName, true);
 	if (!OidIsValid(toastspaceId))
@@ -3873,7 +3886,11 @@ InitTempTableNamespace(void)
 	 */
 	myTempNamespace = namespaceId;
 	myTempToastNamespace = toastspaceId;
-
+	if (ActiveSession)
+	{
+		ActiveSession->tempNamespace = namespaceId;
+		ActiveSession->tempToastNamespace = toastspaceId;
+	}
 	/* It should not be done already. */
 	AssertState(myTempNamespaceSubID == InvalidSubTransactionId);
 	myTempNamespaceSubID = GetCurrentSubTransactionId();
diff --git a/src/backend/catalog/storage.c b/src/backend/catalog/storage.c
index cff49ba..b728ab1 100644
--- a/src/backend/catalog/storage.c
+++ b/src/backend/catalog/storage.c
@@ -25,6 +25,7 @@
 #include "access/xloginsert.h"
 #include "access/xlogutils.h"
 #include "catalog/catalog.h"
+#include "catalog/namespace.h"
 #include "catalog/storage.h"
 #include "catalog/storage_xlog.h"
 #include "storage/freespace.h"
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index b945b15..8e8a737 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -813,3 +813,32 @@ build_regtype_array(Oid *param_types, int num_params)
 	result = construct_array(tmp_ary, num_params, REGTYPEOID, 4, true, 'i');
 	return PointerGetDatum(result);
 }
+
+/*
+ * Drop all statements prepared in the specified session.
+ */
+void
+DropSessionPreparedStatements(char const* sessionId)
+{
+	HASH_SEQ_STATUS seq;
+	PreparedStatement *entry;
+	size_t idLen = strlen(sessionId);
+
+	/* nothing cached */
+	if (!prepared_queries)
+		return;
+
+	/* walk over cache */
+	hash_seq_init(&seq, prepared_queries);
+	while ((entry = hash_seq_search(&seq)) != NULL)
+	{
+		if (strncmp(entry->stmt_name, sessionId, idLen) == 0 && entry->stmt_name[idLen] == '.')
+		{
+			/* Release the plancache entry */
+			DropCachedPlan(entry->plansource);
+
+			/* Now we can remove the hash table entry */
+			hash_search(prepared_queries, entry->stmt_name, HASH_REMOVE, NULL);
+		}
+	}
+}
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index a4f6d4d..7f40edb 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -1029,6 +1029,17 @@ pq_peekbyte(void)
 }
 
 /* --------------------------------
+ *		pq_available_bytes	- get number of buffered bytes available for reading.
+ *
+ * --------------------------------
+ */
+int
+pq_available_bytes(void)
+{
+	return PqRecvLength - PqRecvPointer;
+}
+
+/* --------------------------------
  *		pq_getbyte_if_available - get a single byte from connection,
  *			if available
  *
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index aba1e92..56ec998 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o dynloader.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o dynloader.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..5f3f929
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,151 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, (char *) &dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr *cmsg;
+	char buf[CMSG_SPACE(sizeof(sock))];
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	if (sendmsg(chan, &msg, 0) < 0)
+	{
+		return -1;
+	}
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, (char *) &src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d\n",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works.)
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	struct msghdr msg = {0};
+	char c_buffer[256];
+	char m_buffer[256];
+	struct iovec io;
+	struct cmsghdr *cmsg;
+	pgsocket sock;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	if (recvmsg(chan, &msg, 0) < 0)
+	{
+		return -1;
+	}
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+
+	return sock;
+#endif
+}
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index f4356fe..7fd901f 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -726,3 +726,65 @@ pgwin32_socket_strerror(int err)
 	}
 	return wserrbuf;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+    union {
+       struct sockaddr_in inaddr;
+       struct sockaddr addr;
+    } a;
+    SOCKET listener;
+    int e;
+    socklen_t addrlen = sizeof(a.inaddr);
+    DWORD flags = 0;
+    int reuse = 1;
+
+    socks[0] = socks[1] = -1;
+
+    listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+    if (listener == -1)
+        return SOCKET_ERROR;
+
+    memset(&a, 0, sizeof(a));
+    a.inaddr.sin_family = AF_INET;
+    a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+    a.inaddr.sin_port = 0;
+
+    for (;;) {
+        if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+               (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+            break;
+        if  (bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+            break;
+
+        memset(&a, 0, sizeof(a));
+        if  (getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+            break;
+        a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+        a.inaddr.sin_family = AF_INET;
+
+        if (listen(listener, 1) == SOCKET_ERROR)
+            break;
+
+        socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+        if (socks[0] == -1)
+            break;
+        if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+            break;
+
+        socks[1] = accept(listener, NULL, NULL);
+        if (socks[1] == -1)
+            break;
+
+        closesocket(listener);
+        return 0;
+    }
+
+    e = WSAGetLastError();
+    closesocket(listener);
+    closesocket(socks[0]);
+    closesocket(socks[1]);
+    WSASetLastError(e);
+    socks[0] = socks[1] = -1;
+    return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index f3ddf82..707b880 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -169,6 +169,7 @@ typedef struct bkend
 	pid_t		pid;			/* process id of backend */
 	int32		cancel_key;		/* cancel key for cancels for this backend */
 	int			child_slot;		/* PMChildSlot for this backend, if any */
+	pgsocket    session_send_sock;  /* write end of the socket pipe used to send session socket descriptors to this backend process */
 
 	/*
 	 * Flavor of backend or auxiliary process.  Note that BACKEND_TYPE_WALSND
@@ -179,6 +180,8 @@ typedef struct bkend
 	bool		dead_end;		/* is it going to send an error and quit? */
 	bool		bgworker_notify;	/* gets bgworker start/stop notifications */
 	dlist_node	elem;			/* list link in BackendList */
+	int         session_pool_id;    /* identifier of the backend's session pool */
+	int         worker_id;      /* identifier of this worker within the session pool */
 } Backend;
 
 static dlist_head BackendList = DLIST_STATIC_INIT(BackendList);
@@ -189,7 +192,14 @@ static Backend *ShmemBackendArray;
 
 BackgroundWorker *MyBgworkerEntry = NULL;
 
+typedef struct PostmasterSessionPool
+{
+	Backend** workers; /* pool backends */
+	int n_workers;    /* number of launched worker backends in this pool so far */
+	int rr_index;     /* index of the next backend to use, implementing round-robin distribution of sessions across backends */
+} PostmasterSessionPool;
 
+static PostmasterSessionPool SessionPools[MAX_SESSION_PORTS];
 
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
@@ -213,7 +223,7 @@ int			ReservedBackends;
 
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
-static pgsocket ListenSocket[MAXLISTEN];
+static pgsocket ListenSocket[MAX_SESSION_PORTS][MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -411,8 +421,7 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
-static int	ProcessStartupPacket(Port *port, bool SSLdone);
+static int	BackendStartup(Port *port, int session_pool_id);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
 static int	initMasks(fd_set *rmask);
@@ -485,8 +494,9 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket sessionsocket;
 	char		DataDir[MAXPGPATH];
-	pgsocket	ListenSocket[MAXLISTEN];
+	pgsocket	ListenSocket[MAX_SESSION_PORTS][MAXLISTEN];
 	int32		MyCancelKey;
 	int			MyPMChildSlot;
 #ifndef WIN32
@@ -577,7 +587,7 @@ PostmasterMain(int argc, char *argv[])
 	int			status;
 	char	   *userDoption = NULL;
 	bool		listen_addr_saved = false;
-	int			i;
+	int			i, j;
 	char	   *output_config_variable = NULL;
 
 	MyProcPid = PostmasterPid = getpid();
@@ -990,8 +1000,9 @@ PostmasterMain(int argc, char *argv[])
 	 * First, mark them all closed, and set up an on_proc_exit function that's
 	 * charged with closing the sockets again at postmaster shutdown.
 	 */
-	for (i = 0; i < MAXLISTEN; i++)
-		ListenSocket[i] = PGINVALID_SOCKET;
+	for (i = 0; i <= SessionPoolPorts; i++)
+		for (j = 0; j < MAXLISTEN; j++)
+			ListenSocket[i][j] = PGINVALID_SOCKET;
 
 	on_proc_exit(CloseServerPorts, 0);
 
@@ -1019,33 +1030,35 @@ PostmasterMain(int argc, char *argv[])
 		{
 			char	   *curhost = (char *) lfirst(l);
 
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i <= SessionPoolPorts; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) PostPortNumber + i,
+											  NULL,
+											  ListenSocket[i], MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) PostPortNumber + i,
+											  NULL,
+											  ListenSocket[i], MAXLISTEN);
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any TCP/IP sockets")));
@@ -1056,7 +1069,7 @@ PostmasterMain(int argc, char *argv[])
 
 #ifdef USE_BONJOUR
 	/* Register for Bonjour only if we opened TCP socket(s) */
-	if (enable_bonjour && ListenSocket[0] != PGINVALID_SOCKET)
+	if (enable_bonjour && ListenSocket[0][0] != PGINVALID_SOCKET)
 	{
 		DNSServiceErrorType err;
 
@@ -1117,24 +1130,26 @@ PostmasterMain(int argc, char *argv[])
 		{
 			char	   *socketdir = (char *) lfirst(l);
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i <= SessionPoolPorts; i++)
 			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) PostPortNumber + i,
+										  socketdir,
+										  ListenSocket[i], MAXLISTEN);
+
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1147,7 +1162,7 @@ PostmasterMain(int argc, char *argv[])
 	/*
 	 * check that we have some socket to listen on
 	 */
-	if (ListenSocket[0] == PGINVALID_SOCKET)
+	if (ListenSocket[0][0] == PGINVALID_SOCKET)
 		ereport(FATAL,
 				(errmsg("no socket created for listening")));
 
@@ -1379,7 +1394,7 @@ PostmasterMain(int argc, char *argv[])
 static void
 CloseServerPorts(int status, Datum arg)
 {
-	int			i;
+	int			i, j;
 
 	/*
 	 * First, explicitly close all the socket FDs.  We used to just let this
@@ -1387,12 +1402,15 @@ CloseServerPorts(int status, Datum arg)
 	 * before we remove the postmaster.pid lockfile; otherwise there's a race
 	 * condition if a new postmaster wants to re-use the TCP port number.
 	 */
-	for (i = 0; i < MAXLISTEN; i++)
+	for (i = 0; i <= SessionPoolPorts; i++)
 	{
-		if (ListenSocket[i] != PGINVALID_SOCKET)
+		for (j = 0; j < MAXLISTEN; j++)
 		{
-			StreamClose(ListenSocket[i]);
-			ListenSocket[i] = PGINVALID_SOCKET;
+			if (ListenSocket[i][j] != PGINVALID_SOCKET)
+			{
+				StreamClose(ListenSocket[i][j]);
+				ListenSocket[i][j] = PGINVALID_SOCKET;
+			}
 		}
 	}
 
@@ -1741,27 +1759,30 @@ ServerLoop(void)
 		 */
 		if (selres > 0)
 		{
-			int			i;
+			int			i, j;
 
-			for (i = 0; i < MAXLISTEN; i++)
+			for (i = 0; i <= SessionPoolPorts; i++)
 			{
-				if (ListenSocket[i] == PGINVALID_SOCKET)
-					break;
-				if (FD_ISSET(ListenSocket[i], &rmask))
+				for (j = 0; j < MAXLISTEN; j++)
 				{
-					Port	   *port;
-
-					port = ConnCreate(ListenSocket[i]);
-					if (port)
+					if (ListenSocket[i][j] == PGINVALID_SOCKET)
+						break;
+					if (FD_ISSET(ListenSocket[i][j], &rmask))
 					{
-						BackendStartup(port);
-
-						/*
-						 * We no longer need the open socket or port structure
-						 * in this process
-						 */
-						StreamClose(port->sock);
-						ConnFree(port);
+						Port	   *port;
+
+						port = ConnCreate(ListenSocket[i][j]);
+						if (port)
+						{
+							BackendStartup(port, i);
+
+							/*
+							 * We no longer need the open socket or port structure
+							 * in this process
+							 */
+							StreamClose(port->sock);
+							ConnFree(port);
+						}
 					}
 				}
 			}
@@ -1913,20 +1934,23 @@ static int
 initMasks(fd_set *rmask)
 {
 	int			maxsock = -1;
-	int			i;
+	int			i, j;
 
 	FD_ZERO(rmask);
 
-	for (i = 0; i < MAXLISTEN; i++)
+	for (i = 0; i <= SessionPoolPorts; i++)
 	{
-		int			fd = ListenSocket[i];
+		for (j = 0; j < MAXLISTEN; j++)
+		{
+			int			fd = ListenSocket[i][j];
 
-		if (fd == PGINVALID_SOCKET)
-			break;
-		FD_SET(fd, rmask);
+			if (fd == PGINVALID_SOCKET)
+				break;
+			FD_SET(fd, rmask);
 
-		if (fd > maxsock)
-			maxsock = fd;
+			if (fd > maxsock)
+				maxsock = fd;
+		}
 	}
 
 	return maxsock + 1;
@@ -1944,8 +1968,8 @@ initMasks(fd_set *rmask)
  * send anything to the client, which would typically be appropriate
  * if we detect a communications failure.)
  */
-static int
-ProcessStartupPacket(Port *port, bool SSLdone)
+int
+ProcessStartupPacket(Port *port, bool SSLdone, MemoryContext memctx)
 {
 	int32		len;
 	void	   *buf;
@@ -2043,7 +2066,7 @@ retry1:
 #endif
 		/* regular startup packet, cancel, etc packet should follow... */
 		/* but not another SSL negotiation request */
-		return ProcessStartupPacket(port, true);
+		return ProcessStartupPacket(port, true, memctx);
 	}
 
 	/* Could add additional special packet types here */
@@ -2073,7 +2096,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2449,7 +2471,7 @@ ConnCreate(int serverFd)
 		ConnFree(port);
 		return NULL;
 	}
-
+	SessionPoolSock = PGINVALID_SOCKET;
 	/*
 	 * Allocate GSSAPI specific state struct
 	 */
@@ -2498,7 +2520,7 @@ ConnFree(Port *conn)
 void
 ClosePostmasterPorts(bool am_syslogger)
 {
-	int			i;
+	int			i, j;
 
 #ifndef WIN32
 
@@ -2515,12 +2537,15 @@ ClosePostmasterPorts(bool am_syslogger)
 #endif
 
 	/* Close the listen sockets */
-	for (i = 0; i < MAXLISTEN; i++)
+	for (i = 0; i <= SessionPoolPorts; i++)
 	{
-		if (ListenSocket[i] != PGINVALID_SOCKET)
+		for (j = 0; j < MAXLISTEN; j++)
 		{
-			StreamClose(ListenSocket[i]);
-			ListenSocket[i] = PGINVALID_SOCKET;
+			if (ListenSocket[i][j] != PGINVALID_SOCKET)
+			{
+				StreamClose(ListenSocket[i][j]);
+				ListenSocket[i][j] = PGINVALID_SOCKET;
+			}
 		}
 	}
 
@@ -3236,6 +3261,29 @@ CleanupBackgroundWorker(int pid,
 }
 
 /*
+ * Unlink backend from backend's list and free memory
+ */
+static void UnlinkBackend(Backend* bp)
+{
+	if (bp->bkend_type == BACKEND_TYPE_NORMAL
+		&& bp->session_send_sock != PGINVALID_SOCKET)
+	{
+		PostmasterSessionPool* pool = &SessionPools[bp->session_pool_id];
+		Assert(pool->n_workers > bp->worker_id && pool->workers[bp->worker_id] == bp);
+		if (--pool->n_workers != 0)
+		{
+			pool->workers[bp->worker_id] = pool->workers[pool->n_workers];
+			pool->workers[bp->worker_id]->worker_id = bp->worker_id; /* fix index of the moved worker */
+			pool->rr_index %= pool->n_workers;
+		}
+		closesocket(bp->session_send_sock);
+		elog(DEBUG2, "Cleanup backend %d", bp->pid);
+	}
+	dlist_delete(&bp->elem);
+	free(bp);
+}
+
+/*
  * CleanupBackend -- cleanup after terminated backend.
  *
  * Remove all local state associated with backend.
@@ -3312,8 +3359,7 @@ CleanupBackend(int pid,
 				 */
 				BackgroundWorkerStopNotifications(bp->pid);
 			}
-			dlist_delete(iter.cur);
-			free(bp);
+			UnlinkBackend(bp);
 			break;
 		}
 	}
@@ -3415,8 +3461,7 @@ HandleChildCrash(int pid, int exitstatus, const char *procname)
 				ShmemBackendArrayRemove(bp);
 #endif
 			}
-			dlist_delete(iter.cur);
-			free(bp);
+			UnlinkBackend(bp);
 			/* Keep looping so we can signal remaining backends */
 		}
 		else
@@ -4013,16 +4058,35 @@ TerminateChildren(int signal)
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
 static int
-BackendStartup(Port *port)
+BackendStartup(Port *port, int session_pool_id)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
+	pgsocket        session_pipe[2];
+	PostmasterSessionPool* pool = &SessionPools[session_pool_id];
+	bool dedicated_backend = SessionPoolSize == 0 || (SessionPoolPorts != 0 && session_pool_id == 0);
+
+	if (!dedicated_backend && pool->n_workers >= SessionPoolSize)
+	{
+		Backend* worker = pool->workers[pool->rr_index];
+		/* In case of session pooling, instead of spawning a new backend, open the new session at one of the existing backends. */
+		elog(DEBUG2, "Start new session in pool %d for socket %d at backend %d",
+			 session_pool_id, port->sock, worker->pid);
+		/* Send connection socket to the worker backend */
+		if (pg_send_sock(worker->session_send_sock, port->sock, worker->pid) < 0)
+			elog(FATAL, "Failed to send session socket: %m");
+
+		pool->rr_index = (pool->rr_index + 1) % pool->n_workers; /* round-robin */
+
+		return STATUS_OK;
+	}
+
 
 	/*
 	 * Create backend data structure.  Better before the fork() so we can
 	 * handle failure cleanly.
 	 */
-	bn = (Backend *) malloc(sizeof(Backend));
+	bn = (Backend *) calloc(1, sizeof(Backend));
 	if (!bn)
 	{
 		ereport(LOG,
@@ -4063,12 +4126,28 @@ BackendStartup(Port *port)
 	/* Hasn't asked to be notified about any bgworkers yet */
 	bn->bgworker_notify = false;
 
+	/* Create socket pair for sending session sockets to the backend */
+	if (!dedicated_backend)
+	{
+		if (socketpair(AF_UNIX, SOCK_DGRAM, 0, session_pipe) < 0)
+			ereport(FATAL,
+					(errcode_for_file_access(),
+					 errmsg_internal("could not create socket pair for launching sessions: %m")));
+#ifdef WIN32
+		SessionPoolSock = session_pipe[0];
+#endif
+	}
 #ifdef EXEC_BACKEND
 	pid = backend_forkexec(port);
 #else							/* !EXEC_BACKEND */
 	pid = fork_process();
 	if (pid == 0)				/* child */
 	{
+		if (!dedicated_backend)
+		{
+			SessionPoolSock = session_pipe[0]; /* Use this socket for receiving client session socket descriptor */
+			close(session_pipe[1]); /* Close unused end of the pipe */
+		}
 		free(bn);
 
 		/* Detangle from postmaster */
@@ -4110,9 +4189,22 @@ BackendStartup(Port *port)
 	 * of backends.
 	 */
 	bn->pid = pid;
+	bn->session_send_sock = PGINVALID_SOCKET;
 	bn->bkend_type = BACKEND_TYPE_NORMAL;	/* Can change later to WALSND */
 	dlist_push_head(&BackendList, &bn->elem);
 
+	if (!dedicated_backend)
+	{
+		bn->session_send_sock = session_pipe[1]; /* Use this socket for sending client session socket descriptor */
+		closesocket(session_pipe[0]); /* Close unused end of the pipe */
+		if (pool->workers == NULL)
+			pool->workers = (Backend**)malloc(sizeof(Backend*)*SessionPoolSize);
+		bn->worker_id = pool->n_workers++;
+		pool->workers[bn->worker_id] = bn;
+		bn->session_pool_id = session_pool_id;
+		elog(DEBUG2, "Start %d-th worker in session pool %d pid %d",
+			 pool->n_workers, session_pool_id, pid);
+	}
 #ifdef EXEC_BACKEND
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
@@ -4299,7 +4391,7 @@ BackendInitialize(Port *port)
 	 * Receive the startup packet (which might turn out to be a cancel request
 	 * packet).
 	 */
-	status = ProcessStartupPacket(port, false);
+	status = ProcessStartupPacket(port, false, TopMemoryContext);
 
 	/*
 	 * Stop here if it was bad or a cancel packet.  ProcessStartupPacket
@@ -6033,6 +6125,9 @@ save_backend_variables(BackendParameters *param, Port *port,
 	if (!write_inheritable_socket(&param->portsocket, port->sock, childPid))
 		return false;
 
+	if (!write_inheritable_socket(&param->sessionsocket, SessionPoolSock, childPid))
+		return false;
+
 	strlcpy(param->DataDir, DataDir, MAXPGPATH);
 
 	memcpy(&param->ListenSocket, &ListenSocket, sizeof(ListenSocket));
@@ -6265,6 +6360,7 @@ restore_backend_variables(BackendParameters *param, Port *port)
 {
 	memcpy(port, &param->port, sizeof(Port));
 	read_inheritable_socket(&port->sock, &param->portsocket);
+	read_inheritable_socket(&SessionPoolSock, &param->sessionsocket);
 
 	SetDataDir(param->DataDir);
 
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index e6706f7..0f62792 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -76,6 +76,7 @@ struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* head of a singly-linked list of free events, linked through "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -129,9 +130,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -562,6 +563,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -667,9 +669,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (latch)
 	{
@@ -690,8 +694,19 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->nevents += 1;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -718,15 +733,38 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
 /*
+ * Remove event with specified socket descriptor
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, pgsocket fd)
+{
+	int i, n = set->nevents;
+	for (i = 0; i < n; i++)
+	{
+		WaitEvent  *event = &set->events[i];
+		if (event->fd == fd)
+		{
+#if defined(WAIT_USE_EPOLL)
+			WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+			WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+			WaitEventAdjustWin32(set, event, true);
+#endif
+			break;
+		}
+	}
+}
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -774,9 +812,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -827,14 +865,33 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
 				 errmsg("epoll_ctl() failed: %m")));
+
+	if (action == EPOLL_CTL_DEL)
+	{
+		int pos = event->pos;
+		event->fd = PGINVALID_SOCKET;
+		set->nevents -= 1;
+		event->pos = set->free_events;
+		set->free_events = pos;
+	}
 }
 #endif
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	int pos = event->pos;
+	struct pollfd *pollfd = &set->pollfds[pos];
+
+	if (remove)
+	{
+		set->nevents -= 1;
+		*pollfd = set->pollfds[set->nevents];
+		set->events[pos] = set->events[set->nevents];
+		event->pos = pos;
+		return;
+	}
 
 	pollfd->revents = 0;
 	pollfd->fd = event->fd;
@@ -865,9 +922,25 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	int pos = event->pos;
+	HANDLE	   *handle = &set->handles[pos + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		set->nevents -= 1;
+		set->events[pos] = set->events[set->nevents];
+		*handle = set->handles[set->nevents + 1];
+		set->handles[set->nevents + 1] = WSA_INVALID_EVENT;
+		event->pos = pos;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -880,7 +953,7 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 	}
 	else
 	{
 		int			flags = FD_CLOSE;	/* always check for errors/EOF */
 
 		if (event->events & WL_SOCKET_READABLE)
 			flags |= FD_READ;
@@ -897,8 +970,8 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 					 WSAGetLastError());
 		}
 		if (WSAEventSelect(event->fd, *handle, flags) != 0)
-			elog(ERROR, "failed to set up event for socket: error code %u",
-				 WSAGetLastError());
+			elog(ERROR, "failed to set up event for socket %p: error code %u",
+				 event->fd, WSAGetLastError());
 
 		Assert(event->fd != PGINVALID_SOCKET);
 	}
@@ -1296,7 +1369,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	{
 		if (cur_event->reset)
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index ddc3ec8..a24c43a 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -40,6 +40,7 @@
 #include "access/printtup.h"
 #include "access/xact.h"
 #include "catalog/pg_type.h"
+#include "catalog/namespace.h"
 #include "commands/async.h"
 #include "commands/prepare.h"
 #include "libpq/libpq.h"
@@ -75,9 +76,9 @@
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
 #include "utils/timestamp.h"
+#include "utils/builtins.h"
 #include "mb/pg_wchar.h"
 
-
 /* ----------------
  *		global variables
  * ----------------
@@ -98,6 +99,10 @@ int			max_stack_depth = 100;
 /* wait N seconds to allow attach from a debugger */
 int			PostAuthDelay = 0;
 
+/* Local socket for redirecting sessions to the backends */
+pgsocket    SessionPoolSock = PGINVALID_SOCKET;
+/* Pointer to the active session */
+SessionContext* ActiveSession;
 
 
 /* ----------------
@@ -169,6 +174,12 @@ static ProcSignalReason RecoveryConflictReason;
 static MemoryContext row_description_context = NULL;
 static StringInfoData row_description_buf;
 
+static WaitEventSet*   SessionPool;    /* Set of all sessions sockets */
+static int64           SessionCount;   /* Number of sessions */
+static Port*           BackendPort;    /* Reference to the original port of this backend, created when the backend was launched.
+										* The session using this port may already be terminated, but since it is allocated in TopMemoryContext,
+										* its content is still valid and is used as a template for the ports of new sessions */
+
 /* ----------------------------------------------------------------
  *		decls for routines only used in this file
  * ----------------------------------------------------------------
@@ -194,6 +205,27 @@ static void log_disconnections(int code, Datum arg);
 static void enable_statement_timeout(void);
 static void disable_statement_timeout(void);
 
+/*
+ * Generate session ID unique within this backend
+ */
+static char* CreateSessionId(void)
+{
+	char buf[64];
+	pg_lltoa(++SessionCount, buf);
+	return pstrdup(buf);
+}
+
+/*
+ * Free all memory associated with session and delete session object itself
+ */
+static void DeleteSession(SessionContext* session)
+{
+	elog(DEBUG1, "Delete session %p, id=%s,  memory context=%p", session, session->id, session->memory);
+	RestoreSessionGUCs(session);
+	ReleaseSessionGUCs(session);
+	MemoryContextDelete(session->memory);
+	free(session);
+}
 
 /* ----------------------------------------------------------------
  *		routines to obtain user input
@@ -1232,6 +1264,12 @@ exec_parse_message(const char *query_string,	/* string to execute */
 	bool		save_log_statement_stats = log_statement_stats;
 	char		msec_str[32];
 
+	if (ActiveSession && stmt_name[0] != '\0')
+	{
+		/* Make prepared statement names unique per session when the internal session pool is used */
+		stmt_name = psprintf("%s.%s", ActiveSession->id, stmt_name);
+	}
+
 	/*
 	 * Report query to various monitoring facilities.
 	 */
@@ -1503,6 +1541,12 @@ exec_bind_message(StringInfo input_message)
 	portal_name = pq_getmsgstring(input_message);
 	stmt_name = pq_getmsgstring(input_message);
 
+	if (ActiveSession && stmt_name[0] != '\0')
+	{
+		/* Make prepared statement names unique per session when the internal session pool is used */
+		stmt_name = psprintf("%s.%s", ActiveSession->id, stmt_name);
+	}
+
 	ereport(DEBUG2,
 			(errmsg("bind %s to %s",
 					*portal_name ? portal_name : "<unnamed>",
@@ -2325,6 +2369,12 @@ exec_describe_statement_message(const char *stmt_name)
 	CachedPlanSource *psrc;
 	int			i;
 
+	if (ActiveSession && stmt_name[0] != '\0')
+	{
+		/* Make prepared statement names unique per session when the internal session pool is used */
+		stmt_name = psprintf("%s.%s", ActiveSession->id, stmt_name);
+	}
+
 	/*
 	 * Start up a transaction command. (Note that this will normally change
 	 * current memory context.) Nothing happens if we are already in one.
@@ -3654,6 +3703,21 @@ PostgresMain(int argc, char *argv[],
 							progname)));
 	}
 
+	/* Assign session for this backend in case of session pooling */
+	if (SessionPoolSize != 0)
+	{
+		MemoryContext oldcontext;
+		ActiveSession = (SessionContext*)calloc(1, sizeof(SessionContext));
+		ActiveSession->memory = AllocSetContextCreate(TopMemoryContext,
+													   "SessionMemoryContext",
+													   ALLOCSET_DEFAULT_SIZES);
+		oldcontext = MemoryContextSwitchTo(ActiveSession->memory);
+		ActiveSession->id = CreateSessionId();
+		ActiveSession->port = MyProcPort;
+		BackendPort = MyProcPort;
+		MemoryContextSwitchTo(oldcontext);
+	}
+
 	/* Acquire configuration parameters, unless inherited from postmaster */
 	if (!IsUnderPostmaster)
 	{
@@ -3783,7 +3847,7 @@ PostgresMain(int argc, char *argv[],
 	 * ... else we'd need to copy the Port data first.  Also, subsidiary data
 	 * such as the username isn't lost either; see ProcessStartupPacket().
 	 */
-	if (PostmasterContext)
+	if (PostmasterContext && SessionPoolSize == 0)
 	{
 		MemoryContextDelete(PostmasterContext);
 		PostmasterContext = NULL;
@@ -4069,6 +4133,152 @@ PostgresMain(int argc, char *argv[],
 
 			ReadyForQuery(whereToSendOutput);
 			send_ready_for_query = false;
+
+			/*
+			 * Here we multiplex client sessions when session pooling is enabled.
+			 * Since we perform transaction-level pooling, rescheduling is done only when we are not inside a transaction.
+			 */
+			if (SessionPoolSock != PGINVALID_SOCKET && !IsTransactionState() && pq_available_bytes() == 0)
+			{
+				WaitEvent ready_client;
+				if (SessionPool == NULL)
+				{
+					/* Construct wait event set if not constructed yet */
+					SessionPool = CreateWaitEventSet(TopMemoryContext, MaxSessions+3);
+					/* Add event to detect postmaster death */
+					AddWaitEventToSet(SessionPool, WL_POSTMASTER_DEATH, PGINVALID_SOCKET, NULL, ActiveSession);
+					/* Add event for backends latch */
+					AddWaitEventToSet(SessionPool, WL_LATCH_SET, PGINVALID_SOCKET, MyLatch, ActiveSession);
+					/* Add event for accepting new sessions */
+					AddWaitEventToSet(SessionPool, WL_SOCKET_READABLE, SessionPoolSock, NULL, ActiveSession);
+					/* Add event for current session */
+					AddWaitEventToSet(SessionPool, WL_SOCKET_READABLE, MyProcPort->sock, NULL, ActiveSession);
+				}
+			  ChooseSession:
+				DoingCommandRead = true;
+				/* Select a client session that is ready to send a new query */
+				if (WaitEventSetWait(SessionPool, -1, &ready_client, 1, PG_WAIT_CLIENT) != 1)
+				{
+					/* TODO: do some error recovery here */
+					elog(FATAL, "Failed to poll client sessions");
+				}
+				CHECK_FOR_INTERRUPTS();
+				DoingCommandRead = false;
+
+				if (ready_client.events & WL_POSTMASTER_DEATH)
+					ereport(FATAL,
+							(errcode(ERRCODE_ADMIN_SHUTDOWN),
+							 errmsg("terminating connection due to unexpected postmaster exit")));
+
+				if (ready_client.events & WL_LATCH_SET)
+				{
+					ResetLatch(MyLatch);
+					ProcessClientReadInterrupt(true);
+					goto ChooseSession;
+				}
+
+				if (ready_client.fd == SessionPoolSock)
+				{
+					/* Here we handle case of attaching new session */ 
+					int		 status;
+					SessionContext* session;
+					StringInfoData buf;
+					Port*    port;
+					pgsocket sock;
+					MemoryContext oldcontext;
+
+					sock = pg_recv_sock(SessionPoolSock);
+					if (sock == PGINVALID_SOCKET)
+						elog(FATAL, "Failed to receive session socket: %m");
+
+					session = (SessionContext*)calloc(1, sizeof(SessionContext));
+					session->memory = AllocSetContextCreate(TopMemoryContext,
+															"SessionMemoryContext",
+															ALLOCSET_DEFAULT_SIZES);
+					oldcontext = MemoryContextSwitchTo(session->memory);
+					port = palloc(sizeof(Port));
+					memcpy(port, BackendPort, sizeof(Port));
+
+					/*
+					 * Receive the startup packet (which might turn out to be a cancel request
+					 * packet).
+					 */
+					port->sock = sock;
+					session->port = port;
+					session->id = CreateSessionId();
+
+					MyProcPort = port;
+					status = ProcessStartupPacket(port, false, session->memory);
+					MemoryContextSwitchTo(oldcontext);
+
+					/*
+					 * TODO: Currently we assume that all sessions are accessing the same database under the same user.
+					 * Just report an error if it is not true.
+					 */
+					if (strcmp(port->database_name, BackendPort->database_name) != 0 ||
+						strcmp(port->user_name, BackendPort->user_name) != 0)
+					{
+						elog(FATAL, "Failed to open session (dbname=%s user=%s) in backend %d (dbname=%s user=%s)",
+							 port->database_name, port->user_name,
+							 MyProcPid, BackendPort->database_name, BackendPort->user_name);
+					}
+					else if (status == STATUS_OK)
+					{
+						if (AddWaitEventToSet(SessionPool, WL_SOCKET_READABLE, sock, NULL, session) < 0)
+						{
+							elog(WARNING, "Too much pooled sessions: %d", MaxSessions);
+						}
+						else
+						{
+							elog(DEBUG2, "Start new session %d in backend %d for database %s user %s",
+								 (int)sock, MyProcPid, port->database_name, port->user_name);
+							RestoreSessionGUCs(ActiveSession);
+							ActiveSession = session;
+							SetCurrentStatementStartTimestamp();
+							StartTransactionCommand();
+							PerformAuthentication(MyProcPort);
+							CommitTransactionCommand();
+
+							/*
+							 * Send GUC options to the client
+							 */
+							BeginReportingGUCOptions();
+
+							/*
+							 * Send this backend's cancellation info to the frontend.
+							 */
+							pq_beginmessage(&buf, 'K');
+							pq_sendint32(&buf, (int32) MyProcPid);
+							pq_sendint32(&buf, (int32) MyCancelKey);
+							pq_endmessage(&buf);
+
+							/* Need not flush since ReadyForQuery will do it. */
+							send_ready_for_query = true;
+							continue;
+						}
+					}
+					/* Error while processing the startup packet.
+					 * Reject this session and return to listening on the sockets.
+					 */
+					DeleteSession(session);
+					elog(LOG, "Session startup failed");
+					closesocket(sock);
+					goto ChooseSession;
+				}
+				else
+				{
+					SessionContext* newSession = (SessionContext*)ready_client.user_data;
+					if (ActiveSession != newSession)
+					{
+						elog(DEBUG2, "Switch to session %d in backend %d", ready_client.fd, MyProcPid);
+						RestoreSessionGUCs(ActiveSession);
+						ActiveSession = newSession;
+						RestoreSessionGUCs(ActiveSession);
+						MyProcPort = ActiveSession->port;
+						SetTempNamespaceState(ActiveSession->tempNamespace, ActiveSession->tempToastNamespace);
+					}
+				}
+			}
 		}
 
 		/*
@@ -4350,6 +4560,39 @@ PostgresMain(int argc, char *argv[],
 				 * it will fail to be called during other backend-shutdown
 				 * scenarios.
 				 */
+
+				if (SessionPool)
+				{
+					/* In case of session pooling, close the session but do not terminate the
+					 * backend, even if there are no more sessions in this backend.
+					 * The reason for keeping the backend alive is to prevent redundant process
+					 * launches if some client repeatedly opens/closes connections to the database.
+					 * The maximal number of launched backends in case of connection pooling is
+					 * intended to be optimal for this system and workload, so there is no reason
+					 * to try to reduce this number when there are no active sessions.
+					 */
+					DeleteWaitEventFromSet(SessionPool, MyProcPort->sock);
+					elog(DEBUG1, "Close session %d in backend %d", MyProcPort->sock, MyProcPid);
+
+					pq_getmsgend(&input_message);
+					if (pq_is_reading_msg())
+						pq_endmsgread();
+
+					closesocket(MyProcPort->sock);
+					MyProcPort->sock = PGINVALID_SOCKET;
+					MyProcPort = NULL;
+
+					if (ActiveSession)
+					{
+						DropSessionPreparedStatements(ActiveSession->id);
+						DeleteSession(ActiveSession);
+						ActiveSession = NULL;
+					}
+					whereToSendOutput = DestRemote;
+					/* Need to perform rescheduling to some other session or accept new session */
+					goto ChooseSession;
+				}
+				elog(DEBUG1, "Terminate backend %d", MyProcPid);
 				proc_exit(0);
 
 			case 'd':			/* copy data */
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 54fa4a3..14fd972 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -120,7 +120,10 @@ int			maintenance_work_mem = 16384;
  * register background workers.
  */
 int			NBuffers = 1000;
+int			SessionPoolSize = 0;
+int			SessionPoolPorts = 0;
 int			MaxConnections = 90;
+int			MaxSessions = 1000;
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c
index f9b3309..571c80f 100644
--- a/src/backend/utils/init/postinit.c
+++ b/src/backend/utils/init/postinit.c
@@ -65,7 +65,7 @@
 
 static HeapTuple GetDatabaseTuple(const char *dbname);
 static HeapTuple GetDatabaseTupleByOid(Oid dboid);
-static void PerformAuthentication(Port *port);
+void PerformAuthentication(Port *port);
 static void CheckMyDatabase(const char *name, bool am_superuser);
 static void InitCommunication(void);
 static void ShutdownPostgres(int code, Datum arg);
@@ -180,7 +180,7 @@ GetDatabaseTupleByOid(Oid dboid)
  *
  * returns: nothing.  Will not return at all if there's any failure.
  */
-static void
+void
 PerformAuthentication(Port *port)
 {
 	/* This should be set already, but let's make sure */
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 72f6be3..f82c2cb 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -1871,6 +1871,44 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"max_sessions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the maximum number of client session."),
+			gettext_noop("Maximal number of client sessions which can be handled by one backend if session pooling is switched on. "
+						 "So maximal number of client connections is session_pool_size*max_sessions")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"session_pool_size", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends and maximal number of backends is determined by this parameter."
+						 "Launched backend are never terminated even in case of no active sessions.")
+		},
+		&SessionPoolSize,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"session_pool_ports", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+		 gettext_noop("Number of session ports = number of session pools."),
+		 gettext_noop("Number of extra parts which PostgreSQL will listen to accept client session. Each such port has separate session pool."
+					  "It is intended that each port corresponds to some particular database/user combination, so that all backends in this session "
+					  "pool will handle connection accessing this database. If session_pool_port is non zero then postmaster will always spawn dedicated (non-pooling) "
+					  " backends at the main Postgres port. If session_pool_port is zero and session_pool_size is not zero, then sessions (pooled connection) will be also "
+					  "accepted at main port. Session pool ports are allocatged sequentially: if Postgres main port is 5432 and session_pool_ports is 2, "
+					  "then ports 5433 and 5434 will be used for connection pooling.")
+	    },
+		&SessionPoolPorts,
+		0, 0, MAX_SESSION_PORTS,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
 			NULL
@@ -5104,6 +5142,95 @@ NewGUCNestLevel(void)
 }
 
 /*
+ * Restore the GUC values of this session, saving the current values in their place (the values are effectively swapped)
+ */
+void
+RestoreSessionGUCs(SessionContext* session)
+{
+	SessionGUC* sg;
+	if (session == NULL)
+		return;
+	for (sg = session->gucs; sg != NULL; sg = sg->next)
+	{
+		void* old_extra = sg->var->extra;
+		sg->var->extra = sg->val.extra;
+		switch (sg->var->vartype)
+		{
+		  case PGC_BOOL:
+		  {
+			  struct config_bool *conf = (struct config_bool*)sg->var;
+			  bool oldval = *conf->variable;
+			  *conf->variable = sg->val.val.boolval;
+			  if (conf->assign_hook)
+				  conf->assign_hook(sg->val.val.boolval, sg->val.extra);
+			  sg->val.val.boolval = oldval;
+			  break;
+		  }
+		  case PGC_INT:
+		  {
+			  struct config_int *conf = (struct config_int*)sg->var;
+			  int oldval = *conf->variable;
+			  *conf->variable = sg->val.val.intval;
+			  if (conf->assign_hook)
+				  conf->assign_hook(sg->val.val.intval, sg->val.extra);
+			  sg->val.val.intval = oldval;
+			  break;
+		  }
+		  case PGC_REAL:
+		  {
+			  struct config_real *conf = (struct config_real*)sg->var;
+			  double oldval = *conf->variable;
+			  *conf->variable = sg->val.val.realval;
+			  if (conf->assign_hook)
+				  conf->assign_hook(sg->val.val.realval, sg->val.extra);
+			  sg->val.val.realval = oldval;
+			  break;
+		  }
+		  case PGC_STRING:
+		  {
+			  struct config_string *conf = (struct config_string*)sg->var;
+			  char* oldval = *conf->variable;
+			  *conf->variable = sg->val.val.stringval;
+			  if (conf->assign_hook)
+				  conf->assign_hook(sg->val.val.stringval, sg->val.extra);
+			  sg->val.val.stringval = oldval;
+			  break;
+		  }
+		  case PGC_ENUM:
+		  {
+			  struct config_enum *conf = (struct config_enum*)sg->var;
+			  int oldval = *conf->variable;
+			  *conf->variable = sg->val.val.enumval;
+			  if (conf->assign_hook)
+				  conf->assign_hook(sg->val.val.enumval, sg->val.extra);
+			  sg->val.val.enumval = oldval;
+			  break;
+		  }
+		}
+		sg->val.extra = old_extra;
+	}
+}
+
+/*
+ * Deallocate memory for session GUCs
+ */
+void
+ReleaseSessionGUCs(SessionContext* session)
+{
+	SessionGUC* sg;
+	for (sg = session->gucs; sg != NULL; sg = sg->next)
+	{
+		if (sg->val.extra)
+			set_extra_field(sg->var, &sg->val.extra, NULL);
+		if (sg->var->vartype == PGC_STRING)
+		{
+			struct config_string* conf = (struct config_string*)sg->var;
+			set_string_field(conf, &sg->val.val.stringval, NULL);
+		}
+	}
+}
+
+/*
  * Do GUC processing at transaction or subtransaction commit or abort, or
  * when exiting a function that has proconfig settings, or when undoing a
  * transient assignment to some GUC variables.  (The name is thus a bit of
@@ -5172,7 +5299,42 @@ AtEOXact_GUC(bool isCommit, int nestLevel)
 				else if (stack->state == GUC_SET)
 				{
 					/* we keep the current active value */
-					discard_stack_value(gconf, &stack->prior);
+					if (ActiveSession)
+					{
+						SessionGUC* sg;
+						for (sg = ActiveSession->gucs; sg != NULL && sg->var != gconf; sg = sg->next);
+						if (sg == NULL)
+						{
+							sg = MemoryContextAllocZero(ActiveSession->memory,
+														sizeof(SessionGUC));
+							sg->var = gconf;
+							sg->next = ActiveSession->gucs;
+							ActiveSession->gucs = sg;
+						}
+						switch (gconf->vartype)
+						{
+						  case PGC_BOOL:
+							sg->val.val.boolval = stack->prior.val.boolval;
+							break;
+						  case PGC_INT:
+							sg->val.val.intval = stack->prior.val.intval;
+							break;
+						  case PGC_REAL:
+							sg->val.val.realval = stack->prior.val.realval;
+							break;
+						  case PGC_STRING:
+							sg->val.val.stringval = stack->prior.val.stringval;
+							break;
+						  case PGC_ENUM:
+							sg->val.val.enumval = stack->prior.val.enumval;
+							break;
+						}
+						sg->val.extra = stack->prior.extra;
+					}
+					else
+					{
+						discard_stack_value(gconf, &stack->prior);
+					}
 				}
 				else			/* must be GUC_LOCAL */
 					restorePrior = true;
diff --git a/src/include/commands/prepare.h b/src/include/commands/prepare.h
index ffec029..cb5f8d4 100644
--- a/src/include/commands/prepare.h
+++ b/src/include/commands/prepare.h
@@ -56,5 +56,6 @@ extern TupleDesc FetchPreparedStatementResultDesc(PreparedStatement *stmt);
 extern List *FetchPreparedStatementTargetList(PreparedStatement *stmt);
 
 extern void DropAllPreparedStatements(void);
+extern void DropSessionPreparedStatements(char const* sessionId);
 
 #endif							/* PREPARE_H */
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 2e7725d..9169b21 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -71,6 +71,7 @@ extern int	pq_getbyte(void);
 extern int	pq_peekbyte(void);
 extern int	pq_getbyte_if_available(unsigned char *c);
 extern int	pq_putbytes(const char *s, size_t len);
+extern int  pq_available_bytes(void);
 
 /*
  * prototypes for functions in be-secure.c
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 54ee273..66d7e33 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -157,6 +157,9 @@ extern PGDLLIMPORT char *DataDir;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int SessionPoolPorts;
 extern PGDLLIMPORT int max_worker_processes;
 extern int	max_parallel_workers;
 
@@ -175,6 +178,8 @@ extern char pkglib_path[];
 extern char postgres_exec_path[];
 #endif
 
+#define MAX_SESSION_PORTS	8
+
 /*
  * done in storage/backendid.h for now.
  *
@@ -420,6 +425,7 @@ extern void InitializeMaxBackends(void);
 extern void InitPostgres(const char *in_dbname, Oid dboid, const char *username,
 			 Oid useroid, char *out_dbname);
 extern void BaseInit(void);
+extern void PerformAuthentication(struct Port *port);
 
 /* in utils/init/miscinit.c */
 extern bool IgnoreSystemIndexes;
diff --git a/src/include/port.h b/src/include/port.h
index 3e528fa..8a0ac98 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index d31c28f..e667434 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -447,6 +447,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -456,6 +457,7 @@ int			pgwin32_connect(SOCKET s, const struct sockaddr *name, int namelen);
 int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptfds, const struct timeval *timeout);
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 const char *pgwin32_socket_strerror(int err);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index 1877eef..c9527c9 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -62,6 +62,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+struct Port;
+extern int	ProcessStartupPacket(struct Port *port, bool SSLdone, MemoryContext memctx);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index a4bcb48..10f30d1 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -176,6 +176,8 @@ extern int WaitLatch(volatile Latch *latch, int wakeEvents, long timeout,
 extern int WaitLatchOrSocket(volatile Latch *latch, int wakeEvents,
 				  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, pgsocket fd);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index 5c19a61..11eded3 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -21,6 +21,7 @@
 #include "storage/lock.h"
 #include "storage/pg_sema.h"
 #include "storage/proclist_types.h"
+#include "utils/guc_tables.h"
 
 /*
  * Each backend advertises up to PGPROC_MAX_CACHED_SUBXIDS TransactionIds
@@ -273,6 +274,29 @@ extern PGDLLIMPORT PROC_HDR *ProcGlobal;
 
 extern PGPROC *PreparedXactProcs;
 
+typedef struct SessionGUC
+{
+	struct SessionGUC* next;
+	config_var_value   val;
+	struct config_generic *var;
+} SessionGUC;
+
+/*
+ * Information associated with client session
+ */
+typedef struct SessionContext
+{
+	MemoryContext memory; /* memory context used for global session data (replacement of TopMemoryContext) */
+	struct Port* port;           /* connection port */
+	char*        id;             /* session identifier used to construct unique prepared statement names */
+	Oid          tempNamespace;  /* temporary namespace */
+	Oid          tempToastNamespace;  /* temporary toast namespace */
+	SessionGUC*  gucs;
+} SessionContext;
+
+
+extern PGDLLIMPORT SessionContext *ActiveSession; 
+
 /* Accessor for PGPROC given a pgprocno. */
 #define GetPGProcByNumber(n) (&ProcGlobal->allProcs[(n)])
 
diff --git a/src/include/tcop/tcopprot.h b/src/include/tcop/tcopprot.h
index 63b4e48..191eeaa 100644
--- a/src/include/tcop/tcopprot.h
+++ b/src/include/tcop/tcopprot.h
@@ -34,6 +34,7 @@ extern CommandDest whereToSendOutput;
 extern PGDLLIMPORT const char *debug_query_string;
 extern int	max_stack_depth;
 extern int	PostAuthDelay;
+extern pgsocket SessionPoolSock;
 
 /* GUC-configurable parameters */
 
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index 77daa5a..86e89e8 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -394,6 +394,12 @@ extern Size EstimateGUCStateSpace(void);
 extern void SerializeGUCState(Size maxsize, char *start_address);
 extern void RestoreGUCState(void *gucstate);
 
+/* Session pooling support functions */
+struct SessionContext;
+extern void RestoreSessionGUCs(struct SessionContext* session);
+extern void ReleaseSessionGUCs(struct SessionContext* session);
+
+
 /* Support for messages reported from GUC check hooks */
 
 extern PGDLLIMPORT char *GUC_check_errmsg_string;
#52Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Konstantin Knizhnik (#51)
1 attachment(s)
Re: Built-in connection pooling

On 06.04.2018 20:00, Konstantin Knizhnik wrote:

Attached please find a new version of the patch with several bug fixes
plus support for more than one session pool, each associated with a
different port. It is now possible to make the postmaster listen on
several ports for accepting pooled connections, while leaving the main
Postgres port for dedicated backends.
Each session pool is intended to be used for a particular database/user
combination.
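
For example, a minimal configuration using this mode could look like the
following (the values are only illustrative):

    port = 5432              # dedicated (non-pooled) backends stay here
    session_pool_size = 8    # 8 worker backends per session pool
    session_pool_ports = 2   # pooled connections accepted on 5433 and 5434
    max_sessions = 1000      # client sessions per backend

With such settings a client connecting to port 5433 or 5434 gets a pooled
session, while port 5432 keeps the usual backend-per-connection behavior.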

Sorry, wrong patch was attached.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

session_pool-8.patch (text/x-patch)
diff --git a/src/backend/catalog/namespace.c b/src/backend/catalog/namespace.c
index 93c4bbf..dfc072c 100644
--- a/src/backend/catalog/namespace.c
+++ b/src/backend/catalog/namespace.c
@@ -194,6 +194,7 @@ char	   *namespace_search_path = NULL;
 /* Local functions */
 static void recomputeNamespacePath(void);
 static void InitTempTableNamespace(void);
+static Oid  GetTempTableNamespace(void);
 static void RemoveTempRelations(Oid tempNamespaceId);
 static void RemoveTempRelationsCallback(int code, Datum arg);
 static void NamespaceCallback(Datum arg, int cacheid, uint32 hashvalue);
@@ -441,9 +442,7 @@ RangeVarGetCreationNamespace(const RangeVar *newRelation)
 		if (strcmp(newRelation->schemaname, "pg_temp") == 0)
 		{
 			/* Initialize temp namespace if first time through */
-			if (!OidIsValid(myTempNamespace))
-				InitTempTableNamespace();
-			return myTempNamespace;
+			return GetTempTableNamespace();
 		}
 		/* use exact schema given */
 		namespaceId = get_namespace_oid(newRelation->schemaname, false);
@@ -452,9 +451,7 @@ RangeVarGetCreationNamespace(const RangeVar *newRelation)
 	else if (newRelation->relpersistence == RELPERSISTENCE_TEMP)
 	{
 		/* Initialize temp namespace if first time through */
-		if (!OidIsValid(myTempNamespace))
-			InitTempTableNamespace();
-		return myTempNamespace;
+		return GetTempTableNamespace();
 	}
 	else
 	{
@@ -463,8 +460,7 @@ RangeVarGetCreationNamespace(const RangeVar *newRelation)
 		if (activeTempCreationPending)
 		{
 			/* Need to initialize temp namespace */
-			InitTempTableNamespace();
-			return myTempNamespace;
+			return GetTempTableNamespace();
 		}
 		namespaceId = activeCreationNamespace;
 		if (!OidIsValid(namespaceId))
@@ -2902,9 +2898,7 @@ LookupCreationNamespace(const char *nspname)
 	if (strcmp(nspname, "pg_temp") == 0)
 	{
 		/* Initialize temp namespace if first time through */
-		if (!OidIsValid(myTempNamespace))
-			InitTempTableNamespace();
-		return myTempNamespace;
+		return GetTempTableNamespace();
 	}
 
 	namespaceId = get_namespace_oid(nspname, false);
@@ -2967,9 +2961,7 @@ QualifiedNameGetCreationNamespace(List *names, char **objname_p)
 		if (strcmp(schemaname, "pg_temp") == 0)
 		{
 			/* Initialize temp namespace if first time through */
-			if (!OidIsValid(myTempNamespace))
-				InitTempTableNamespace();
-			return myTempNamespace;
+			return GetTempTableNamespace();
 		}
 		/* use exact schema given */
 		namespaceId = get_namespace_oid(schemaname, false);
@@ -2982,8 +2974,7 @@ QualifiedNameGetCreationNamespace(List *names, char **objname_p)
 		if (activeTempCreationPending)
 		{
 			/* Need to initialize temp namespace */
-			InitTempTableNamespace();
-			return myTempNamespace;
+			return GetTempTableNamespace();
 		}
 		namespaceId = activeCreationNamespace;
 		if (!OidIsValid(namespaceId))
@@ -3250,8 +3241,11 @@ void
 SetTempNamespaceState(Oid tempNamespaceId, Oid tempToastNamespaceId)
 {
 	/* Worker should not have created its own namespaces ... */
-	Assert(myTempNamespace == InvalidOid);
-	Assert(myTempToastNamespace == InvalidOid);
+	if (!ActiveSession)
+	{
+		Assert(myTempNamespace == InvalidOid);
+		Assert(myTempToastNamespace == InvalidOid);
+	}
 	Assert(myTempNamespaceSubID == InvalidSubTransactionId);
 
 	/* Assign same namespace OIDs that leader has */
@@ -3771,6 +3765,22 @@ recomputeNamespacePath(void)
 	list_free(oidlist);
 }
 
+static Oid
+GetTempTableNamespace(void)
+{
+	if (ActiveSession)
+	{
+		if (!OidIsValid(ActiveSession->tempNamespace))
+			InitTempTableNamespace();
+	}
+	else
+	{
+		if (!OidIsValid(myTempNamespace))
+			InitTempTableNamespace();
+	}
+	return myTempNamespace;
+}
+
 /*
  * InitTempTableNamespace
  *		Initialize temp table namespace on first use in a particular backend
@@ -3782,8 +3792,6 @@ InitTempTableNamespace(void)
 	Oid			namespaceId;
 	Oid			toastspaceId;
 
-	Assert(!OidIsValid(myTempNamespace));
-
 	/*
 	 * First, do permission check to see if we are authorized to make temp
 	 * tables.  We use a nonstandard error message here since "databasename:
@@ -3822,7 +3830,10 @@ InitTempTableNamespace(void)
 				(errcode(ERRCODE_READ_ONLY_SQL_TRANSACTION),
 				 errmsg("cannot create temporary tables during a parallel operation")));
 
-	snprintf(namespaceName, sizeof(namespaceName), "pg_temp_%d", MyBackendId);
+	if (ActiveSession)
+		snprintf(namespaceName, sizeof(namespaceName), "pg_temp_%d_%s", MyBackendId, ActiveSession->id);
+	else
+		snprintf(namespaceName, sizeof(namespaceName), "pg_temp_%d", MyBackendId);
 
 	namespaceId = get_namespace_oid(namespaceName, true);
 	if (!OidIsValid(namespaceId))
@@ -3854,8 +3865,10 @@ InitTempTableNamespace(void)
 	 * it. (We assume there is no need to clean it out if it does exist, since
 	 * dropping a parent table should make its toast table go away.)
 	 */
-	snprintf(namespaceName, sizeof(namespaceName), "pg_toast_temp_%d",
-			 MyBackendId);
+	if (ActiveSession)
+		snprintf(namespaceName, sizeof(namespaceName), "pg_toast_temp_%d_%s", MyBackendId, ActiveSession->id);
+	else
+		snprintf(namespaceName, sizeof(namespaceName), "pg_toast_temp_%d", MyBackendId);
 
 	toastspaceId = get_namespace_oid(namespaceName, true);
 	if (!OidIsValid(toastspaceId))
@@ -3873,7 +3886,11 @@ InitTempTableNamespace(void)
 	 */
 	myTempNamespace = namespaceId;
 	myTempToastNamespace = toastspaceId;
-
+	if (ActiveSession)
+	{
+		ActiveSession->tempNamespace = namespaceId;
+		ActiveSession->tempToastNamespace = toastspaceId;
+	}
 	/* It should not be done already. */
 	AssertState(myTempNamespaceSubID == InvalidSubTransactionId);
 	myTempNamespaceSubID = GetCurrentSubTransactionId();
diff --git a/src/backend/catalog/storage.c b/src/backend/catalog/storage.c
index cff49ba..b728ab1 100644
--- a/src/backend/catalog/storage.c
+++ b/src/backend/catalog/storage.c
@@ -25,6 +25,7 @@
 #include "access/xloginsert.h"
 #include "access/xlogutils.h"
 #include "catalog/catalog.h"
+#include "catalog/namespace.h"
 #include "catalog/storage.h"
 #include "catalog/storage_xlog.h"
 #include "storage/freespace.h"
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index b945b15..8e8a737 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -813,3 +813,32 @@ build_regtype_array(Oid *param_types, int num_params)
 	result = construct_array(tmp_ary, num_params, REGTYPEOID, 4, true, 'i');
 	return PointerGetDatum(result);
 }
+
+/*
+ * Drop all statements prepared in the specified session.
+ */
+void
+DropSessionPreparedStatements(char const* sessionId)
+{
+	HASH_SEQ_STATUS seq;
+	PreparedStatement *entry;
+	size_t idLen = strlen(sessionId);
+
+	/* nothing cached */
+	if (!prepared_queries)
+		return;
+
+	/* walk over cache */
+	hash_seq_init(&seq, prepared_queries);
+	while ((entry = hash_seq_search(&seq)) != NULL)
+	{
+		if (strncmp(entry->stmt_name, sessionId, idLen) == 0 && entry->stmt_name[idLen] == '.')
+		{
+			/* Release the plancache entry */
+			DropCachedPlan(entry->plansource);
+
+			/* Now we can remove the hash table entry */
+			hash_search(prepared_queries, entry->stmt_name, HASH_REMOVE, NULL);
+		}
+	}
+}
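
(For example, with this naming scheme a statement prepared as "S_1" by
session 7 is stored in the backend's prepared statement hash table as
"7.S_1", so DropSessionPreparedStatements("7") drops exactly the
statements belonging to that session.)
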
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index a4f6d4d..7f40edb 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -1029,6 +1029,17 @@ pq_peekbyte(void)
 }
 
 /* --------------------------------
+ *		pq_available_bytes	- get number of buffered bytes available for reading.
+ *
+ * --------------------------------
+ */
+int
+pq_available_bytes(void)
+{
+	return PqRecvLength - PqRecvPointer;
+}
+
+/* --------------------------------
  *		pq_getbyte_if_available - get a single byte from connection,
  *			if available
  *
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index aba1e92..56ec998 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o dynloader.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o dynloader.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..5f3f929
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,151 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, &dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr *cmsg;
+	char		buf[CMSG_SPACE(sizeof(sock))];
+
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	if (sendmsg(chan, &msg, 0) < 0)
+		return -1;
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, &src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("Failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works.)
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	struct msghdr msg = { 0 };
+	char		c_buffer[256];
+	char		m_buffer[256];
+	struct iovec io;
+	struct cmsghdr *cmsg;
+	pgsocket	sock;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	if (recvmsg(chan, &msg, 0) < 0)
+		return PGINVALID_SOCKET;
+
+	/* Guard against a truncated or empty control message */
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (cmsg == NULL || cmsg->cmsg_level != SOL_SOCKET || cmsg->cmsg_type != SCM_RIGHTS)
+		return PGINVALID_SOCKET;
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+
+	return sock;
+#endif
+}
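
For illustration, a simplified sketch of the intended postmaster/worker
handshake using these helpers (not part of the patch; error handling is
omitted and plain int stands in for pgsocket; the pid argument of
pg_send_sock is only used on Windows):

    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    extern int pg_send_sock(int chan, int sock, pid_t pid);
    extern int pg_recv_sock(int chan);

    static void
    dispatch_example(int client_sock)
    {
        int     chan[2];
        pid_t   pid;

        /* One SOCK_DGRAM pair per worker backend, created before fork() */
        socketpair(AF_UNIX, SOCK_DGRAM, 0, chan);

        pid = fork();
        if (pid == 0)
        {
            /* Worker backend: receive a session socket from the postmaster */
            int     sock;

            close(chan[1]);
            sock = pg_recv_sock(chan[0]);
            /* ... add "sock" to the WaitEventSet and start serving it ... */
        }
        else
        {
            /* Postmaster: hand the accepted client socket over to the worker */
            close(chan[0]);
            pg_send_sock(chan[1], client_sock, pid);  /* pid used only on Windows */
            close(client_sock);  /* the worker now owns its own copy */
        }
    }
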
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index f4356fe..7fd901f 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -726,3 +726,65 @@ pgwin32_socket_strerror(int err)
 	}
 	return wserrbuf;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+    union {
+       struct sockaddr_in inaddr;
+       struct sockaddr addr;
+    } a;
+    SOCKET listener;
+    int e;
+    socklen_t addrlen = sizeof(a.inaddr);
+    DWORD flags = 0;
+    int reuse = 1;
+
+    socks[0] = socks[1] = -1;
+
+    listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+    if (listener == -1)
+        return SOCKET_ERROR;
+
+    memset(&a, 0, sizeof(a));
+    a.inaddr.sin_family = AF_INET;
+    a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+    a.inaddr.sin_port = 0;
+
+    for (;;) {
+        if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+               (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+            break;
+        if  (bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+            break;
+
+        memset(&a, 0, sizeof(a));
+        if  (getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+            break;
+        a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+        a.inaddr.sin_family = AF_INET;
+
+        if (listen(listener, 1) == SOCKET_ERROR)
+            break;
+
+        socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+        if (socks[0] == -1)
+            break;
+        if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+            break;
+
+        socks[1] = accept(listener, NULL, NULL);
+        if (socks[1] == -1)
+            break;
+
+        closesocket(listener);
+        return 0;
+    }
+
+    e = WSAGetLastError();
+    closesocket(listener);
+    closesocket(socks[0]);
+    closesocket(socks[1]);
+    WSASetLastError(e);
+    socks[0] = socks[1] = -1;
+    return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index f3ddf82..707b880 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -169,6 +169,7 @@ typedef struct bkend
 	pid_t		pid;			/* process id of backend */
 	int32		cancel_key;		/* cancel key for cancels for this backend */
 	int			child_slot;		/* PMChildSlot for this backend, if any */
+	pgsocket    session_send_sock;  /* write end of the socket pipe used to send session socket descriptors to this backend */
 
 	/*
 	 * Flavor of backend or auxiliary process.  Note that BACKEND_TYPE_WALSND
@@ -179,6 +180,8 @@ typedef struct bkend
 	bool		dead_end;		/* is it going to send an error and quit? */
 	bool		bgworker_notify;	/* gets bgworker start/stop notifications */
 	dlist_node	elem;			/* list link in BackendList */
+	int         session_pool_id;    /* identifier of this backend's session pool */
+	int         worker_id;      /* index of this worker within its session pool */
 } Backend;
 
 static dlist_head BackendList = DLIST_STATIC_INIT(BackendList);
@@ -189,7 +192,14 @@ static Backend *ShmemBackendArray;
 
 BackgroundWorker *MyBgworkerEntry = NULL;
 
+typedef struct PostmasterSessionPool
+{
+	Backend** workers; /* pool backends */
+	int n_workers;    /* number of launched worker backends in this pool so far */
+	int rr_index;     /* index of the current backend, used for round-robin distribution of sessions across backends */
+} PostmasterSessionPool;
 
+static PostmasterSessionPool SessionPools[MAX_SESSION_PORTS + 1]; /* slot 0 corresponds to the main port */
 
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
@@ -213,7 +223,7 @@ int			ReservedBackends;
 
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
-static pgsocket ListenSocket[MAXLISTEN];
+static pgsocket ListenSocket[MAX_SESSION_PORTS + 1][MAXLISTEN];
 
 /*
  * Set by the -o option
@@ -411,8 +421,7 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
-static int	ProcessStartupPacket(Port *port, bool SSLdone);
+static int	BackendStartup(Port *port, int session_pool_id);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
 static int	initMasks(fd_set *rmask);
@@ -485,8 +494,9 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket sessionsocket;
 	char		DataDir[MAXPGPATH];
-	pgsocket	ListenSocket[MAXLISTEN];
+	pgsocket	ListenSocket[MAX_SESSION_PORTS + 1][MAXLISTEN];
 	int32		MyCancelKey;
 	int			MyPMChildSlot;
 #ifndef WIN32
@@ -577,7 +587,7 @@ PostmasterMain(int argc, char *argv[])
 	int			status;
 	char	   *userDoption = NULL;
 	bool		listen_addr_saved = false;
-	int			i;
+	int			i, j;
 	char	   *output_config_variable = NULL;
 
 	MyProcPid = PostmasterPid = getpid();
@@ -990,8 +1000,9 @@ PostmasterMain(int argc, char *argv[])
 	 * First, mark them all closed, and set up an on_proc_exit function that's
 	 * charged with closing the sockets again at postmaster shutdown.
 	 */
-	for (i = 0; i < MAXLISTEN; i++)
-		ListenSocket[i] = PGINVALID_SOCKET;
+	for (i = 0; i <= SessionPoolPorts; i++)
+		for (j = 0; j < MAXLISTEN; j++)
+			ListenSocket[i][j] = PGINVALID_SOCKET;
 
 	on_proc_exit(CloseServerPorts, 0);
 
@@ -1019,33 +1030,35 @@ PostmasterMain(int argc, char *argv[])
 		{
 			char	   *curhost = (char *) lfirst(l);
 
-			if (strcmp(curhost, "*") == 0)
-				status = StreamServerPort(AF_UNSPEC, NULL,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-			else
-				status = StreamServerPort(AF_UNSPEC, curhost,
-										  (unsigned short) PostPortNumber,
-										  NULL,
-										  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i <= SessionPoolPorts; i++)
 			{
-				success++;
-				/* record the first successful host addr in lockfile */
-				if (!listen_addr_saved)
+				if (strcmp(curhost, "*") == 0)
+					status = StreamServerPort(AF_UNSPEC, NULL,
+											  (unsigned short) PostPortNumber + i,
+											  NULL,
+											  ListenSocket[i], MAXLISTEN);
+				else
+					status = StreamServerPort(AF_UNSPEC, curhost,
+											  (unsigned short) PostPortNumber + i,
+											  NULL,
+											  ListenSocket[i], MAXLISTEN);
+
+				if (status == STATUS_OK)
 				{
-					AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
-					listen_addr_saved = true;
+					success++;
+					/* record the first successful host addr in lockfile */
+					if (!listen_addr_saved)
+					{
+						AddToDataDirLockFile(LOCK_FILE_LINE_LISTEN_ADDR, curhost);
+						listen_addr_saved = true;
+					}
 				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create listen socket for \"%s\"",
+									curhost)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create listen socket for \"%s\"",
-								curhost)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any TCP/IP sockets")));
@@ -1056,7 +1069,7 @@ PostmasterMain(int argc, char *argv[])
 
 #ifdef USE_BONJOUR
 	/* Register for Bonjour only if we opened TCP socket(s) */
-	if (enable_bonjour && ListenSocket[0] != PGINVALID_SOCKET)
+	if (enable_bonjour && ListenSocket[0][0] != PGINVALID_SOCKET)
 	{
 		DNSServiceErrorType err;
 
@@ -1117,24 +1130,26 @@ PostmasterMain(int argc, char *argv[])
 		{
 			char	   *socketdir = (char *) lfirst(l);
 
-			status = StreamServerPort(AF_UNIX, NULL,
-									  (unsigned short) PostPortNumber,
-									  socketdir,
-									  ListenSocket, MAXLISTEN);
-
-			if (status == STATUS_OK)
+			for (i = 0; i <= SessionPoolPorts; i++)
 			{
-				success++;
-				/* record the first successful Unix socket in lockfile */
-				if (success == 1)
-					AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				status = StreamServerPort(AF_UNIX, NULL,
+										  (unsigned short) PostPortNumber + i,
+										  socketdir,
+										  ListenSocket[i], MAXLISTEN);
+
+				if (status == STATUS_OK)
+				{
+					success++;
+					/* record the first successful Unix socket in lockfile */
+					if (success == 1)
+						AddToDataDirLockFile(LOCK_FILE_LINE_SOCKET_DIR, socketdir);
+				}
+				else
+					ereport(WARNING,
+							(errmsg("could not create Unix-domain socket in directory \"%s\"",
+									socketdir)));
 			}
-			else
-				ereport(WARNING,
-						(errmsg("could not create Unix-domain socket in directory \"%s\"",
-								socketdir)));
 		}
-
 		if (!success && elemlist != NIL)
 			ereport(FATAL,
 					(errmsg("could not create any Unix-domain sockets")));
@@ -1147,7 +1162,7 @@ PostmasterMain(int argc, char *argv[])
 	/*
 	 * check that we have some socket to listen on
 	 */
-	if (ListenSocket[0] == PGINVALID_SOCKET)
+	if (ListenSocket[0][0] == PGINVALID_SOCKET)
 		ereport(FATAL,
 				(errmsg("no socket created for listening")));
 
@@ -1379,7 +1394,7 @@ PostmasterMain(int argc, char *argv[])
 static void
 CloseServerPorts(int status, Datum arg)
 {
-	int			i;
+	int			i, j;
 
 	/*
 	 * First, explicitly close all the socket FDs.  We used to just let this
@@ -1387,12 +1402,15 @@ CloseServerPorts(int status, Datum arg)
 	 * before we remove the postmaster.pid lockfile; otherwise there's a race
 	 * condition if a new postmaster wants to re-use the TCP port number.
 	 */
-	for (i = 0; i < MAXLISTEN; i++)
+	for (i = 0; i <= SessionPoolPorts; i++)
 	{
-		if (ListenSocket[i] != PGINVALID_SOCKET)
+		for (j = 0; j < MAXLISTEN; j++)
 		{
-			StreamClose(ListenSocket[i]);
-			ListenSocket[i] = PGINVALID_SOCKET;
+			if (ListenSocket[i][j] != PGINVALID_SOCKET)
+			{
+				StreamClose(ListenSocket[i][j]);
+				ListenSocket[i][j] = PGINVALID_SOCKET;
+			}
 		}
 	}
 
@@ -1741,27 +1759,30 @@ ServerLoop(void)
 		 */
 		if (selres > 0)
 		{
-			int			i;
+			int			i, j;
 
-			for (i = 0; i < MAXLISTEN; i++)
+			for (i = 0; i <= SessionPoolPorts; i++)
 			{
-				if (ListenSocket[i] == PGINVALID_SOCKET)
-					break;
-				if (FD_ISSET(ListenSocket[i], &rmask))
+				for (j = 0; j < MAXLISTEN; j++)
 				{
-					Port	   *port;
-
-					port = ConnCreate(ListenSocket[i]);
-					if (port)
+					if (ListenSocket[i][j] == PGINVALID_SOCKET)
+						break;
+					if (FD_ISSET(ListenSocket[i][j], &rmask))
 					{
-						BackendStartup(port);
-
-						/*
-						 * We no longer need the open socket or port structure
-						 * in this process
-						 */
-						StreamClose(port->sock);
-						ConnFree(port);
+						Port	   *port;
+
+						port = ConnCreate(ListenSocket[i][j]);
+						if (port)
+						{
+							BackendStartup(port, i);
+
+							/*
+							 * We no longer need the open socket or port structure
+							 * in this process
+							 */
+							StreamClose(port->sock);
+							ConnFree(port);
+						}
 					}
 				}
 			}
@@ -1913,20 +1934,23 @@ static int
 initMasks(fd_set *rmask)
 {
 	int			maxsock = -1;
-	int			i;
+	int			i, j;
 
 	FD_ZERO(rmask);
 
-	for (i = 0; i < MAXLISTEN; i++)
+	for (i = 0; i <= SessionPoolPorts; i++)
 	{
-		int			fd = ListenSocket[i];
+		for (j = 0; j < MAXLISTEN; j++)
+		{
+			int			fd = ListenSocket[i][j];
 
-		if (fd == PGINVALID_SOCKET)
-			break;
-		FD_SET(fd, rmask);
+			if (fd == PGINVALID_SOCKET)
+				break;
+			FD_SET(fd, rmask);
 
-		if (fd > maxsock)
-			maxsock = fd;
+			if (fd > maxsock)
+				maxsock = fd;
+		}
 	}
 
 	return maxsock + 1;
@@ -1944,8 +1968,8 @@ initMasks(fd_set *rmask)
  * send anything to the client, which would typically be appropriate
  * if we detect a communications failure.)
  */
-static int
-ProcessStartupPacket(Port *port, bool SSLdone)
+int
+ProcessStartupPacket(Port *port, bool SSLdone, MemoryContext memctx)
 {
 	int32		len;
 	void	   *buf;
@@ -1978,7 +2002,6 @@ ProcessStartupPacket(Port *port, bool SSLdone)
 				 errmsg("invalid length of startup packet")));
 		return STATUS_ERROR;
 	}
-
 	/*
 	 * Allocate at least the size of an old-style startup packet, plus one
 	 * extra byte, and make sure all are zeroes.  This ensures we will have
@@ -2043,7 +2066,7 @@ retry1:
 #endif
 		/* regular startup packet, cancel, etc packet should follow... */
 		/* but not another SSL negotiation request */
-		return ProcessStartupPacket(port, true);
+		return ProcessStartupPacket(port, true, memctx);
 	}
 
 	/* Could add additional special packet types here */
@@ -2073,7 +2096,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2099,7 +2122,6 @@ retry1:
 			if (valoffset >= len)
 				break;			/* missing value, will complain below */
 			valptr = ((char *) buf) + valoffset;
-
 			if (strcmp(nameptr, "database") == 0)
 				port->database_name = pstrdup(valptr);
 			else if (strcmp(nameptr, "user") == 0)
@@ -2449,7 +2471,7 @@ ConnCreate(int serverFd)
 		ConnFree(port);
 		return NULL;
 	}
-
+	SessionPoolSock = PGINVALID_SOCKET;
 	/*
 	 * Allocate GSSAPI specific state struct
 	 */
@@ -2498,7 +2520,7 @@ ConnFree(Port *conn)
 void
 ClosePostmasterPorts(bool am_syslogger)
 {
-	int			i;
+	int			i, j;
 
 #ifndef WIN32
 
@@ -2515,12 +2537,15 @@ ClosePostmasterPorts(bool am_syslogger)
 #endif
 
 	/* Close the listen sockets */
-	for (i = 0; i < MAXLISTEN; i++)
+	for (i = 0; i <= SessionPoolPorts; i++)
 	{
-		if (ListenSocket[i] != PGINVALID_SOCKET)
+		for (j = 0; j < MAXLISTEN; j++)
 		{
-			StreamClose(ListenSocket[i]);
-			ListenSocket[i] = PGINVALID_SOCKET;
+			if (ListenSocket[i][j] != PGINVALID_SOCKET)
+			{
+				StreamClose(ListenSocket[i][j]);
+				ListenSocket[i][j] = PGINVALID_SOCKET;
+			}
 		}
 	}
 
@@ -3236,6 +3261,28 @@ CleanupBackgroundWorker(int pid,
 }
 
 /*
+ * Unlink backend from backend's list and free memory
+ */
+static void UnlinkBackend(Backend* bp)
+{
+	if (bp->bkend_type == BACKEND_TYPE_NORMAL
+		&& bp->session_send_sock != PGINVALID_SOCKET)
+	{
+		PostmasterSessionPool* pool = &SessionPools[bp->session_pool_id];
+		Assert(pool->n_workers > bp->worker_id && pool->workers[bp->worker_id] == bp);
+		if (--pool->n_workers != 0)
+		{
+			/* Move the last worker into the freed slot and fix its index */
+			pool->workers[bp->worker_id] = pool->workers[pool->n_workers];
+			pool->workers[bp->worker_id]->worker_id = bp->worker_id;
+			pool->rr_index %= pool->n_workers;
+		}
+		closesocket(bp->session_send_sock);
+		elog(DEBUG2, "Cleanup backend %d", bp->pid);
+	}
+	dlist_delete(&bp->elem);
+	free(bp);
+}
+
+/*
  * CleanupBackend -- cleanup after terminated backend.
  *
  * Remove all local state associated with backend.
@@ -3312,8 +3359,7 @@ CleanupBackend(int pid,
 				 */
 				BackgroundWorkerStopNotifications(bp->pid);
 			}
-			dlist_delete(iter.cur);
-			free(bp);
+			UnlinkBackend(bp);
 			break;
 		}
 	}
@@ -3415,8 +3461,7 @@ HandleChildCrash(int pid, int exitstatus, const char *procname)
 				ShmemBackendArrayRemove(bp);
 #endif
 			}
-			dlist_delete(iter.cur);
-			free(bp);
+			UnlinkBackend(bp);
 			/* Keep looping so we can signal remaining backends */
 		}
 		else
@@ -4013,16 +4058,35 @@ TerminateChildren(int signal)
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
 static int
-BackendStartup(Port *port)
+BackendStartup(Port *port, int session_pool_id)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
+	pgsocket        session_pipe[2];
+	PostmasterSessionPool* pool = &SessionPools[session_pool_id];
+	bool dedicated_backend = SessionPoolSize == 0 || (SessionPoolPorts != 0 && session_pool_id == 0);
+
+	if (!dedicated_backend && pool->n_workers >= SessionPoolSize)
+	{
+		Backend* worker = pool->workers[pool->rr_index];
+		/* With session pooling, instead of spawning a new backend, open the new session in one of the existing backends. */
+		elog(DEBUG2, "Start new session in pool %d for socket %d at backend %d",
+			 session_pool_id, port->sock, worker->pid);
+		/* Send connection socket to the worker backend */
+		if (pg_send_sock(worker->session_send_sock, port->sock, worker->pid) < 0)
+			elog(FATAL, "Failed to send session socket: %m");
+
+		pool->rr_index = (pool->rr_index + 1) % pool->n_workers; /* round-robin */
+
+		return STATUS_OK;
+	}
+
 
 	/*
 	 * Create backend data structure.  Better before the fork() so we can
 	 * handle failure cleanly.
 	 */
-	bn = (Backend *) malloc(sizeof(Backend));
+	bn = (Backend *) calloc(1, sizeof(Backend));
 	if (!bn)
 	{
 		ereport(LOG,
@@ -4030,7 +4094,6 @@ BackendStartup(Port *port)
 				 errmsg("out of memory")));
 		return STATUS_ERROR;
 	}
-
 	/*
 	 * Compute the cancel key that will be assigned to this backend. The
 	 * backend will have its own copy in the forked-off process' value of
@@ -4063,12 +4126,28 @@ BackendStartup(Port *port)
 	/* Hasn't asked to be notified about any bgworkers yet */
 	bn->bgworker_notify = false;
 
+	/* Create socket pair for sending session sockets to the backend */
+	if (!dedicated_backend)
+	{
+		if (socketpair(AF_UNIX, SOCK_DGRAM, 0, session_pipe) < 0)
+			ereport(FATAL,
+					(errcode_for_file_access(),
+					 errmsg_internal("could not create socket pair for launching sessions: %m")));
+#ifdef WIN32
+		SessionPoolSock = session_pipe[0];
+#endif
+	}
 #ifdef EXEC_BACKEND
 	pid = backend_forkexec(port);
 #else							/* !EXEC_BACKEND */
 	pid = fork_process();
 	if (pid == 0)				/* child */
 	{
+		if (!dedicated_backend)
+		{
+			SessionPoolSock = session_pipe[0]; /* Use this socket for receiving client session socket descriptor */
+			close(session_pipe[1]); /* Close unused end of the pipe */
+		}
 		free(bn);
 
 		/* Detangle from postmaster */
@@ -4110,9 +4189,22 @@ BackendStartup(Port *port)
 	 * of backends.
 	 */
 	bn->pid = pid;
+	bn->session_send_sock = PGINVALID_SOCKET;
 	bn->bkend_type = BACKEND_TYPE_NORMAL;	/* Can change later to WALSND */
 	dlist_push_head(&BackendList, &bn->elem);
 
+	if (!dedicated_backend)
+	{
+		bn->session_send_sock = session_pipe[1]; /* Use this socket for sending client session socket descriptor */
+		closesocket(session_pipe[0]); /* Close unused end of the pipe */
+		if (pool->workers == NULL)
+			pool->workers = (Backend**)malloc(sizeof(Backend*)*SessionPoolSize);
+		bn->worker_id = pool->n_workers++;
+		pool->workers[bn->worker_id] = bn;
+		bn->session_pool_id = session_pool_id;
+		elog(DEBUG2, "Start %d-th worker in session pool %d pid %d",
+			 pool->n_workers, session_pool_id, pid);
+	}
 #ifdef EXEC_BACKEND
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
@@ -4299,7 +4391,7 @@ BackendInitialize(Port *port)
 	 * Receive the startup packet (which might turn out to be a cancel request
 	 * packet).
 	 */
-	status = ProcessStartupPacket(port, false);
+	status = ProcessStartupPacket(port, false, TopMemoryContext);
 
 	/*
 	 * Stop here if it was bad or a cancel packet.  ProcessStartupPacket
@@ -6033,6 +6125,9 @@ save_backend_variables(BackendParameters *param, Port *port,
 	if (!write_inheritable_socket(&param->portsocket, port->sock, childPid))
 		return false;
 
+	if (!write_inheritable_socket(&param->sessionsocket, SessionPoolSock, childPid))
+		return false;
+
 	strlcpy(param->DataDir, DataDir, MAXPGPATH);
 
 	memcpy(&param->ListenSocket, &ListenSocket, sizeof(ListenSocket));
@@ -6265,6 +6360,7 @@ restore_backend_variables(BackendParameters *param, Port *port)
 {
 	memcpy(port, &param->port, sizeof(Port));
 	read_inheritable_socket(&port->sock, &param->portsocket);
+	read_inheritable_socket(&SessionPoolSock, &param->sessionsocket);
 
 	SetDataDir(param->DataDir);
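
A note on the dispatch mechanism above: every worker backend gets its own
SOCK_DGRAM socketpair, so a descriptor message sent by the postmaster is
delivered to exactly one worker and message boundaries are preserved; the
round-robin index only selects which pipe the postmaster writes to.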
 
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index e6706f7..0f62792 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -76,6 +76,7 @@ struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* head of a list of free events, linked through "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -129,9 +130,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -562,6 +563,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -667,9 +669,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (latch)
 	{
@@ -690,8 +694,19 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->nevents += 1;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -718,15 +733,38 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
 /*
+ * Remove event with specified socket descriptor
+ */
+void DeleteWaitEventFromSet(WaitEventSet *set, pgsocket fd)
+{
+	int i, n = set->nevents;
+	for (i = 0; i < n; i++)
+	{
+		WaitEvent  *event = &set->events[i];
+		if (event->fd == fd)
+		{
+#if defined(WAIT_USE_EPOLL)
+			WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+			WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+			WaitEventAdjustWin32(set, event, true);
+#endif
+			break;
+		}
+	}
+}
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -774,9 +812,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -827,14 +865,33 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
 				 errmsg("epoll_ctl() failed: %m")));
+
+	if (action == EPOLL_CTL_DEL)
+	{
+		int pos = event->pos;
+		event->fd = PGINVALID_SOCKET;
+		set->nevents -= 1;
+		event->pos = set->free_events;
+		set->free_events = pos;
+	}
 }
 #endif
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	int pos = event->pos;
+	struct pollfd *pollfd = &set->pollfds[pos];
+
+	if (remove)
+	{
+		set->nevents -= 1;
+		*pollfd = set->pollfds[set->nevents];
+		set->events[pos] = set->events[set->nevents];
+		event->pos = pos;
+		return;
+	}
 
 	pollfd->revents = 0;
 	pollfd->fd = event->fd;
@@ -865,9 +922,25 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	int pos = event->pos;
+	HANDLE	   *handle = &set->handles[pos + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		set->nevents -= 1;
+		set->events[pos] = set->events[set->nevents];
+		*handle = set->handles[set->nevents + 1];
+		set->handles[set->nevents + 1] = WSA_INVALID_EVENT;
+		event->pos = pos;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -880,7 +953,7 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 	}
 	else
 	{
-		int			flags = FD_CLOSE;	/* always check for errors/EOF */
+		int flags = FD_CLOSE;	/* always check for errors/EOF */
 
 		if (event->events & WL_SOCKET_READABLE)
 			flags |= FD_READ;
@@ -897,8 +970,8 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 					 WSAGetLastError());
 		}
 		if (WSAEventSelect(event->fd, *handle, flags) != 0)
-			elog(ERROR, "failed to set up event for socket: error code %u",
-				 WSAGetLastError());
+			elog(ERROR, "failed to set up event for socket %p: error code %u",
+				 event->fd, WSAGetLastError());
 
 		Assert(event->fd != PGINVALID_SOCKET);
 	}
@@ -1296,7 +1369,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	{
 		if (cur_event->reset)
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
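
The free list added to WaitEventSet by this patch is intrusive: a freed
slot's "pos" field holds the index of the next free slot and -1 terminates
the list, so no extra allocation is needed. A self-contained sketch of the
idea (illustrative only, not the actual WaitEventSet code):

    #include <stdio.h>

    typedef struct { int pos; } Slot;

    int main(void)
    {
        Slot    slots[4];
        int     free_head = -1;   /* plays the role of set->free_events */
        int     nevents = 0;
        int     slot;

        /* add two events: slots are handed out sequentially */
        slots[nevents].pos = nevents; nevents++;   /* slot 0 */
        slots[nevents].pos = nevents; nevents++;   /* slot 1 */

        /* delete the event in slot 0: push the slot onto the free list */
        slots[0].pos = free_head;
        free_head = 0;
        nevents--;

        /* add another event: the freed slot is reused instead of a new one */
        if (free_head >= 0)
        {
            slot = free_head;
            free_head = slots[slot].pos;   /* unlink from the free list */
        }
        else
            slot = nevents;
        slots[slot].pos = slot;
        nevents++;

        printf("reused slot %d\n", slot);  /* prints: reused slot 0 */
        return 0;
    }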
 
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index ddc3ec8..08e18e4 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -40,6 +40,7 @@
 #include "access/printtup.h"
 #include "access/xact.h"
 #include "catalog/pg_type.h"
+#include "catalog/namespace.h"
 #include "commands/async.h"
 #include "commands/prepare.h"
 #include "libpq/libpq.h"
@@ -75,9 +76,9 @@
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
 #include "utils/timestamp.h"
+#include "utils/builtins.h"
 #include "mb/pg_wchar.h"
 
-
 /* ----------------
  *		global variables
  * ----------------
@@ -98,6 +99,10 @@ int			max_stack_depth = 100;
 /* wait N seconds to allow attach from a debugger */
 int			PostAuthDelay = 0;
 
+/* Local socket for redirecting sessions to the backends */
+pgsocket    SessionPoolSock = PGINVALID_SOCKET;
+/* Pointer to the active session */
+SessionContext* ActiveSession;
 
 
 /* ----------------
@@ -169,6 +174,12 @@ static ProcSignalReason RecoveryConflictReason;
 static MemoryContext row_description_context = NULL;
 static StringInfoData row_description_buf;
 
+static WaitEventSet*   SessionPool;    /* Set of all session sockets */
+static int64           SessionCount;   /* Number of sessions */
+static Port*           BackendPort;    /* Reference to the original port of this backend, created when the backend was
+										* launched. The session using this port may already be terminated, but since the
+										* port is allocated in TopMemoryContext, its content is still valid and is used
+										* as a template for the ports of new sessions */
+
 /* ----------------------------------------------------------------
  *		decls for routines only used in this file
  * ----------------------------------------------------------------
@@ -194,6 +205,27 @@ static void log_disconnections(int code, Datum arg);
 static void enable_statement_timeout(void);
 static void disable_statement_timeout(void);
 
+/*
+ * Generate session ID unique within this backend
+ */
+static char* CreateSessionId(void)
+{
+	char buf[64];
+	pg_lltoa(++SessionCount, buf);
+	return pstrdup(buf);
+}
+
+/*
+ * Free all memory associated with session and delete session object itself
+ */
+static void DeleteSession(SessionContext* session)
+{
+	elog(DEBUG1, "Delete session %p, id=%s, memory context=%p", session, session->id, session->memory);
+	RestoreSessionGUCs(session);
+	ReleaseSessionGUCs(session);
+	MemoryContextDelete(session->memory);
+	free(session);
+}
 
 /* ----------------------------------------------------------------
  *		routines to obtain user input
@@ -1232,6 +1264,12 @@ exec_parse_message(const char *query_string,	/* string to execute */
 	bool		save_log_statement_stats = log_statement_stats;
 	char		msec_str[32];
 
+	if (ActiveSession && stmt_name[0] != '\0')
+	{
+		/* Make prepared statement names unique per session when the built-in session pool is used */
+		stmt_name = psprintf("%s.%s", ActiveSession->id, stmt_name);
+	}
+
 	/*
 	 * Report query to various monitoring facilities.
 	 */
@@ -1503,6 +1541,12 @@ exec_bind_message(StringInfo input_message)
 	portal_name = pq_getmsgstring(input_message);
 	stmt_name = pq_getmsgstring(input_message);
 
+	if (ActiveSession && stmt_name[0] != '\0')
+	{
+		/* Make prepared statement names unique per session when the built-in session pool is used */
+		stmt_name = psprintf("%s.%s", ActiveSession->id, stmt_name);
+	}
+
 	ereport(DEBUG2,
 			(errmsg("bind %s to %s",
 					*portal_name ? portal_name : "<unnamed>",
@@ -2325,6 +2369,12 @@ exec_describe_statement_message(const char *stmt_name)
 	CachedPlanSource *psrc;
 	int			i;
 
+	if (ActiveSession && stmt_name[0] != '\0')
+	{
+		/* Make prepared statement names unique per session when the built-in session pool is used */
+		stmt_name = psprintf("%s.%s", ActiveSession->id, stmt_name);
+	}
+
 	/*
 	 * Start up a transaction command. (Note that this will normally change
 	 * current memory context.) Nothing happens if we are already in one.
@@ -3603,7 +3653,6 @@ process_postgres_switches(int argc, char *argv[], GucContext ctx,
 #endif
 }
 
-
 /* ----------------------------------------------------------------
  * PostgresMain
  *	   postgres main loop -- all backends, interactive or otherwise start here
@@ -3654,6 +3703,21 @@ PostgresMain(int argc, char *argv[],
 							progname)));
 	}
 
+	/* Assign session for this backend in case of session pooling */
+	if (SessionPoolSize != 0)
+	{
+		MemoryContext oldcontext;
+		ActiveSession = (SessionContext*)calloc(1, sizeof(SessionContext));
+		ActiveSession->memory = AllocSetContextCreate(TopMemoryContext,
+													   "SessionMemoryContext",
+													   ALLOCSET_DEFAULT_SIZES);
+		oldcontext = MemoryContextSwitchTo(ActiveSession->memory);
+		ActiveSession->id = CreateSessionId();
+		ActiveSession->port = MyProcPort;
+		BackendPort = MyProcPort;
+		MemoryContextSwitchTo(oldcontext);
+	}
+
 	/* Acquire configuration parameters, unless inherited from postmaster */
 	if (!IsUnderPostmaster)
 	{
@@ -3783,7 +3847,7 @@ PostgresMain(int argc, char *argv[],
 	 * ... else we'd need to copy the Port data first.  Also, subsidiary data
 	 * such as the username isn't lost either; see ProcessStartupPacket().
 	 */
-	if (PostmasterContext)
+	if (PostmasterContext && SessionPoolSize == 0)
 	{
 		MemoryContextDelete(PostmasterContext);
 		PostmasterContext = NULL;
@@ -4069,6 +4133,152 @@ PostgresMain(int argc, char *argv[],
 
 			ReadyForQuery(whereToSendOutput);
 			send_ready_for_query = false;
+
+			/*
+			 * Here we multiplex client sessions if session pooling is enabled.
+			 * Since pooling is done at transaction level, rescheduling happens
+			 * only when we are not inside a transaction.  We also must not switch
+			 * sessions while the current client still has buffered input
+			 * (pq_available_bytes() != 0): already-received messages have to be
+			 * served before polling the sockets.
+			 */
+			if (SessionPoolSock != PGINVALID_SOCKET && !IsTransactionState() && pq_available_bytes() == 0)
+			{
+				WaitEvent ready_client;
+				if (SessionPool == NULL)
+				{
+					/* Construct wait event set if not constructed yet */
+					SessionPool = CreateWaitEventSet(TopMemoryContext, MaxSessions+3);
+					/* Add event to detect postmaster death */
+					AddWaitEventToSet(SessionPool, WL_POSTMASTER_DEATH, PGINVALID_SOCKET, NULL, ActiveSession);
+					/* Add event for backends latch */
+					AddWaitEventToSet(SessionPool, WL_LATCH_SET, PGINVALID_SOCKET, MyLatch, ActiveSession);
+					/* Add event for accepting new sessions */
+					AddWaitEventToSet(SessionPool, WL_SOCKET_READABLE, SessionPoolSock, NULL, ActiveSession);
+					/* Add event for current session */
+					AddWaitEventToSet(SessionPool, WL_SOCKET_READABLE, MyProcPort->sock, NULL, ActiveSession);
+				}
+			  ChooseSession:
+				DoingCommandRead = true;
+				/* Select a client session that is ready to send a new query */
+				if (WaitEventSetWait(SessionPool, -1, &ready_client, 1, PG_WAIT_CLIENT) != 1)
+				{
+					/* TODO: do some error recovery here */
+					elog(FATAL, "Failed to poll client sessions");
+				}
+				CHECK_FOR_INTERRUPTS();
+				DoingCommandRead = false;
+
+				if (ready_client.events & WL_POSTMASTER_DEATH)
+					ereport(FATAL,
+							(errcode(ERRCODE_ADMIN_SHUTDOWN),
+							 errmsg("terminating connection due to unexpected postmaster exit")));
+
+				if (ready_client.events & WL_LATCH_SET)
+				{
+					ResetLatch(MyLatch);
+					ProcessClientReadInterrupt(true);
+					goto ChooseSession;
+				}
+
+				if (ready_client.fd == SessionPoolSock)
+				{
+					/* Here we handle the case of attaching a new session */
+					int		 status;
+					SessionContext* session;
+					StringInfoData buf;
+					Port*    port;
+					pgsocket sock;
+					MemoryContext oldcontext;
+
+					sock = pg_recv_sock(SessionPoolSock);
+					if (sock == PGINVALID_SOCKET)
+						elog(FATAL, "Failed to receive session socket: %m");
+
+					session = (SessionContext*)calloc(1, sizeof(SessionContext));
+					session->memory = AllocSetContextCreate(TopMemoryContext,
+															"SessionMemoryContext",
+															ALLOCSET_DEFAULT_SIZES);
+					oldcontext = MemoryContextSwitchTo(session->memory);
+					port = palloc(sizeof(Port));
+					memcpy(port, BackendPort, sizeof(Port));
+
+					/*
+					 * Receive the startup packet (which might turn out to be a cancel request
+					 * packet).
+					 */
+					port->sock = sock;
+					session->port = port;
+					session->id = CreateSessionId();
+
+					MyProcPort = port;
+					status = ProcessStartupPacket(port, false, session->memory);
+					MemoryContextSwitchTo(oldcontext);
+
+					/*
+					 * TODO: Currently we assume that all sessions are accessing the same
+					 * database under the same user.  Just report an error if it is not true.
+					 * Note: compare against BackendPort, because MyProcPort has just been
+					 * set to the new session's port above.
+					 */
+					if (strcmp(port->database_name, BackendPort->database_name) != 0 ||
+						strcmp(port->user_name, BackendPort->user_name) != 0)
+					{
+						elog(FATAL, "Failed to open session (dbname=%s user=%s) in backend %d (dbname=%s user=%s)",
+							 port->database_name, port->user_name,
+							 MyProcPid, BackendPort->database_name, BackendPort->user_name);
+					}
+					else if (status == STATUS_OK)
+					{
+						if (AddWaitEventToSet(SessionPool, WL_SOCKET_READABLE, sock, NULL, session) < 0)
+						{
+							elog(WARNING, "Too many pooled sessions: %d", MaxSessions);
+						}
+						else
+						{
+							elog(DEBUG2, "Start new session %d in backend %d for database %s user %s",
+								 (int)sock, MyProcPid, port->database_name, port->user_name);
+							RestoreSessionGUCs(ActiveSession);
+							ActiveSession = session;
+							SetCurrentStatementStartTimestamp();
+							StartTransactionCommand();
+							PerformAuthentication(MyProcPort);
+							CommitTransactionCommand();
+
+							/*
+							 * Send GUC options to the client
+							 */
+							BeginReportingGUCOptions();
+
+							/*
+							 * Send this backend's cancellation info to the frontend.
+							 */
+							pq_beginmessage(&buf, 'K');
+							pq_sendint32(&buf, (int32) MyProcPid);
+							pq_sendint32(&buf, (int32) MyCancelKey);
+							pq_endmessage(&buf);
+
+							/* Need not flush since ReadyForQuery will do it. */
+							send_ready_for_query = true;
+							continue;
+						}
+					}
+					/*
+					 * Error while processing the startup packet.
+					 * Reject this session and return to polling the session sockets.
+					 */
+					DeleteSession(session);
+					elog(LOG, "Session startup failed");
+					closesocket(sock);
+					goto ChooseSession;
+				}
+				else
+				{
+					SessionContext* newSession = (SessionContext*)ready_client.user_data;
+					if (ActiveSession != newSession)
+					{
+						elog(DEBUG2, "Switch to session %d in backend %d", ready_client.fd, MyProcPid);
+						RestoreSessionGUCs(ActiveSession);
+						ActiveSession = newSession;
+						RestoreSessionGUCs(ActiveSession);
+						MyProcPort = ActiveSession->port;
+						SetTempNamespaceState(ActiveSession->tempNamespace, ActiveSession->tempToastNamespace);
+					}
+				}
+			}
 		}
 
 		/*
@@ -4350,6 +4560,39 @@ PostgresMain(int argc, char *argv[],
 				 * it will fail to be called during other backend-shutdown
 				 * scenarios.
 				 */
+
+				if (SessionPool)
+				{
+					/*
+					 * With session pooling, close the session but do not terminate the
+					 * backend, even if there are no more sessions in it.  Keeping the
+					 * backend alive prevents redundant process launches when a client
+					 * repeatedly opens and closes connections to the database.  The
+					 * maximum number of launched backends under connection pooling is
+					 * intended to be optimal for this system and workload, so there is
+					 * no reason to try to reduce it when there are no active sessions.
+					 */
+					DeleteWaitEventFromSet(SessionPool, MyProcPort->sock);
+					elog(DEBUG1, "Close session %d in backend %d", MyProcPort->sock, MyProcPid);
+
+					pq_getmsgend(&input_message);
+					if (pq_is_reading_msg())
+						pq_endmsgread();
+
+					closesocket(MyProcPort->sock);
+					MyProcPort->sock = PGINVALID_SOCKET;
+					MyProcPort = NULL;
+
+					if (ActiveSession)
+					{
+						DropSessionPreparedStatements(ActiveSession->id);
+						DeleteSession(ActiveSession);
+						ActiveSession = NULL;
+					}
+					whereToSendOutput = DestRemote;
+					/* Need to perform rescheduling to some other session or accept new session */
+					goto ChooseSession;
+				}
+				elog(DEBUG1, "Terminate backend %d", MyProcPid);
 				proc_exit(0);
 
 			case 'd':			/* copy data */
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 54fa4a3..14fd972 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -120,7 +120,10 @@ int			maintenance_work_mem = 16384;
  * register background workers.
  */
 int			NBuffers = 1000;
+int			SessionPoolSize = 0;
+int			SessionPoolPorts = 0;
 int			MaxConnections = 90;
+int			MaxSessions = 1000;
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c
index f9b3309..571c80f 100644
--- a/src/backend/utils/init/postinit.c
+++ b/src/backend/utils/init/postinit.c
@@ -65,7 +65,7 @@
 
 static HeapTuple GetDatabaseTuple(const char *dbname);
 static HeapTuple GetDatabaseTupleByOid(Oid dboid);
-static void PerformAuthentication(Port *port);
+void PerformAuthentication(Port *port);
 static void CheckMyDatabase(const char *name, bool am_superuser);
 static void InitCommunication(void);
 static void ShutdownPostgres(int code, Datum arg);
@@ -180,7 +180,7 @@ GetDatabaseTupleByOid(Oid dboid)
  *
  * returns: nothing.  Will not return at all if there's any failure.
  */
-static void
+void
 PerformAuthentication(Port *port)
 {
 	/* This should be set already, but let's make sure */
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 72f6be3..f82c2cb 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -1871,6 +1871,44 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"max_sessions", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximal number of client sessions which can be handled by one backend if session pooling is switched on. "
+						 "So the maximal number of client connections is session_pool_size*max_sessions.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"session_pool_size", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends and the maximal number of backends is determined by this parameter. "
+						 "Launched backends are never terminated even in case of no active sessions.")
+		},
+		&SessionPoolSize,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"session_pool_ports", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+		 gettext_noop("Number of session ports = number of session pools."),
+		 gettext_noop("Number of extra ports which PostgreSQL will listen on to accept client sessions. Each such port has a separate session pool. "
+					  "It is intended that each port corresponds to some particular database/user combination, so that all backends in this session "
+					  "pool will handle connections accessing this database. If session_pool_ports is non-zero then postmaster will always spawn dedicated (non-pooling) "
+					  "backends at the main Postgres port. If session_pool_ports is zero and session_pool_size is not zero, then sessions (pooled connections) will also be "
+					  "accepted at the main port. Session pool ports are allocated sequentially: if the Postgres main port is 5432 and session_pool_ports is 2, "
+					  "then ports 5433 and 5434 will be used for connection pooling.")
+	    },
+		&SessionPoolPorts,
+		0, 0, MAX_SESSION_PORTS,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
 			NULL
@@ -5104,6 +5142,95 @@ NewGUCNestLevel(void)
 }
 
 /*
+ * Install this session's saved GUC values, swapping them with the currently
+ * active values so that a second call restores the previous state
+ */
+void
+RestoreSessionGUCs(SessionContext* session)
+{
+	SessionGUC* sg;
+	if (session == NULL)
+		return;
+	for (sg = session->gucs; sg != NULL; sg = sg->next)
+	{
+		void* old_extra = sg->var->extra;
+		sg->var->extra = sg->val.extra;
+		switch (sg->var->vartype)
+		{
+		  case PGC_BOOL:
+		  {
+			  struct config_bool *conf = (struct config_bool*)sg->var;
+			  bool oldval = *conf->variable;
+			  *conf->variable = sg->val.val.boolval;
+			  if (conf->assign_hook)
+				  conf->assign_hook(sg->val.val.boolval, sg->val.extra);
+			  sg->val.val.boolval = oldval;
+			  break;
+		  }
+		  case PGC_INT:
+		  {
+			  struct config_int *conf = (struct config_int*)sg->var;
+			  int oldval = *conf->variable;
+			  *conf->variable = sg->val.val.intval;
+			  if (conf->assign_hook)
+				  conf->assign_hook(sg->val.val.intval, sg->val.extra);
+			  sg->val.val.intval = oldval;
+			  break;
+		  }
+		  case PGC_REAL:
+		  {
+			  struct config_real *conf = (struct config_real*)sg->var;
+			  double oldval = *conf->variable;
+			  *conf->variable = sg->val.val.realval;
+			  if (conf->assign_hook)
+				  conf->assign_hook(sg->val.val.realval, sg->val.extra);
+			  sg->val.val.realval = oldval;
+			  break;
+		  }
+		  case PGC_STRING:
+		  {
+			  struct config_string *conf = (struct config_string*)sg->var;
+			  char* oldval = *conf->variable;
+			  *conf->variable = sg->val.val.stringval;
+			  if (conf->assign_hook)
+				  conf->assign_hook(sg->val.val.stringval, sg->val.extra);
+			  sg->val.val.stringval = oldval;
+			  break;
+		  }
+		  case PGC_ENUM:
+		  {
+			  struct config_enum *conf = (struct config_enum*)sg->var;
+			  int oldval = *conf->variable;
+			  *conf->variable = sg->val.val.enumval;
+			  if (conf->assign_hook)
+				  conf->assign_hook(sg->val.val.enumval, sg->val.extra);
+			  sg->val.val.enumval = oldval;
+			  break;
+		  }
+		}
+		sg->val.extra = old_extra;
+	}
+}
+
+/*
+ * Deallocate memory for session GUCs
+ */
+void
+ReleaseSessionGUCs(SessionContext* session)
+{
+	SessionGUC* sg;
+	for (sg = session->gucs; sg != NULL; sg = sg->next)
+	{
+		if (sg->val.extra)
+			set_extra_field(sg->var, &sg->val.extra, NULL);
+		if (sg->var->vartype == PGC_STRING)
+		{
+			struct config_string* conf = (struct config_string*)sg->var;
+			set_string_field(conf, &sg->val.val.stringval, NULL);
+		}
+	}
+}
+
+/*
  * Do GUC processing at transaction or subtransaction commit or abort, or
  * when exiting a function that has proconfig settings, or when undoing a
  * transient assignment to some GUC variables.  (The name is thus a bit of
@@ -5172,7 +5299,42 @@ AtEOXact_GUC(bool isCommit, int nestLevel)
 				else if (stack->state == GUC_SET)
 				{
 					/* we keep the current active value */
-					discard_stack_value(gconf, &stack->prior);
+					if (ActiveSession)
+					{
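+						/*
+						 * With session pooling, instead of discarding the prior
+						 * value, remember it in the session's GUC list so that
+						 * RestoreSessionGUCs() can swap it back in when this
+						 * session is next rescheduled onto the backend.
+						 */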
+						SessionGUC* sg;
+						for (sg = ActiveSession->gucs; sg != NULL && sg->var != gconf; sg = sg->next);
+						if (sg == NULL)
+						{
+							sg = MemoryContextAllocZero(ActiveSession->memory,
+														sizeof(SessionGUC));
+							sg->var = gconf;
+							sg->next = ActiveSession->gucs;
+							ActiveSession->gucs = sg;
+						}
+						switch (gconf->vartype)
+						{
+						  case PGC_BOOL:
+							sg->val.val.boolval = stack->prior.val.boolval;
+							break;
+						  case PGC_INT:
+							sg->val.val.intval = stack->prior.val.intval;
+							break;
+						  case PGC_REAL:
+							sg->val.val.realval = stack->prior.val.realval;
+							break;
+						  case PGC_STRING:
+							sg->val.val.stringval = stack->prior.val.stringval;
+							break;
+						  case PGC_ENUM:
+							sg->val.val.enumval = stack->prior.val.enumval;
+							break;
+						}
+						sg->val.extra = stack->prior.extra;
+					}
+					else
+					{
+						discard_stack_value(gconf, &stack->prior);
+					}
 				}
 				else			/* must be GUC_LOCAL */
 					restorePrior = true;
@@ -5197,8 +5359,8 @@ AtEOXact_GUC(bool isCommit, int nestLevel)
 
 					case GUC_SET:
 						/* next level always becomes SET */
-						discard_stack_value(gconf, &stack->prior);
-						if (prev->state == GUC_SET_LOCAL)
+						discard_stack_value(gconf, &stack->prior);
+						if (prev->state == GUC_SET_LOCAL)
 							discard_stack_value(gconf, &prev->masked);
 						prev->state = GUC_SET;
 						break;
diff --git a/src/include/commands/prepare.h b/src/include/commands/prepare.h
index ffec029..cb5f8d4 100644
--- a/src/include/commands/prepare.h
+++ b/src/include/commands/prepare.h
@@ -56,5 +56,6 @@ extern TupleDesc FetchPreparedStatementResultDesc(PreparedStatement *stmt);
 extern List *FetchPreparedStatementTargetList(PreparedStatement *stmt);
 
 extern void DropAllPreparedStatements(void);
+extern void DropSessionPreparedStatements(char const* sessionId);
 
 #endif							/* PREPARE_H */
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 2e7725d..9169b21 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -71,6 +71,7 @@ extern int	pq_getbyte(void);
 extern int	pq_peekbyte(void);
 extern int	pq_getbyte_if_available(unsigned char *c);
 extern int	pq_putbytes(const char *s, size_t len);
+extern int  pq_available_bytes(void);
 
 /*
  * prototypes for functions in be-secure.c
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 54ee273..66d7e33 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -157,6 +157,9 @@ extern PGDLLIMPORT char *DataDir;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int SessionPoolPorts;
 extern PGDLLIMPORT int max_worker_processes;
 extern int	max_parallel_workers;
 
@@ -175,6 +178,8 @@ extern char pkglib_path[];
 extern char postgres_exec_path[];
 #endif
 
+#define MAX_SESSION_PORTS	8
+
 /*
  * done in storage/backendid.h for now.
  *
@@ -420,6 +425,7 @@ extern void InitializeMaxBackends(void);
 extern void InitPostgres(const char *in_dbname, Oid dboid, const char *username,
 			 Oid useroid, char *out_dbname);
 extern void BaseInit(void);
+extern void PerformAuthentication(struct Port *port);
 
 /* in utils/init/miscinit.c */
 extern bool IgnoreSystemIndexes;
diff --git a/src/include/port.h b/src/include/port.h
index 3e528fa..8a0ac98 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index d31c28f..e667434 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -447,6 +447,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -456,6 +457,7 @@ int			pgwin32_connect(SOCKET s, const struct sockaddr *name, int namelen);
 int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptfds, const struct timeval *timeout);
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 const char *pgwin32_socket_strerror(int err);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index 1877eef..c9527c9 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -62,6 +62,9 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+struct Port;
+extern int	ProcessStartupPacket(struct Port *port, bool SSLdone, MemoryContext memctx);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index a4bcb48..10f30d1 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -176,6 +176,8 @@ extern int WaitLatch(volatile Latch *latch, int wakeEvents, long timeout,
 extern int WaitLatchOrSocket(volatile Latch *latch, int wakeEvents,
 				  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, pgsocket fd);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index 5c19a61..11eded3 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -21,6 +21,7 @@
 #include "storage/lock.h"
 #include "storage/pg_sema.h"
 #include "storage/proclist_types.h"
+#include "utils/guc_tables.h"
 
 /*
  * Each backend advertises up to PGPROC_MAX_CACHED_SUBXIDS TransactionIds
@@ -273,6 +274,29 @@ extern PGDLLIMPORT PROC_HDR *ProcGlobal;
 
 extern PGPROC *PreparedXactProcs;
 
+typedef struct SessionGUC
+{
+	struct SessionGUC* next;
+	config_var_value   val;
+	struct config_generic *var;
+} SessionGUC;
+
+/*
+ * Information associated with client session
+ */
+typedef struct SessionContext
+{
+	MemoryContext memory; /* memory context used for global session data (replacement of TopMemoryContext) */
+	struct Port* port;           /* connection port */
+	char*        id;             /* session identifier used to construct unique prepared statement names */
+	Oid          tempNamespace;  /* temporary namespace */
+	Oid          tempToastNamespace;  /* temporary toast namespace */
+	SessionGUC*  gucs;
+} SessionContext;
+
+
+extern PGDLLIMPORT SessionContext *ActiveSession; 
+
 /* Accessor for PGPROC given a pgprocno. */
 #define GetPGProcByNumber(n) (&ProcGlobal->allProcs[(n)])
 
diff --git a/src/include/tcop/tcopprot.h b/src/include/tcop/tcopprot.h
index 63b4e48..191eeaa 100644
--- a/src/include/tcop/tcopprot.h
+++ b/src/include/tcop/tcopprot.h
@@ -34,6 +34,7 @@ extern CommandDest whereToSendOutput;
 extern PGDLLIMPORT const char *debug_query_string;
 extern int	max_stack_depth;
 extern int	PostAuthDelay;
+extern pgsocket SessionPoolSock;
 
 /* GUC-configurable parameters */
 
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index 77daa5a..86e89e8 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -394,6 +394,12 @@ extern Size EstimateGUCStateSpace(void);
 extern void SerializeGUCState(Size maxsize, char *start_address);
 extern void RestoreGUCState(void *gucstate);
 
+/* Session pooling support functions */
+struct SessionContext;
+extern void RestoreSessionGUCs(struct SessionContext* session);
+extern void ReleaseSessionGUCs(struct SessionContext* session);
+
+
 /* Support for messages reported from GUC check hooks */
 
 extern PGDLLIMPORT char *GUC_check_errmsg_string;
#53Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Konstantin Knizhnik (#52)
Re: Built-in connection pooling

On 06.04.2018 20:03, Konstantin Knizhnik wrote:

On 06.04.2018 20:00, Konstantin Knizhnik wrote:

Attached please find a new version of the patch with several bug fixes
+ support of more than one session pool associated with different
ports.
Now it is possible to make postmaster listen on several ports for
accepting pooled connections, while leaving the main Postgres port for
dedicated backends.
Each session pool is intended to be used for a particular database/user
combination.

Sorry, wrong patch was attached.

Development of built-in connection pooling will be continued in
https://github.com/postgrespro/postgresql.builtin_pool.git
I am not going to send new patches to the hackers mailing list any more.
The last added feature is support of idle_in_transaction_session_timeout,
which is especially critical for the builtin pool with transaction-level
scheduling, because a long transaction can block other sessions executed by
this backend.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#54Nikolay Samokhvalov
samokhvalov@gmail.com
In reply to: Konstantin Knizhnik (#53)
Re: Built-in connection pooling

On Fri, Apr 13, 2018 at 2:59 AM, Konstantin Knizhnik <
k.knizhnik@postgrespro.ru> wrote:

Development of built-in connection pooling will be continued in
https://github.com/postgrespro/postgresql.builtin_pool.git
I am not going to send new patches to the hackers mailing list any more.

Why?

#55Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Nikolay Samokhvalov (#54)
Re: Built-in connection pooling

On 13.04.2018 19:07, Nikolay Samokhvalov wrote:

On Fri, Apr 13, 2018 at 2:59 AM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

Development of built-in connection pooling will be continued in
https://github.com/postgrespro/postgresql.builtin_pool.git
I am not going to send new patches to the hackers mailing list any more.

Why?

Just do not want to spam hackers with a lot of patches.
Also, since I received little feedback in this thread, I consider that this
topic is not so interesting to the community.

Please note that the built-in connection pool is in the conn_pool branch.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#56Nikolay Samokhvalov
samokhvalov@gmail.com
In reply to: Konstantin Knizhnik (#55)
Re: Built-in connection pooling

Understood.

One more question. Have you considered creating the pooling tool as a
separate, not built-in tool, but shipped with Postgres, like psql is
shipped in packages usually called “postgresql-client-XX”, which makes psql
the default tool to work in a terminal? I constantly hear the opinion from
various users that Postgres needs a “default”/official pooling tool.


#57Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Nikolay Samokhvalov (#56)
Re: Built-in connection pooling

On 17.04.2018 20:09, Nikolay Samokhvalov wrote:

Understood.

One more question. Have you considered creation of pooling tool as a
separate, not built-in tool, but being shipped with Postgres — like
psql is shipped in packages usually called “postgresql-client-XX”
which makes psql the default tool to work in terminal? I constantly
hear opinion from various users, that Postgres needs
“default”/official pooling tool.

There were a lot of discussions in hackers and in other mailing
lists/forums concerning PostgreSQL and connection pooling.
From the point of view of many PostgreSQL users whom I know myself,
the lack of a standard (built-in?) connection pooling is one of the main
drawbacks of PostgreSQL.
Right now we have pgbouncer, which is small, fast and reliable, but:
- Doesn't allow you to use prepared statements, temporary tables and
session variables.
- Is single threaded, so it becomes a bottleneck for a large (>100) number
of active connections.
- Can not be used for load balancing across hot standby replicas.

So if you have a lot of active connections, you will have to set up a pool
of pgbouncers.
There is also pgpool, which supports load balancing but doesn't perform
session pooling. So it has to be used together with pgbouncer.
So to be able to use Postgres in an enterprise system you will have to
set up a very complex pipeline of different tools.

Definitely we need some standard solution for it. As far as I know,
Yandex is now working on its own version of an external connection pooler
which can eliminate the single-threaded limitation of pgbouncer.
Unfortunately their presentation was not accepted for pgconf (as well as
my presentation about built-in connection pooling).

An external connection pooler definitely provides more flexibility than a
built-in connection pooler. It can be installed either at the client side,
at the server side, or somewhere between them.
Also it is more reliable, because it changes nothing in the Postgres
architecture.
But there are still use cases which can not be covered by an external
connection pooler.
The 1C company (the Russian SAP) mentioned at a presentation at PgConf.ru
2018 that the lack of internal pooling is the main limiting factor for
replacing MS-SQL with Postgres.
They have a lot of clients which never close connections. And they need
persistent sessions because of the wide use of temporary tables.
This is why 1C can not use pgbouncer. We are now trying to provide to them
a prototype version of Postgres with a builtin connection pool.
If the results of these experiments are successful, we will propose this
connection pooler to the community (but it is available right now, so
anybody who wants to can test it).


--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#58Heikki Linnakangas
hlinnaka@iki.fi
In reply to: Konstantin Knizhnik (#57)
Re: Built-in connection pooling

On 18/04/18 06:10, Konstantin Knizhnik wrote:

But there are still use cases which can not be covered by an external
connection pooler.

Can you name some? I understand that the existing external connection
poolers all have their limitations. But are there some fundamental
issues that can *only* be addressed by a built-in implementation?

For the record, I think an internal connection pool might be a good
idea. It would presumably be simpler to set up than an external one, for
example. But it depends a lot on the implementation. If we had an
internal connection pool, I would expect it to be very transparent to
the user, be simple to set up, and not have annoying limitations with
prepared statements, temporary tables, etc. that the existing external
ones have.

However, I suspect that dealing with *all* of the issues is going to be
hard and tedious. And if there are any significant gaps, things that
don't work correctly with the pooler, the patch will almost certainly be
rejected.

I'd recommend that you put your effort in improving the existing
external connection poolers. Which one is closest to suit your needs?
What's missing?

There are probably things we could do in the server, to help external
connection poolers. For example, some kind of a proxy authentication,
where the connection pooler could ask the backend to do authentication
on its behalf, so that you wouldn't need to re-implement the server-side
authentication code in the external pooler. Things like that.

- Heikki

#59Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Heikki Linnakangas (#58)
Re: Built-in connection pooling

On 18.04.2018 13:36, Heikki Linnakangas wrote:

On 18/04/18 06:10, Konstantin Knizhnik wrote:

But there are still use cases which can not be covered by an external
connection pooler.

Can you name some? I understand that the existing external connection
poolers all have their limitations. But are there some fundamental
issues that can *only* be addressed by a built-in implementation?

Well, maybe I missed something, but I do not know how to efficiently
support
1. Temporary tables
2. Prepared statements
3. Session GUCs
with any external connection pooler (with a pooling level other than session).

The problem with GUCs seems to be the easiest of these three: we can
just keep a list of GUC assignments and prepend it to each statement (a
rough sketch of the idea follows below). But it is not very efficient and
can cause some problems (for example, there are some statements which can
not be executed in a multi-statement context).
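
To illustrate with a minimal sketch (prepend_session_gucs is a hypothetical
name, not code from the patch; List, foreach and StringInfo are the usual
backend utilities):

#include "postgres.h"
#include "lib/stringinfo.h"
#include "nodes/pg_list.h"

/* Hypothetical sketch: replay the session's SET commands in front of a
 * statement before forwarding it, producing something like
 * "SET search_path = app; SET work_mem = '64MB'; SELECT ..." */
static char *
prepend_session_gucs(List *guc_assignments, const char *query)
{
	StringInfoData buf;
	ListCell   *lc;

	initStringInfo(&buf);
	foreach(lc, guc_assignments)
		appendStringInfo(&buf, "SET %s; ", (const char *) lfirst(lc));
	appendStringInfoString(&buf, query);
	return buf.data;
}

Each borrowed connection would then pay the cost of parsing the SET prefix
on every statement, which is the inefficiency mentioned above.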

The prepared statement problem can be fixed either by implementing a shared
plan cache, or by autoprepare (I have proposed a patch for it).

But concerning temporary tables I do not know any acceptable solution.

For the record, I think an internal connection pool might be a good
idea. It would presumably be simpler to set up than an external one,
for example. But it depends a lot on the implementation. If we had an
internal connection pool, I would expect it to be very transparent to
the user, be simple to set up, and not have annoying limitations with
prepared statements, temporary tables, etc. that the existing external
ones have.

However, I suspect that dealing with *all* of the issues is going to
be hard and tedious. And if there are any significant gaps, things
that don't work correctly with the pooler, the patch will almost
certainly be rejected.

I'd recommend that you put your effort in improving the existing
external connection poolers. Which one is closest to suit your needs?
What's missing?

The Yandex team is following this approach with their Odysseus (a
multithreaded version of pgbouncer with many of the pgbouncer issues fixed).
But it will not work for 1C, which needs to keep sessions (with
temporary tables, etc.) for a large number of clients which never close
connections.

There are probably things we could do in the server, to help external
connection poolers. For example, some kind of a proxy authentication,
where the connection pooler could ask the backend to do authentication
on its behalf, so that you wouldn't need to re-implement the
server-side authentication code in the external pooler. Things like that.

As far as I know most DBMSes have some kind of internal connection
pooling.
In Oracle, for example, you can create dedicated and non-dedicated backends.
I wonder why we do not want to have something similar in Postgres.
Any external connection pooler will be less convenient for users than an
internal pooler.
It may be more flexible, more error protected, more scalable, .... But
still it is an extra entity which adds extra overhead and can also be a
bottleneck or SPoF.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#60Craig Ringer
craig@2ndquadrant.com
In reply to: Konstantin Knizhnik (#59)
Re: Built-in connection pooling

On 18 April 2018 at 19:52, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

As far as I know most DBMSes have some kind of internal connection
pooling.
In Oracle, for example, you can create dedicated and non-dedicated backends.
I wonder why we do not want to have something similar in Postgres.

I want to, and I know many others do too.

But the entire PostgreSQL architecture makes it hard to do well, and
means it requires heavy changes to do it in a way that will be
maintainable and reliable.

Making it work, and making something maintainable and mergeable, are
two different things. Something I continue to struggle with myself.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#61David Fetter
david@fetter.org
In reply to: Konstantin Knizhnik (#59)
Re: Built-in connection pooling

On Wed, Apr 18, 2018 at 02:52:39PM +0300, Konstantin Knizhnik wrote:

The Yandex team is following this approach with their Odysseus
(a multithreaded version of pgbouncer with many of the pgbouncer issues
fixed).

Have they opened the source to Odysseus? If not, do they have plans to?

Best,
David.
--
David Fetter <david(at)fetter(dot)org> http://fetter.org/
Phone: +1 415 235 3778

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

#62Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Craig Ringer (#60)
Re: Built-in connection pooling

On 18.04.2018 16:09, Craig Ringer wrote:

On 18 April 2018 at 19:52, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

As far as I know most DBMSes have some kind of internal connection
pooling.
In Oracle, for example, you can create dedicated and non-dedicated backends.
I wonder why we do not want to have something similar in Postgres.

I want to, and I know many others do too.

But the entire PostgreSQL architecture makes it hard to do well, and
means it requires heavy changes to do it in a way that will be
maintainable and reliable.

Here I completely agree with you.
Now my prototype "works": it is able to correctly handle errors,
transaction rollbacks, long living transactions, ... but I am completely
sure that there are a lot of untested cases where it will work
incorrectly. But still I do not think that making built-in connection
pooling really reliable is something unreachable.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#63Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: David Fetter (#61)
Re: Built-in connection pooling

On 18.04.2018 16:24, David Fetter wrote:

On Wed, Apr 18, 2018 at 02:52:39PM +0300, Konstantin Knizhnik wrote:

The Yandex team is following this approach with their Odysseus
(a multithreaded version of pgbouncer with many of the pgbouncer issues
fixed).

Have they opened the source to Odysseus? If not, do they have plans to?

It is better to ask Vladimir Borodin (Yandex) about it.
But as far as I know - the answer is yes.
The Yandex policy is to make their products available to the community.
I just wonder why the community was not interested in knowing the details
of this project...

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#64Heikki Linnakangas
hlinnaka@iki.fi
In reply to: Konstantin Knizhnik (#59)
Re: Built-in connection pooling

On 18/04/18 07:52, Konstantin Knizhnik wrote:

On 18.04.2018 13:36, Heikki Linnakangas wrote:

On 18/04/18 06:10, Konstantin Knizhnik wrote:

But there are still use cases which can not be covered by an external
connection pooler.

Can you name some? I understand that the existing external connection
poolers all have their limitations. But are there some fundamental
issues that can *only* be addressed by a built-in implementation?

Well, maybe I missed something, but I do not know how to efficiently
support
1. Temporary tables
2. Prepared statements
3. Session GUCs
with any external connection pooler (with a pooling level other than session).

Me neither. What makes it easier to do these things in an internal
connection pooler? What could the backend do differently, to make these
easier to implement in an external pooler?

- Heikki

#65Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Heikki Linnakangas (#64)
Re: Built-in connection pooling

On 18.04.2018 16:41, Heikki Linnakangas wrote:

On 18/04/18 07:52, Konstantin Knizhnik wrote:

On 18.04.2018 13:36, Heikki Linnakangas wrote:

On 18/04/18 06:10, Konstantin Knizhnik wrote:

But there are still use cases which can not be covered by an external
connection pooler.

Can you name some? I understand that the existing external connection
poolers all have their limitations. But are there some fundamental
issues that can *only* be addressed by a built-in implementation?

Well, maybe I missed something, but I do not know how to efficiently
support
1. Temporary tables
2. Prepared statements
3. Session GUCs
with any external connection pooler (with a pooling level other than
session).

Me neither. What makes it easier to do these things in an internal
connection pooler? What could the backend do differently, to make
these easier to implement in an external pooler?

All these things are addressed now in my builtin connection pool
implementation:
1. Temporary tables are maintained by creating a private temporary
namespace for each session.
2. Prepared statements are supported by adding a unique session prefix to
each prepared statement name (so there is a single prepared statement cache
in the backend, but each session has its own prepared statements); see the
sketch below.
3. Each session maintains a list of updated GUCs, and they are
saved/restored on reschedule.
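
For illustration, a minimal sketch of idea (2), assuming a hypothetical
helper name (pooled_stmt_name is not the patch's actual code; psprintf() is
the backend's allocate-and-format utility):

#include "postgres.h"

/* Hypothetical helper: make a prepared statement name unique per session
 * by prefixing it with the session identifier, so that one backend-wide
 * statement cache can serve many pooled sessions. */
static char *
pooled_stmt_name(const char *session_id, const char *stmt_name)
{
	/* e.g. session "s42" preparing "fetch_user" stores "s42.fetch_user" */
	return psprintf("%s.%s", session_id, stmt_name);
}

This also matches the patch's DropSessionPreparedStatements(), which drops
all statements belonging to one session id when the session closes.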

It was not so difficult to implement all this stuff (the main troubles I
had were with GUCs), but it looks like none of them are possible for an
external connection pooler.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#66Vladimir Borodin
root@simply.name
In reply to: David Fetter (#61)
Re: Built-in connection pooling

On 18 Apr 2018, at 16:24, David Fetter <david@fetter.org> wrote:

On Wed, Apr 18, 2018 at 02:52:39PM +0300, Konstantin Knizhnik wrote:

The Yandex team is following this approach with their Odysseus
(a multithreaded version of pgbouncer with many of the pgbouncer issues
fixed).

Have they opened the source to Odysseus? If not, do they have plans to?

No, we haven't yet. Yep, we plan to do it by the end of May.


--
May the force be with you…
https://simply.name

#67Tsunakawa, Takayuki
tsunakawa.takay@jp.fujitsu.com
In reply to: Konstantin Knizhnik (#59)
RE: Built-in connection pooling

From: Konstantin Knizhnik [mailto:k.knizhnik@postgrespro.ru]
In Oracle, for example, you can create dedicated and non-dedicated backends.

I wonder why we do not want to have something similar in Postgres.

Yes, I want it, too. In addition to dedicated and shared server processes, Oracle provides Database Resident Connection Pooling (DRCP). I guessed you were inspired by this.

https://docs.oracle.com/cd/B28359_01/server.111/b28310/manproc002.htm#ADMIN12348

BTW, you are doing various great work -- autoprepare, multithreaded Postgres, built-in connection pooling, etc. etc., aren't you? Are you doing all of these alone?

Regards
Takayuki Tsunakawa

#68Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Tsunakawa, Takayuki (#67)
Re: Built-in connection pooling

On 19.04.2018 07:46, Tsunakawa, Takayuki wrote:

From: Konstantin Knizhnik [mailto:k.knizhnik@postgrespro.ru]
In Oracle, for example, you can create dedicated and non-dedicated backends.

I wonder why we do not want to have something similar in Postgres.

Yes, I want it, too. In addition to dedicated and shared server processes, Oracle provides Database Resident Connection Pooling (DRCP). I guessed you were inspired by this.

https://docs.oracle.com/cd/B28359_01/server.111/b28310/manproc002.htm#ADMIN12348

It seems that my connection pooling is closer to DRCP than to
shared servers.
It is not clear from this article what these 35KB per client connection
are used for...
It seems to be something similar to the session context used to
suspend/resume a session.
In my prototype I also maintain some per-session context to keep the values
of session-specific GUCs, the temporary namespace, ...
Definitely the pooled session memory footprint depends on the size of the
catalog, prepared statements, updated GUCs, ... but 10-100kb seems to be a
reasonable estimation.

BTW, you are doing various great work -- autoprepare, multithreaded Postgres, built-in connection pooling, etc. etc., aren't you? Are you doing all of these alone?

Yes, but there is a huge distance from a prototype to a product-ready
solution. And I definitely need some help here. This is why I had to
suspend further development of the multithreaded version of Postgres (it
looks like it is not considered a realistic project by the community).
But with builtin connection pooling the situation is better and I am going
to test it with some of our clients which are interested in this feature.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#69Dave Cramer
davecramer@gmail.com
In reply to: Konstantin Knizhnik (#68)
Re: Built-in connection pooling


Konstantin

It would be useful to test with the JDBC driver

We run into issues with many pool implementations due to our opinionated
nature

Thanks

Dave


#70Andres Freund
andres@anarazel.de
In reply to: Heikki Linnakangas (#58)
Re: Built-in connection pooling

On 2018-04-18 06:36:38 -0400, Heikki Linnakangas wrote:

On 18/04/18 06:10, Konstantin Knizhnik wrote:

But there are still use cases which can not be covered by an external
connection pooler.

Can you name some? I understand that the existing external connection
poolers all have their limitations. But are there some fundamental issues
that can *only* be addressed by a built-in implementation?

For the record, I think an internal connection pool might be a good idea. It
would presumably be simpler to set up than an external one, for example. But
it depends a lot on the implementation. If we had an internal connection
pool, I would expect it to be very transparent to the user, be simple to set
up, and not have annoying limitations with prepared statements, temporary
tables, etc. that the existing external ones have.

However, I suspect that dealing with *all* of the issues is going to be hard
and tedious. And if there are any significant gaps, things that don't work
correctly with the pooler, the patch will almost certainly be rejected.

I'd recommend that you put your effort in improving the existing external
connection poolers. Which one is closest to suit your needs? What's missing?

There are probably things we could do in the server, to help external
connection poolers. For example, some kind of a proxy authentication, where
the connection pooler could ask the backend to do authentication on its
behalf, so that you wouldn't need to re-implement the server-side
authentication code in the external pooler. Things like that.

FWIW, I think that's not the right course. We should work towards an
in-core pooler. There's very few postgres installations that don't need
one, and there's a lot of things that are very hard to do without closer
integration.

Greetings,

Andres Freund

#71Stephen Frost
sfrost@snowman.net
In reply to: Andres Freund (#70)
Re: Built-in connection pooling

Greetings,

* Andres Freund (andres@anarazel.de) wrote:

On 2018-04-18 06:36:38 -0400, Heikki Linnakangas wrote:

On 18/04/18 06:10, Konstantin Knizhnik wrote:

But there are still use cases which can not be covered by an external
connection pooler.

Can you name some? I understand that the existing external connection
poolers all have their limitations. But are there some fundamental issues
that can *only* be addressed by a built-in implementation?

For the record, I think an internal connection pool might be a good idea. It
would presumably be simpler to set up than an external one, for example. But
it depends a lot on the implementation. If we had an internal connection
pool, I would expect it to be very transparent to the user, be simple to set
up, and not have annoying limitations with prepared statements, temporary
tables, etc. that the existing external ones have.

However, I suspect that dealing with *all* of the issues is going to be hard
and tedious. And if there are any significant gaps, things that don't work
correctly with the pooler, the patch will almost certainly be rejected.

I'd recommend that you put your effort in improving the existing external
connection poolers. Which one is closest to suit your needs? What's missing?

There are probably things we could do in the server, to help external
connection poolers. For example, some kind of a proxy authentication, where
the connection pooler could ask the backend to do authentication on its
behalf, so that you wouldn't need to re-implement the server-side
authentication code in the external pooler. Things like that.

FWIW, I think that's not the right course. We should work towards an
in-core pooler. There's very few postgres installations that don't need
one, and there's a lot of things that are very hard to do without closer
integration.

I tend to agree with this, and things like trying to proxy authentication
are really not ideal, since it necessarily involves trusting another
system. Perhaps it'd be nice to be able to proxy auth cleanly, and in
some cases it may be required to have another system involved (I've
certainly seen cases of multi-layered pgbouncer), but I'd rather only do
that when we need to instead of almost immediately...

Thanks!

Stephen

#72Tom Lane
tgl@sss.pgh.pa.us
In reply to: Stephen Frost (#71)
Re: Built-in connection pooling

Stephen Frost <sfrost@snowman.net> writes:

Greetings,
* Andres Freund (andres@anarazel.de) wrote:

On 2018-04-18 06:36:38 -0400, Heikki Linnakangas wrote:

However, I suspect that dealing with *all* of the issues is going to be hard
and tedious. And if there are any significant gaps, things that don't work
correctly with the pooler, the patch will almost certainly be rejected.

FWIW, I think that's not the right course. We should work towards an
in-core pooler. There's very few postgres installations that don't need
one, and there's a lot of things that are very hard to do without closer
integration.

I tend to agree with this, and things like trying to proxy authentication
are really not ideal, since it necessarily involves trusting another
system.

FWIW, I concur with Heikki's position that we're going to have very high
standards for the transparency of any in-core pooler. Before trying to
propose a patch, it'd be a good idea to try to fix the perceived
shortcomings of some existing external pooler. Only after you can say
"there's nothing wrong with this that isn't directly connected to its
not being in-core" does it make sense to try to push the logic into core.

regards, tom lane

#73Christopher Browne
cbbrowne@gmail.com
In reply to: Dave Cramer (#69)
Re: Built-in connection pooling

On Thu, 19 Apr 2018 at 10:27, Dave Cramer <davecramer@gmail.com> wrote:

It would be useful to test with the JDBC driver

We run into issues with many pool implementations due to our opinionated
nature

Absolutely.

And Java developers frequently have a further opinionated nature on this...

A bunch of Java frameworks include connection pools...

1. BoneCP, which claims to be the "tuned to be fast" one
http://jolbox.com/

2. Apache Commons DBCP, which is the "we're Apache, we're everywhere!" one
http://commons.apache.org/dbcp/

3. c3p0 - easy-to-add Connection Pool
http://www.mchange.com/projects/c3p0

One of the things that they find likable is that having the connection
pool live in the framework alongside the application makes it easy to
attach hooks, so that the pool can do intelligent things based on
application-aware logic.

When we're in a "DB server centric" mindset, we'll have some particular
ideas as to what a pool ought to be able to do; if that doesn't include
their ideas at all, it'll lead to the Java guys thinking that what we
have is quaint and uninteresting.

I suspect that this disconnect and somewhat "great divide" is an extra
reason why proposals to bring connection pooling into core don't get
too far.
--
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"

#74Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#72)
Re: Built-in connection pooling

On 2018-04-19 15:01:24 -0400, Tom Lane wrote:

Only after you can say "there's nothing wrong with this that isn't
directly connected to its not being in-core" does it make sense to try
to push the logic into core.

I think there's plenty of things that don't really make sense solving
outside of postgres:
- additional added hop / context switches due to external pooler
- temporary tables
- prepared statements
- GUCs and other session state

I think there's at least one thing that we should attempt to make
easier for external pooler:
- proxy authorization

I think in an "ideal world" there's two kinds of poolers: dumb ones
further out from the database (for short lived processes, keeping the
total number of connections sane, etc) and then more intelligent ones
closer to the database.

Greetings,

Andres Freund

#75Tatsuo Ishii
ishii@sraoss.co.jp
In reply to: Andres Freund (#74)
Re: Built-in connection pooling

I think there's plenty of things that don't really make sense solving
outside of postgres:
- additional added hop / context switches due to external pooler

This only applies to external-process-type poolers (like Pgpool-II).

- temporary tables
- prepared statements
- GUCs and other session state

These only apply to "non session based" poolers, which share a
database connection among multiple client connections. "Session based"
connection poolers like Pgpool-II do not have these shortcomings.

One thing that neither a built-in nor an application-library-type pooler
(like JDBC) can do is handle multiple PostgreSQL servers.

I think there's at least one thing that we should attempt to make
easier for external pooler:
- proxy authorization

Yeah. Since SCRAM auth is implemented, some connection poolers
including Pgpool-II are struggling to adopt it.

Another thing PostgreSQL could do to make external poolers' lives easier
is to enhance the frontend/backend protocol so that the reply messages of
prepare etc. include portal/statement info. But apparently this needs
protocol changes.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp

#76Michael Paquier
michael@paquier.xyz
In reply to: Tatsuo Ishii (#75)
Re: Built-in connection pooling

On Fri, Apr 20, 2018 at 07:58:00AM +0900, Tatsuo Ishii wrote:

Yeah. Since SCRAM auth is implemented, some connection poolers
including Pgpool-II are struggling to adopt it.

Er, well. pgpool is also taking advantage of MD5 weaknesses... While
SCRAM fixes this class of problems, channel binding actually makes
this harder for poolers to deal with.
--
Michael

#77Tatsuo Ishii
ishii@sraoss.co.jp
In reply to: Michael Paquier (#76)
Re: Built-in connection pooling

On Fri, Apr 20, 2018 at 07:58:00AM +0900, Tatsuo Ishii wrote:

Yeah. Since SCRAM auth is implemented, some connection poolers
including Pgpool-II are struggling to adopt it.

Er, well. pgpool is also taking advantage of MD5 weaknesses... While
SCRAM fixes this class of problems, channel binding actually makes
this harder for poolers to deal with.

One of the Pgpool-II developers, Usama, is working hard to re-implement
SCRAM auth for the upcoming Pgpool-II 4.0: i.e. storing passwords (of
course in some encrypted form) in Pgpool-II.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp

#78Vladimir Sitnikov
sitnikov.vladimir@gmail.com
In reply to: Christopher Browne (#73)
Re: Built-in connection pooling

Christopher> One of the things that they find likable is that having the
Christopher> connection pool live in the framework alongside the application
Christopher> makes it easy to attach hooks so that the pool can do
Christopher> intelligent things based on application-aware logic.

I'm afraid I do not follow you. Can you please provide an example?

TL;DR:
1) I think in-application pooling would be required for performance reasons
in any case.
2) Out-of-application pooling (in-backend or in-the-middle) is likely
needed as well

JDBC clients use client-side connection pooling for performance reasons:

1) Connection setup does have overhead:
1.1) TCP connection takes time to init/close
1.2) Startup queries involve a couple of roundtrips: "startup packet", then
"SET extra_float_digits = 3", then "SET application_name = '...' "
2) Binary formats on the wire are tied to oids. Clients have to cache the
oids somehow, and "cache per connection" is the current approach.
3) Application threads tend to augment "application_name", "search_path",
etc for its own purposes, and it would slow the application down
significantly if JDBC driver reverted application_name/search_path/etc for
each and every "connection borrow".
4) I believe there's non-zero overhead for backend process startup

As Konstantin lists in the initial email, the problem is that the backend
itself does not scale well with lots of backend processes.
In other words: it is fine if PostgreSQL is accessed by a single Java
application, since the number of connections would be reasonable (limited
by the Java connection pool).
That, however, is not the case when the DB is accessed by lots of
applications (== lots of idle connections) and/or in case the application
is using short-lived connections (== an in-app pool is missing, which
forces backend processes to come and go).

Vladimir

#79Vladimir Sitnikov
sitnikov.vladimir@gmail.com
In reply to: Konstantin Knizhnik (#53)
Re: Built-in connection pooling

Development of built-in connection pooling will be continued in
https://github.com/postgrespro/postgresql.builtin_pool.git

The branch (as of 0020c44195992c6dce26baec354a5e54ff30b33f) passes pgjdbc
tests: https://travis-ci.org/vlsi/pgjdbc/builds/368997672

Current tests are mostly single-threaded, so the tests are unlikely to
trigger lots of "concurrent connection" uses.
The next step might be to create multiple schemas, and execute multiple
tests in parallel.

Vladimir

#80Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Dave Cramer (#69)
Re: Built-in connection pooling

On 19.04.2018 17:27, Dave Cramer wrote:

Konstantin

It would be useful to test with the JDBC driver

We run into issues with many pool implementations due to our
opinionated nature

I have tested the built-in connection pool with the YCSB benchmark, which
is implemented in Java and so works through the JDBC driver.
The results were published in the following mail in this thread:
/messages/by-id/7bbbb359-c582-7a08-5772-cb882988c0ae@postgrespro.ru

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#81Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Tatsuo Ishii (#75)
Re: Built-in connection pooling

On 20.04.2018 01:58, Tatsuo Ishii wrote:

I think there's plenty of things that don't really make sense solving
outside of postgres:
- additional added hop / context switches due to external pooler

This only applies to external-process-type poolers (like Pgpool-II).

- temporary tables
- prepared statements
- GUCs and other session state

These only apply to "non session based" poolers, which share a
database connection among multiple client connections. "Session based"
connection poolers like Pgpool-II do not have these shortcomings.

But they do not solve the main problem: restricting the number of
launched backends.
Pgbouncer also can be used in session pooling mode. But it makes sense
only if there is a limited number of clients which permanently
connect/disconnect to the database.
But I do not think that this is such a popular use case. Usually there is a
very large number of connected clients which rarely drop connections but
only a few of them are active at each moment of time.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#82Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Tatsuo Ishii (#77)
Re: Built-in connection pooling

On 20.04.2018 03:14, Tatsuo Ishii wrote:

On Fri, Apr 20, 2018 at 07:58:00AM +0900, Tatsuo Ishii wrote:

Yeah. Since SCRAM auth is implemented, some connection poolers
including Pgpool-II are struggling to adopt it.

Er, well. pgpool is also taking advantage of MD5 weaknesses... While
SCRAM fixes this class of problems, channel binding actually makes
this harder for poolers to deal with.

One of the Pgpool-II developers, Usama, is working hard to re-implement
SCRAM auth for the upcoming Pgpool-II 4.0: i.e. storing passwords (of
course in some encrypted form) in Pgpool-II.

Just want to notice that authentication is are where I have completely
no experience.
So any suggestions or help  in developing right authentication mechanism
for built-in connection pooling is welcome.

Right now authentication of a pooled session by a shared backend is
performed in the same way as by a normal (dedicated) Postgres backend.
The postmaster just transfers the accepted socket to one of the workers
(backends), which performs authentication in the normal way.
It actually means that all sessions scheduled to the same worker should
access the same database under the same user.
Accepting connections to different databases/users is currently
supported by making it possible to create several session pools and
binding each session pool to its own port, at which the postmaster will
accept connections for that pool.
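
For reference, the socket transfer relies on the standard SCM_RIGHTS
ancillary-data mechanism of Unix-domain sockets. Below is a minimal,
self-contained sketch of that mechanism; the function names are
illustrative and not taken from the patch:

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send file descriptor "fd" over the Unix-domain socket "chan". */
static int
send_fd(int chan, int fd)
{
    struct msghdr   msg;
    struct iovec    iov;
    char            dummy = 'F';
    char            cbuf[CMSG_SPACE(sizeof(int))];
    struct cmsghdr *cmsg;

    memset(&msg, 0, sizeof(msg));
    iov.iov_base = &dummy;              /* must carry at least one byte */
    iov.iov_len = 1;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);

    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;       /* this is what transfers the fd */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(chan, &msg, 0) < 0 ? -1 : 0;
}

/* Receive a file descriptor from "chan"; returns -1 on failure. */
static int
recv_fd(int chan)
{
    struct msghdr   msg;
    struct iovec    iov;
    char            dummy;
    char            cbuf[CMSG_SPACE(sizeof(int))];
    struct cmsghdr *cmsg;
    int             fd;

    memset(&msg, 0, sizeof(msg));
    iov.iov_base = &dummy;
    iov.iov_len = 1;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);

    if (recvmsg(chan, &msg, 0) < 0)
        return -1;
    cmsg = CMSG_FIRSTHDR(&msg);
    if (cmsg == NULL || cmsg->cmsg_type != SCM_RIGHTS)
        return -1;
    memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
    return fd;                          /* worker now owns the client socket */
}

Once the descriptor has been received, the client's connection belongs to
the worker, so authentication can proceed there exactly as in a dedicated
backend.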

As alternative approach I considered spawning separate "authentication"
process (or do it in postmaster), which will process startup package and
only after it schedule session to one of the workers. But such policy is
much more difficult to implement and it is unclear how to map
database/user pairs to worker backends.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#83Tatsuo Ishii
ishii@sraoss.co.jp
In reply to: Konstantin Knizhnik (#81)
Re: Built-in connection pooling

On 20.04.2018 01:58, Tatsuo Ishii wrote:

I think there's plenty of things that don't really make sense to solve
outside of postgres:
- additional added hop / context switches due to external pooler

This only applies to an external process type pooler (like Pgpool-II).

- temporary tables
- prepared statements
- GUCs and other session state

These are only applied to "non session based" pooler; sharing a
database connection with multiple client connections. "Session based"
connection pooler like Pgpool-II does not have the shortcomings.

But they do not solve the main problem: restricting the number of
launched backends.

Pgpool-II already does this. If the number of concurrent clients exceeds
max_connections, the (max_connections+1)th client has to wait until some
other client disconnects its session. So "restricting the number of
launched backends" is an independent function from whether a "session
based" connection pooler is used or not.

Pgbouncer can also be used in session pooling mode, but it makes
sense only if there is a limited number of clients which permanently
connect/disconnect to the database.
I do not think that is a popular use case. Usually there is a very
large number of connected clients which rarely drop their connections,
but only a few of them are active at any moment in time.

Not necessarily, i.e. session based poolers allow the use of temporary
tables and prepared statements and keep GUCs and other session state,
while non session based poolers do not allow them.

So choosing "session based poolers" or "non session based poolers" is
a trade off. i.e. let user choose one of them.

If you are willing to merge your connection pooler into core, I would
suggest you implement both of those pool modes.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp

#84Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Tatsuo Ishii (#83)
Re: Built-in connection pooling

On 20.04.2018 11:16, Tatsuo Ishii wrote:

On 20.04.2018 01:58, Tatsuo Ishii wrote:

I think there's plenty of things that don't really make sense to solve
outside of postgres:
- additional added hop / context switches due to external pooler

This only applies to an external process type pooler (like Pgpool-II).

- temporary tables
- prepared statements
- GUCs and other session state

These are only applied to "non session based" pooler; sharing a
database connection with multiple client connections. "Session based"
connection pooler like Pgpool-II does not have the shortcomings.

But they do not solve the main problem: restricting the number of
launched backends.

Pgpool-II already does this. If the number of concurrent clients exceeds
max_connections, the (max_connections+1)th client has to wait until some
other client disconnects its session. So "restricting the number of
launched backends" is an independent function from whether a "session
based" connection pooler is used or not.

Sorry, but delaying a new client connection until some other client
disconnects is not an acceptable solution in most cases.
Most customers want to provide connections to the database server for an
unlimited (or at least > 100) number of clients.
And these clients tend to keep their connections alive and do not
disconnect after executing each statement/transaction.
In this case the session pooling approach doesn't work.

Pgbouncer can also be used in session pooling mode, but it makes
sense only if there is a limited number of clients which permanently
connect/disconnect to the database.
I do not think that is a popular use case. Usually there is a very
large number of connected clients which rarely drop their connections,
but only a few of them are active at any moment in time.

Not necessarily, i.e. session based poolers allow the use of temporary
tables and prepared statements and keep GUCs and other session state,
while non session based poolers do not allow them.

So choosing "session based poolers" or "non session based poolers" is
a trade-off, i.e. let the user choose one of them.

If you are willing to merge your connection pooler into core, I would
suggest you implement both of those pool modes.

Sorry, maybe we do not understand each other.
There are the following facts:
1. There are some entities in Postgres which are local to a backend:
temporary tables, GUCs, prepared statements, relation and catalog caches, ...
2. Postgres doesn't "like" a large number of backends, even if only a few
of them are actually active. A large number of backends means a large
procarray, large snapshots, ...
Please refer to my measurements at the beginning of this thread, which
illustrate how the performance of Postgres degrades with an increasing
number of backends.
3. Session semantics (prepared statements, GUCs, temporary tables) can be
supported only in session-level pooling mode.
4. This mode is not acceptable in most cases because it is not possible
to limit the number of clients which want to establish connections with
the database server, or to keep that number small.
This is why most pgbouncer users are using statement pooling mode.
5. It doesn't matter how you manage to implement pooling outside
Postgres: if you want to preserve session semantics, then you need to
spawn as many backends as there are sessions. So the number of clients is
limited by the number of backends/sessions.

The primary idea and main benefit of a built-in connection pooler is to
support session semantics with a limited number of backends.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#85Tatsuo Ishii
ishii@sraoss.co.jp
In reply to: Konstantin Knizhnik (#84)
Re: Built-in connection pooling

This only applies to an external process type pooler (like Pgpool-II).

- temporary tables
- prepared statements
- GUCs and other session state

These are only applied to "non session based" pooler; sharing a
database connection with multiple client connections. "Session based"
connection pooler like Pgpool-II does not have the shortcomings.

But they do not solve the main problem: restricting the number of
launched backends.

Pgpool-II already does this. If the number of concurrent clients exceeds
max_connections, the (max_connections+1)th client has to wait until some
other client disconnects its session. So "restricting the number of
launched backends" is an independent function from whether a "session
based" connection pooler is used or not.

Sorry, but delaying a new client connection until some other client
disconnects is not an acceptable solution in most cases.

I just wanted to point out the counter-fact against this.

But they do not solve the main problem: restricting the number of
launched backends.

Most customers want to provide connections to the database server
for an unlimited (or at least > 100) number of clients.
And these clients tend to keep their connections alive and do not
disconnect after executing each statement/transaction.
In this case the session pooling approach doesn't work.

I understand your customers like to have an unlimited number of
connections. But my customers do not. (BTW, even with normal
PostgreSQL, some of my customers are happily using over 1k, even 5k
max_connections.)

Pgbouncer can also be used in session pooling mode, but it makes
sense only if there is a limited number of clients which permanently
connect/disconnect to the database.
I do not think that is a popular use case. Usually there is a very
large number of connected clients which rarely drop their connections,
but only a few of them are active at any moment in time.

Not necessarily, i.e. session based poolers allow the use of temporary
tables and prepared statements and keep GUCs and other session state,
while non session based poolers do not allow them.

So choosing "session based poolers" or "non session based poolers" is
a trade-off, i.e. let the user choose one of them.

If you are willing to merge your connection pooler into core, I would
suggest you implement both of those pool modes.

Sorry, maybe we do not understand each other.
There are the following facts:
1. There are some entities in Postgres which are local to a backend:
temporary tables, GUCs, prepared statements, relation and catalog
caches, ...
2. Postgres doesn't "like" a large number of backends, even if only a
few of them are actually active. A large number of backends means a
large procarray, large snapshots, ...
Please refer to my measurements at the beginning of this thread, which
illustrate how the performance of Postgres degrades with an increasing
number of backends.
3. Session semantics (prepared statements, GUCs, temporary tables) can
be supported only in session-level pooling mode.

I agree with 1-3.

4. This mode is not acceptable in most cases because it is not possible
to limit the number of clients which want to establish connections with
the database server, or to keep that number small.
This is why most pgbouncer users are using statement pooling mode.

Not sure about 4. I rarely see such users around me.

5. It doesn't matter how you manage to implement pooling outside
Postgres: if you want to preserve session semantics, then you need to
spawn as many backends as there are sessions. So the number of clients
is limited by the number of backends/sessions.

Right. I am happy with that limitation for now.

The primary idea and main benefit of a built-in connection pooler is to
support session semantics with a limited number of backends.

I am confused. If so, why do you want to push a statement-based or
transaction-based built-in connection pooler?

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp

#86Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Tatsuo Ishii (#85)
Re: Built-in connection pooling

On 20.04.2018 12:02, Tatsuo Ishii wrote:

I understand your customers like to have an unlimited number of
connections. But my customers do not. (BTW, even with normal
PostgreSQL, some of my customers are happily using over 1k, even 5k
max_connections.)

If you have a limited number of clients, then you do not need pooling
at all.
The only exception is when clients for some reason do not want to keep
connections to the database server and prefer to establish a connection
on demand and disconnect as soon as possible.
But IMHO in most cases that indicates bad design of the client
application, because establishing a connection (even with a connection
pooler) is quite an expensive operation.
The primary idea and main benefit of a built-in connection pooler is to
support session semantics with a limited number of backends.

I am confused. If so, why do you want to push a statement-based or
transaction-based built-in connection pooler?

I want to provide session semantics without starting a dedicated
backend for each session.
Transaction-level rescheduling (rather than statement-level
rescheduling) is used to avoid the complexity of storing/restoring
transaction context and maintaining locks.
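
To make that concrete, here is an illustrative sketch of such a
transaction-level scheduling loop; all helper names except
IsTransactionState() are hypothetical and not the actual patch code:

#include <stdbool.h>

typedef struct Session Session;

extern bool     IsTransactionState(void);           /* real PostgreSQL call */
extern Session *WaitForActiveSession(Session **);   /* hypothetical */
extern void     SaveSessionContext(Session *);      /* hypothetical */
extern void     RestoreSessionContext(Session *);   /* hypothetical */
extern void     ProcessClientCommand(Session *);    /* hypothetical */

static Session  *current_session;   /* session currently using this backend */
static Session **my_sessions;       /* all sessions bound to this backend */

static void
session_scheduler_loop(void)
{
    for (;;)
    {
        /* reschedule only between transactions, so locks and
         * transaction state never have to be saved or restored */
        if (!IsTransactionState())
        {
            Session *next = WaitForActiveSession(my_sessions);

            if (next != current_session)
            {
                /* GUCs, prepared statement names, etc. */
                SaveSessionContext(current_session);
                RestoreSessionContext(next);
                current_session = next;
            }
        }

        /* read and execute one command from the current session */
        ProcessClientCommand(current_session);
    }
}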

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#87Craig Ringer
craig.ringer@2ndquadrant.com
In reply to: Andres Freund (#74)
Re: Built-in connection pooling

On Fri., 20 Apr. 2018, 06:59 Andres Freund, <andres@anarazel.de> wrote:

On 2018-04-19 15:01:24 -0400, Tom Lane wrote:

Only after you can say "there's nothing wrong with this that isn't
directly connected to its not being in-core" does it make sense to try
to push the logic into core.

I think there's plenty of things that don't really make sense to solve
outside of postgres:
- additional added hop / context switches due to external pooler
- temporary tables
- prepared statements
- GUCs and other session state

Totally agreed. Poolers can make some limited efforts there, but that's all.

Poolers also have a hard time determining if a query is read-only or
read/write, whereas Pg itself has a better chance, and we could help it
along with function READONLY attributes if we wanted. This matters for
master/standby query routing. Standbys being able to proxy for the
master would be fantastic, but that isn't practical without some kind of
pooler.

I think there's at least one thing that we should attempt to make
easier for external pooler:
- proxy authorization

Yes, very yes. I've raised this before in a limited form - SET SESSION
AUTHORIZATION that cannot be reset without a cookie value. But true
proxy auth would be better.

#88Tatsuo Ishii
ishii@sraoss.co.jp
In reply to: Konstantin Knizhnik (#86)
Re: Built-in connection pooling

I understand your customers like to have an unlimited number of
connections. But my customers do not. (BTW, even with normal
PostgreSQL, some of my customers are happily using over 1k, even 5k
max_connections.)

If you have a limited number of clients, then you do not need pooling
at all.

Still, a pooler is needed even if the number of connections is low,
because connecting to PostgreSQL is a very expensive operation, as
everybody knows.

BTW, the main reason why Pgpool-II is used is not that it is a pooler,
but query routing: write queries go to the primary server and read
queries to standbys. This is not possible with a built-in pooler.

I am confused. If so, why do you want to push a statement-based or
transaction-based built-in connection pooler?

I want to provide session semantics without starting a dedicated
backend for each session.
Transaction-level rescheduling (rather than statement-level
rescheduling) is used to avoid the complexity of storing/restoring
transaction context and maintaining locks.

Not sure if that is acceptable to the community. Probably many
developers want a built-in pooler to keep exactly the same semantics as
normal connections.

Tom Lane wrote:

FWIW, I concur with Heikki's position that we're going to have very high
standards for the transparency of any in-core pooler. Before trying to
propose a patch, it'd be a good idea to try to fix the perceived
shortcomings of some existing external pooler. Only after you can say
"there's nothing wrong with this that isn't directly connected to its
not being in-core" does it make sense to try to push the logic into core.

So I would suggest you start with a session-level in-core pooler, which
would be much easier to make transparent than a transaction-level
pooler.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp

#89Vladimir Borodin
root@simply.name
In reply to: Andres Freund (#74)
Re: Built-in connection pooling

On 19 Apr 2018, at 23:59, Andres Freund <andres@anarazel.de> wrote:

I think there's plenty of things that don't really make sense to solve
outside of postgres:
- additional added hop / context switches due to external pooler
- temporary tables
- prepared statements
- GUCs and other session state

+1

I think there's at least one thing that we should attempt to make
easier for external pooler:
- proxy authorization

I suggested it here [1], but a fair amount of people argued against it
in that thread.

[1]: /messages/by-id/98C8F3EF-52F0-4AF9-BE81-405C15D77DEA@simply.name

--
May the force be with you…
https://simply.name

#90Robert Haas
robertmhaas@gmail.com
In reply to: Tomas Vondra (#12)
Re: Built-in connection pooling

On Fri, Jan 19, 2018 at 11:59 AM, Tomas Vondra
<tomas.vondra@2ndquadrant.com> wrote:

Hmmm, that's unfortunate. I guess you'll have to process the startup
packet in the main process, before it gets forked. At least partially.

I'm not keen on a design that would involve doing more stuff in the
postmaster, because that would increase the chances of the postmaster
accidentally dying, which is really bad. I've been thinking about the
idea of having a separate "listener" process that receives
connections, and that the postmaster can restart if it fails. Or
there could even be multiple listeners if needed. When the listener
gets a connection, it hands it off to another process that then "owns"
that connection.

One problem with this is that the process that's going to take over
the connection needs to get started by the postmaster, not the
listener. The listener could signal the postmaster to start it, just
like we do for background workers, but that might add a bit of
latency. So what I'm thinking is that the postmaster could maintain
a small (and configurably-sized) pool of preforked workers. That
might be worth doing independently, as a way to reduce connection
startup latency, although somebody would have to test it to see
whether it really works... a lot of the startup work can't be done
until we know which database the user wants.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#91Robert Haas
robertmhaas@gmail.com
In reply to: Heikki Linnakangas (#64)
Re: Built-in connection pooling

On Wed, Apr 18, 2018 at 9:41 AM, Heikki Linnakangas <hlinnaka@iki.fi> wrote:

Well, maybe I missed something, but I do not know how to efficiently
support
1. Temporary tables
2. Prepared statements
3. Session GUCs
with any external connection pooler (with pooling level other than
session).

Me neither. What makes it easier to do these things in an internal
connection pooler? What could the backend do differently, to make these
easier to implement in an external pooler?

I think you and Konstantin are possibly failing to see the big picture
here. Temporary tables, prepared statements, and GUC settings are
examples of session state that users expect will be preserved for the
lifetime of a connection and not beyond; all session state, of
whatever kind, has the same set of problems. A transparent connection
pooling experience means guaranteeing that no such state vanishes
before the user ends the current session, and also that no such state
established by some other session becomes visible in the current
session. And we really need to account for *all* such state, not just
really big things like temporary tables and prepared statements and
GUCs but also much subtler things such as the state of the PRNG
established by srandom().

This is really very similar to the problem that parallel query has
when spinning up new worker backends. As far as possible, we want the
worker backends to have the same state as the original backend.
However, there's no systematic way of being sure that every relevant
backend-private global, including perhaps globals added by loadable
modules, is in exactly the same state. For parallel query, we solved
that problem by copying a bunch of things that we knew were
commonly-used (cf. parallel.c) and by requiring functions to be
labeled as parallel-restricted if they rely on any other state.
The problem for connection pooling is much harder. If you only ever
ran parallel-safe functions throughout the lifetime of a session, then
you would know that the session has no "hidden state" other than what
parallel.c already knows about (except for any functions that are
mislabeled, but we can say that's the user's fault for mislabeling
them). But as soon as you run even one parallel-restricted or
parallel-unsafe function, there might be a global variable someplace
that holds arbitrary state which the core system won't know anything
about. If you want to have some other process take over that session,
you need to copy that state to the new process; if you want to reuse
the current process for a new session, you need to clear that state.
Since you don't know it exists or where to find it, and since the code
to copy and/or clear it might not even exist, you can't.

In other words, transparent connection pooling is going to require
some new mechanism, which third-party code will have to know about,
for tracking every last bit of session state that might need to be
preserved or cleared. That's going to be a big project. Maybe some
of that can piggyback on existing infrastructure like
InvalidateSystemCaches(), but there's probably still a ton of ad-hoc
state to deal with. And no out-of-core pooler has a chance of
handling all that stuff correctly; an in-core pooler will be able to
do so only with a lot of work.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#92Bruce Momjian
bruce@momjian.us
In reply to: Konstantin Knizhnik (#84)
Re: Built-in connection pooling

On Fri, Apr 20, 2018 at 11:40:59AM +0300, Konstantin Knizhnik wrote:

Sorry, maybe we do not understand each other.
There are the following facts:
1. There are some entities in Postgres which are local to a backend:
temporary tables, GUCs, prepared statements, relation and catalog caches, ...
2. Postgres doesn't "like" a large number of backends, even if only a few
of them are actually active. A large number of backends means a large
procarray, large snapshots, ...
Please refer to my measurements at the beginning of this thread, which
illustrate how the performance of Postgres degrades with an increasing
number of backends.

So, instead of trying to multiplex multiple sessions in a single
operating system process, why don't we try to reduce the overhead of
idle sessions that each have an operating system process? We already
use procArray to reduce the number of _assigned_ PGPROC entries we have
to scan. Why can't we create another array that only contains _active_
sessions, i.e. those not in a transaction. In what places can procArray
scans be changed to use this new array?
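
To illustrate what maintaining such an array would involve, here is a
rough sketch under assumed names (none of this exists in the source
tree): a dense array of the active entries with O(1) swap-with-last
removal, updated at transaction start and end:

#define MAX_BACKENDS 1024           /* illustration only */

typedef struct ActiveProcArray
{
    int numActive;
    int procs[MAX_BACKENDS];        /* dense list of active proc numbers */
    int pos[MAX_BACKENDS];          /* proc number -> index in procs[], or -1 */
} ActiveProcArray;

/* called at transaction start */
static void
MarkProcActive(ActiveProcArray *a, int procno)
{
    a->pos[procno] = a->numActive;
    a->procs[a->numActive++] = procno;
}

/* called at transaction end: O(1) swap-with-last removal */
static void
MarkProcIdle(ActiveProcArray *a, int procno)
{
    int idx = a->pos[procno];
    int last = a->procs[--a->numActive];

    a->procs[idx] = last;
    a->pos[last] = idx;
    a->pos[procno] = -1;
}

The catch is that in shared memory both updates would have to happen
under a lock at every transaction boundary, which is the maintenance
cost Robert raises below.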

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +
#93Robert Haas
robertmhaas@gmail.com
In reply to: Bruce Momjian (#92)
Re: Built-in connection pooling

On Mon, Apr 23, 2018 at 7:59 PM, Bruce Momjian <bruce@momjian.us> wrote:

So, instead of trying to multiplex multiple sessions in a single
operating system process, why don't we try to reduce the overhead of
idle sessions that each have an operating system process? We already
use procArray to reduce the number of _assigned_ PGPROC entries we have
to scan. Why can't we create another array that only contains _active_
sessions, i.e. those not in a transaction. In what places can procArray
scans be changed to use this new array?

There are lots of places where scans would benefit, but the cost of
maintaining the new array would be very high in some workloads, so I
don't think you'd come out ahead overall. Feel free to code it up and
test it, though.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#94Bruce Momjian
bruce@momjian.us
In reply to: Robert Haas (#93)
Re: Built-in connection pooling

On Mon, Apr 23, 2018 at 09:47:07PM -0400, Robert Haas wrote:

On Mon, Apr 23, 2018 at 7:59 PM, Bruce Momjian <bruce@momjian.us> wrote:

So, instead of trying to multiplex multiple sessions in a single
operating system process, why don't we try to reduce the overhead of
idle sessions that each have an operating system process? We already
use procArray to reduce the number of _assigned_ PGPROC entries we have
to scan. Why can't we create another array that only contains _active_
sessions, i.e. those not in a transaction. In what places can procArray
scans be changed to use this new array?

There are lots of places where scans would benefit, but the cost of
maintaining the new array would be very high in some workloads, so I
don't think you'd come out ahead overall. Feel free to code it up and
test it, though.

Well, it would be nice if we knew exactly which scans are slow for a
large number of idle sessions; then we could determine what criteria
for that array would be beneficial --- that seems like the easiest place
to start.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +
#95Bruce Momjian
bruce@momjian.us
In reply to: Bruce Momjian (#94)
Re: Built-in connection pooling

On Mon, Apr 23, 2018 at 09:53:37PM -0400, Bruce Momjian wrote:

On Mon, Apr 23, 2018 at 09:47:07PM -0400, Robert Haas wrote:

On Mon, Apr 23, 2018 at 7:59 PM, Bruce Momjian <bruce@momjian.us> wrote:

So, instead of trying to multiplex multiple sessions in a single
operating system process, why don't we try to reduce the overhead of
idle sessions that each have an operating system process? We already
use procArray to reduce the number of _assigned_ PGPROC entries we have
to scan. Why can't we create another array that only contains _active_
sessions, i.e. those not in a transaction. In what places can procArray
scans be changed to use this new array?

There are lots of places where scans would benefit, but the cost of
maintaining the new array would be very high in some workloads, so I
don't think you'd come out ahead overall. Feel free to code it up and
test it, though.

Well, it would be nice if we knew exactly which scans are slow for a
large number of idle sessions; then we could determine what criteria
for that array would be beneficial --- that seems like the easiest place
to start.

I guess my point is if we are looking at trying to store all the session
state in shared memory, so any process can resume it, we might as well
see if we can find a way to more cheaply store the state in an idle
process.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +
#96Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Robert Haas (#90)
Re: Built-in connection pooling

On 23.04.2018 21:56, Robert Haas wrote:

On Fri, Jan 19, 2018 at 11:59 AM, Tomas Vondra
<tomas.vondra@2ndquadrant.com> wrote:

Hmmm, that's unfortunate. I guess you'll have to process the startup
packet in the main process, before it gets forked. At least partially.

I'm not keen on a design that would involve doing more stuff in the
postmaster, because that would increase the chances of the postmaster
accidentally dying, which is really bad. I've been thinking about the
idea of having a separate "listener" process that receives
connections, and that the postmaster can restart if it fails. Or
there could even be multiple listeners if needed. When the listener
gets a connection, it hands it off to another process that then "owns"
that connection.

One problem with this is that the process that's going to take over
the connection needs to get started by the postmaster, not the
listener. The listener could signal the postmaster to start it, just
like we do for background workers, but that might add a bit of
latency. So what I'm thinking is that the postmaster could maintain
a small (and configurably-sized) pool of preforked workers. That
might be worth doing independently, as a way to reduce connection
startup latency, although somebody would have to test it to see
whether it really works... a lot of the startup work can't be done
until we know which database the user wants.

I agree that starting separate "listener" process(es) is the most
flexible and scalable solution.
I have not implemented this apporach due to the problems with forking
new backend you have mentioned.
But certainly it can be addressed.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#97Merlin Moncure
mmoncure@gmail.com
In reply to: Robert Haas (#91)
Re: Built-in connection pooling

On Mon, Apr 23, 2018 at 3:14 PM, Robert Haas <robertmhaas@gmail.com> wrote:

In other words, transparent connection pooling is going to require
some new mechanism, which third-party code will have to know about,
for tracking every last bit of session state that might need to be
preserved or cleared. That's going to be a big project. Maybe some
of that can piggyback on existing infrastructure like
InvalidateSystemCaches(), but there's probably still a ton of ad-hoc
state to deal with. And no out-of-core pooler has a chance of
handling all that stuff correctly; an in-core pooler will be able to
do so only with a lot of work.

Why does it have to be completely transparent? As long as the feature
is optional (say, a .conf setting) the tradeoffs can be managed. It's
a reasonable to expect to exchange some functionality for pooling;
pgbouncer provides a 'release' query (say, DISCARD ALL) to be called
upon release back to the pool. Having session state objects (not all
of which we are talking about; advisory locks and notifications
deserve consideration) 'just work' would be wonderful but ought not to
hold up other usages of the feature.
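
For concreteness, the pgbouncer knob in question is server_reset_query;
something like the following in pgbouncer.ini wipes session state
whenever a server connection is released back to the pool:

    [pgbouncer]
    pool_mode = session
    server_reset_query = DISCARD ALL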

merlin

#98Adam Brusselback
adambrusselback@gmail.com
In reply to: Merlin Moncure (#97)
Re: Built-in connection pooling

On Tue, Apr 24, 2018 at 9:52 AM, Merlin Moncure <mmoncure@gmail.com> wrote:

Why does it have to be completely transparent? As long as the feature
is optional (say, a .conf setting) the tradeoffs can be managed. It's
reasonable to expect to exchange some functionality for pooling;
pgbouncer provides a 'release' query (say, DISCARD ALL) to be called
upon release back to the pool. Having session state objects (not all
of which we are talking about; advisory locks and notifications
deserve consideration) 'just work' would be wonderful but ought not to
hold up other usages of the feature.

merlin

Just my $0.02, I wouldn't take advantage of this feature as a user
without it being transparent.
I use too many of the features which would be affected by not
maintaining the state. That's one of the reasons I only use an
external JDBC pooler for my primary application, and plain ole
connections for all of my secondary services which need to just work
with temp tables, session variables, etc. I'd love it if I could use
one of those poolers (or a built in one) which just magically
increased performance for starting up connections, lowered the
overhead of idle sessions, and didn't mess with session state.

Short of that, I'll take the hit in performance and using more memory
than I should with direct connections for now.

Not sure how other users feel, but that's where I'm sitting for my use case.

#99Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Robert Haas (#91)
Re: Built-in connection pooling

On 23.04.2018 23:14, Robert Haas wrote:

On Wed, Apr 18, 2018 at 9:41 AM, Heikki Linnakangas <hlinnaka@iki.fi> wrote:

Well, maybe I missed something, but I do not know how to efficiently
support
1. Temporary tables
2. Prepared statements
3. Session GUCs
with any external connection pooler (with pooling level other than
session).

Me neither. What makes it easier to do these things in an internal
connection pooler? What could the backend do differently, to make these
easier to implement in an external pooler?

I think you and Konstantin are possibly failing to see the big picture
here. Temporary tables, prepared statements, and GUC settings are
examples of session state that users expect will be preserved for the
lifetime of a connection and not beyond; all session state, of
whatever kind, has the same set of problems. A transparent connection
pooling experience means guaranteeing that no such state vanishes
before the user ends the current session, and also that no such state
established by some other session becomes visible in the current
session. And we really need to account for *all* such state, not just
really big things like temporary tables and prepared statements and
GUCs but also much subtler things such as the state of the PRNG
established by srandom().

It is not quite true that I have not realized these issues.
In addition to connection pooling, I have also implemented a pthread
version of Postgres, where static variables are replaced with
thread-local variables, which lets each thread use its own set of
variables.

Unfortunately this approach cannot be used for connection pooling.
But I think that performing scheduling at transaction level will
eliminate the problem with static variables in most cases.
My expectation is that there are very few of them which have
session-level lifetime.
Unfortunately it is not so easy to locate all such places. Once such
variables are located, they can be saved in the session context and
restored on reschedule.

A more challenging thing is handling system static variables which
cannot be easily saved/restored. Your example with srandom is exactly
such a case.
Right now I do not know any efficient way to suspend/resume a
pseudo-random sequence.
But frankly speaking, I do not believe that such behaviour of random()
is completely unacceptable and makes the built-in session pool unusable.
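
One conceivable workaround, offered here only as a sketch and not
something the patch does: use a POSIX explicit-state generator such as
jrand48(), whose entire state is the 48-bit value the caller passes in,
so it can live in the session context and travel with the session on
reschedule:

#include <stdlib.h>

typedef struct SessionRandState
{
    unsigned short xsubi[3];        /* complete PRNG state, per session */
} SessionRandState;

/* returns the next pseudo-random long for the given session; switching
 * the state pointer switches the random sequence along with the session */
static long
session_random(SessionRandState *s)
{
    return jrand48(s->xsubi);
}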

This is really very similar to the problem that parallel query has
when spinning up new worker backends. As far as possible, we want the
worker backends to have the same state as the original backend.
However, there's no systematic way of being sure that every relevant
backend-private global, including perhaps globals added by loadable
modules, is in exactly the same state. For parallel query, we solved
that problem by copying a bunch of things that we knew were
commonly-used (cf. parallel.c) and by requiring functions to be
labeled as parallel-restricted if they rely on any other state.
The problem for connection pooling is much harder. If you only ever
ran parallel-safe functions throughout the lifetime of a session, then
you would know that the session has no "hidden state" other than what
parallel.c already knows about (except for any functions that are
mislabeled, but we can say that's the user's fault for mislabeling
them). But as soon as you run even one parallel-restricted or
parallel-unsafe function, there might be a global variable someplace
that holds arbitrary state which the core system won't know anything
about. If you want to have some other process take over that session,
you need to copy that state to the new process; if you want to reuse
the current process for a new session, you need to clear that state.
Since you don't know it exists or where to find it, and since the code
to copy and/or clear it might not even exist, you can't.

In other words, transparent connection pooling is going to require
some new mechanism, which third-party code will have to know about,
for tracking every last bit of session state that might need to be
preserved or cleared. That's going to be a big project. Maybe some
of that can piggyback on existing infrastructure like
InvalidateSystemCaches(), but there's probably still a ton of ad-hoc
state to deal with. And no out-of-core pooler has a chance of
handling all that stuff correctly; an in-core pooler will be able to
do so only with a lot of work.

I think that the situation with parallel executors is slightly
different: in that case several backends perform execution of the same
query, so they really need to somehow share/synchronize the state of
static variables.
But in the case of connection pooling only one transaction is executed
by a backend at each moment of time. And there should be no problems
with static variables unless they cross transaction boundaries. But I do
not think that there are many such variables.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#100Christophe Pettus
xof@thebuild.com
In reply to: Merlin Moncure (#97)
Re: Built-in connection pooling

On Apr 24, 2018, at 06:52, Merlin Moncure <mmoncure@gmail.com> wrote:
Why does it have to be completely transparent?

Well, we have non-transparent connection pooling now, in the form of pgbouncer, and the huge fleet of existing application-stack poolers. The main reason to move it into core is to avoid the limitations that a non-core pooler has.

--
-- Christophe Pettus
xof@thebuild.com

#101Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Christophe Pettus (#100)
Re: Built-in connection pooling

On 25.04.2018 08:34, Christophe Pettus wrote:

On Apr 24, 2018, at 06:52, Merlin Moncure <mmoncure@gmail.com> wrote:
Why does it have to be completely transparent?

Well, we have non-transparent connection pooling now, in the form of pgbouncer, and the huge fleet of existing application-stack poolers. The main reason to move it into core is to avoid the limitations that a non-core pooler has.

What do we mean by "completely transparent"? If complete transparency
means that polled sessions behaves exactly the same as normal session in
dedicated backend then it will be really difficult to achieve, taken in
account all error handling nuances, issue with srandom, and may be some
other contexts with session lifetime...

But I started development of the built-in connection pooler because of
our customers' requests.
For example, 1C clients never drop connections, and the 1C application
makes wide use of temporary tables.
So they cannot use pgbouncer, and the number of clients can be very
large (thousands).
Built-in connection pooling will satisfy their needs. And the fact that
random() in a pooled connection will return different values is
absolutely unimportant for them.

So my point of view is the following:
1. Support of temporary tables in pooled sessions is important because
they are widely used in many applications.
2. Support of prepared statements in pooled sessions is also useful,
because it can increase performance up to two times.
3. Support of GUCs is also required, because there are many things
(locale, date format, timezone) which are set by the client application
using GUCs.

Other things seem to be less important.
If there are some static variables (not associated with GUCs) with
session (backend) lifetime, then they can be moved to the session
context. I just do not know of such variables yet.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#102Merlin Moncure
mmoncure@gmail.com
In reply to: Christophe Pettus (#100)
Re: Built-in connection pooling

On Wed, Apr 25, 2018 at 12:34 AM, Christophe Pettus <xof@thebuild.com> wrote:

On Apr 24, 2018, at 06:52, Merlin Moncure <mmoncure@gmail.com> wrote:
Why does it have to be completely transparent?

The main reason to move it into core is to avoid the limitations that a non-core pooler has.

The limitations and headaches that I suffer with the pgbouncer project
(which I love and use often) are mainly administrative and performance
related, not lack of session based server features. Applications that
operate over a very large amount of virtual connections or engage a
very high level of small transaction traffic are going to avoid
session based features for a lot of other reasons anyways, at least in
my experience. Probably the most useful feature I miss is async
notifications, so much so that at one point we hacked pgbouncer to
support them. Point being, full transparency is nice, but there are
workarounds for most of the major issues and there are a lot of side
channel benefits to making your applications 'stateless' (defined as
state in application or database but not in between).

Absent any other consideration, OP has proven to me that there is
massive potential performance gains possible from moving the pooling
mechanism into the database core process, and I'm already very excited
about not having an extra server process to monitor and worry about.
Tracking session state out of process seems pretty complicated and
would probably add complexity or overhead to multiple internal
systems. If we get that for free I'd be all for it but reading
Robert's email I'm skeptical there are easy wins here. So +1 for
further R&D and -1 for holding things up based on full
transparency...no harm in shooting for that, but let's look at things
from a cost/benefit perspective (IMO).

merlin

#103Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Merlin Moncure (#102)
Re: Built-in connection pooling

On 25.04.2018 17:00, Merlin Moncure wrote:

On Wed, Apr 25, 2018 at 12:34 AM, Christophe Pettus <xof@thebuild.com> wrote:

On Apr 24, 2018, at 06:52, Merlin Moncure <mmoncure@gmail.com> wrote:
Why does it have to be completely transparent?

The main reason to move it into core is to avoid the limitations that a non-core pooler has.

The limitations and headaches that I suffer with the pgbouncer project
(which I love and use often) are mainly administrative and performance
related, not lack of session based server features. Applications that
operate over a very large amount of virtual connections or engage a
very high level of small transaction traffic are going to avoid
session based features for a lot of other reasons anyways, at least in
my experience. Probably the most useful feature I miss is async
notifications, so much so that at one point we hacked pgbouncer to
support them. Point being, full transparency is nice, but there are
workarounds for most of the major issues and there are a lot of side
channel benefits to making your applications 'stateless' (defined as
state in application or database but not in between).

Absent any other consideration, OP has proven to me that there is
massive potential performance gains possible from moving the pooling
mechanism into the database core process, and I'm already very excited
about not having an extra server process to monitor and worry about.
Tracking session state out of process seems pretty complicated and
would probably add complexity or overhead to multiple internal
systems. If we get that for free I'd be all for it but reading
Robert's email I'm skeptical there are easy wins here. So +1 for
further R&D and -1 for holding things up based on full
transparency...no harm in shooting for that, but let's look at things
from a cost/benefit perspective (IMO).

merlin

I did more research and found several other things which will not work
with the current built-in connection pooling implementation.
One you have mentioned: the notification mechanism. Another one is
advisory locks. Right now I have no idea how to support them for pooled
sessions, but I will think about it. IMHO, though, neither notifications
nor advisory locks are as widely used as temporary tables and prepared
statements...

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#104Christophe Pettus
xof@thebuild.com
In reply to: Merlin Moncure (#102)
Re: Built-in connection pooling

On Apr 25, 2018, at 07:00, Merlin Moncure <mmoncure@gmail.com> wrote:
The limitations headaches that I suffer with pgbouncer project (which
I love and use often) are mainly administrative and performance
related, not lack of session based server features.

For me, the most common issue I run into with pgbouncer (after the general administrative overhead of having another moving part) is that it works at cross purposes with database-based sharding, as well as with a useful role and permissions scheme. Since each server connection is specific to a database/role pair, you are left with some unappealing options to handle that in a pooling environment.

The next most common problem is prepared statements breaking, which certainly qualifies as a session-level feature.
--
-- Christophe Pettus
xof@thebuild.com

#105Merlin Moncure
mmoncure@gmail.com
In reply to: Christophe Pettus (#104)
Re: Built-in connection pooling

On Wed, Apr 25, 2018 at 9:43 AM, Christophe Pettus <xof@thebuild.com> wrote:

On Apr 25, 2018, at 07:00, Merlin Moncure <mmoncure@gmail.com> wrote:
The limitations headaches that I suffer with pgbouncer project (which
I love and use often) are mainly administrative and performance
related, not lack of session based server features.

For me, the most common issue I run into with pgbouncer (after the general administrative overhead of having another moving part) is that it works at cross purposes with database-based sharding, as well as with a useful role and permissions scheme. Since each server connection is specific to a database/role pair, you are left with some unappealing options to handle that in a pooling environment.

Would integrated pooling help the sharding case (genuinely curious)?
I don't quite have my head around the issue. I've always wanted
pgbouncer to be able to do things like round robin queries to a
non-sharded replica for simple load balancing but it doesn't (yet)
have that capability. That type of functionality would not fit into
an in-core pooler AIUI. Totally agree that the administrative
benefits (user/role/.conf/etc/etc) are a huge win.

The next most common problem is prepared statements breaking, which certainly qualifies as a session-level feature.

Yep. The main workaround today is to disable them. Having said that,
it's not that difficult to imagine hooking prepared statement creation
to a backend starting up (feature: run X,Y,Z SQL before running user
queries). This might be less effort than, uh, moving backend
session state to a shareable object. I'll go further; managing cache
memory consumption (say for pl/pgsql cached plans) is a big deal for
certain workloads. The only really effective way to deal with that
is to manage the server connection count and/or recycle server
connections on intervals. Using pgbouncer to control backend count is
a very effective way to deal with this problem, and allowing
virtualized connections to each manage their own independent cache would
be a step in the opposite direction. I very much like having control so
that I have exactly 8 backends for my 8 core server with 8 copies of
cache.

Advisory locks are a completely separate problem. I suspect they
might be used more than you realize, and they operate against a very
fundamental subsystem of the database: the locking engine. I'm
struggling as to why we would take another approach than 'don't use
the non-xact variants of them in a pooling environment'.

merlin

#106Robert Haas
robertmhaas@gmail.com
In reply to: Konstantin Knizhnik (#99)
Re: Built-in connection pooling

On Tue, Apr 24, 2018 at 1:00 PM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

My expectation is that there are very few of them which have
session-level lifetime.
Unfortunately it is not so easy to locate all such places. Once such
variables are located, they can be saved in the session context and
restored on reschedule.

The difficulty of finding them all is really the problem. If we had a
reliable way to list everything that needs to be moved into session
state, then we could try to come up with a design to do that.
Otherwise, we're just swatting issues one by one and I bet we're
missing quite a few.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#107Robert Haas
robertmhaas@gmail.com
In reply to: Merlin Moncure (#102)
Re: Built-in connection pooling

On Wed, Apr 25, 2018 at 10:00 AM, Merlin Moncure <mmoncure@gmail.com> wrote:

systems. If we get that for free I'd be all for it but reading
Robert's email I'm skeptical there are easy wins here. So +1 for
further R&D and -1 for holding things up based on full
transparency...no harm in shooting for that, but let's look at things
from a cost/benefit perspective (IMO).

If we could look at a patch and say "here are the cases that this
patch doesn't handle", then we could perhaps decide "we're OK with
that, let's ship the feature and document the limitations". But right
now it seems to me that we're looking at a feature where no really
systematic effort has been made to list all of the potential failure
modes, and I'm definitely not on board with the idea of shipping
something with a list of cases that are known to work and an unknown
list of failure modes. Konstantin has fixed things here and there,
but we don't know how much more there is and don't have a
well-designed plan to find all such things.

Also, I think it's worth considering that the kinds of failures users
will get out of anything that's not handled are really the worst kind.
If you have an application that relies on session state other than
what his patch knows how to preserve, your application will appear to
work in light testing because your connection won't actually be
swapped out underneath you -- and then fail unpredictably in
production when such swapping occurs. There will be no clear way to
tell which error messages or behavior differences are due to
limitations of the proposed feature, which ones are due to defects in
the application, and which ones might be due to PostgreSQL bugs.
They'll all look the same, and even experienced PG hackers won't
easily be able to tell whether a message saying "cursor XYZ doesn't
exist" (or whatever the case is specifically) is because the
application didn't create that cursor and nevertheless tried to use
it, or whether it's because the connection pooling facility silently
threw it out. All of that sounds to me like it's well below the
standard I'd expect for a core feature.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#108Merlin Moncure
mmoncure@gmail.com
In reply to: Robert Haas (#107)
Re: Built-in connection pooling

On Wed, Apr 25, 2018 at 2:58 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Wed, Apr 25, 2018 at 10:00 AM, Merlin Moncure <mmoncure@gmail.com> wrote:

systems. If we get that for free I'd be all for it but reading
Robert's email I'm skeptical there are easy wins here. So +1 for
further R&D and -1 for holding things up based on full
transparency...no harm in shooting for that, but let's look at things
from a cost/benefit perspective (IMO).

Also, I think it's worth considering that the kinds of failures users
will get out of anything that's not handled are really the worst kind.
If you have an application that relies on session state other than
what his patch knows how to preserve, your application will appear to
work in light testing because your connection won't actually be
swapped out underneath you -- and then fail unpredictably in
production when such swapping occurs. There will be no clear way to
tell which error messages or behavior differences are due to
limitations of the proposed feature, which ones are due to defects in
the application, and which ones might be due to PostgreSQL bugs.
They'll all look the same, and even experienced PG hackers won't

Connection pooling is not a new phenomenon, and many stacks (in
particular java) tend to pool connection by default. All of the
problems we discuss here for the most part affect competitive
solutions and I humbly submit the tradeoffs are _very_ widely
understood. FWICT we get occasional reports that are simply and
clearly answered. I guess there are some people dumb enough to flip
GUC settings involving seemingly important things in production
without testing or reading any documentation or the innumerable
articles and blogs that will pop up...hopefully they are self
selecting out of the industry :-).

Looking at pgbouncer, they produce a chart that says, 'these features
don't work, and please consider that before activating this feature'
(https://wiki.postgresql.org/wiki/PgBouncer#Feature_matrix_for_pooling_modes)
and that ought to be entirely sufficient to avoid that class of
problems. This is very clear and simple. The main gripes with
pgbouncer FWICT were related to the postgres JDBC driver's
unavoidable tendency (later fixed) to prepare 'BEGIN', causing various
problems, which was really a bug (in the JDBC driver) which did in
fact spill into this list.

For this feature to be really attractive we'd want to simultaneously
allow pooled and non-pooled connections on different ports, or even
multiple pools (say, for different applications). Looking at things
from your perspective, we might want to consider blocking (with error)
features that are not 'pooling compatible' if they arrive through a
pooled connection.

merlin

#109Michael Paquier
michael@paquier.xyz
In reply to: Robert Haas (#106)
Re: Built-in connection pooling

On Wed, Apr 25, 2018 at 03:42:31PM -0400, Robert Haas wrote:

The difficulty of finding them all is really the problem. If we had a
reliable way to list everything that needs to be moved into session
state, then we could try to come up with a design to do that.
Otherwise, we're just swatting issues one by one and I bet we're
missing quite a few.

Hm? We already know about the reset value of a parameter in
pg_settings, which points to the value which would be used if reset
in a session, even after being reloaded. If you compare it with the
actual setting value, wouldn't that be enough to know which parameters
have been changed at session level by an application once connected?
So you can pull out a list using such comparisons. The context a
parameter is associated with can also help.
--
Michael

#110Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Merlin Moncure (#105)
Re: Built-in connection pooling

On 25.04.2018 20:02, Merlin Moncure wrote:

Would integrated pooling help the sharding case (genuinely curious)?
I don't quite have my head around the issue. I've always wanted
pgbouncer to be able to do things like round robin queries to
non-sharded replica for simple load balancing but it doesn't (yet)
have that capability. That type of functionality would not fit into
in in-core pooler AIUI. Totally agree that the administrative
benefits (user/role/.conf/etc/etc) is a huge win.

Yes, pgbouncer is not intended to balance workload.
You should use HAProxy or Pgpool for that. libpq now allows specifying
multiple URLs, but unfortunately it is not yet able to perform load
balancing.
I do not understand how this is related to integrated connection
pooling. Such a pooler definitely should be external if you want to
scatter queries between different nodes.

The next most common problem is prepared statements breaking, which certainly qualifies as a session-level feature.

Yep. The main workaround today is to disable them. Having said that,
it's not that difficult to imagine hooking prepared statement creation
to a backend starting up (feature: run X,Y,Z SQL before running user
queries).

Sorry, I do not completely understand your idea.
Yes, it is somehow possible to simulate session semantics by prepending
all session specific commands (mostly setting GUCs) to each SQL
statement.
But it doesn't work for prepared statements: the idea of prepared
statements is that compilation of a statement should be done only once.

This might be less effort than, uh, moving backend
session state to a shareable object. I'll go further; managing cache
memory consumption (say for pl/pgsql cached plans) is a big deal for
certain workloads. The only really effective way to deal with that
is to manage the server connection count and/or recycle server
connections on intervals. Using pgbouncer to control backend count is
a very effective way to deal with this problem, and allowing
virtualized connections to each manage their own independent cache would
be a step in the opposite direction. I very much like having control so
that I have exactly 8 backends for my 8 core server with 8 copies of
cache.

Database performance is mostly limited by disk, so the optimal number
of backends may be different from the number of cores.
But certainly the possibility to launch an "optimal" number of backends
is one of the advantages of built-in session pooling.

Advisory locks are a completely separate problem. I suspect they
might be used more than you realize, and they operate against a very
fundamental subsystem of the database: the locking engine. I'm
struggling as to why we would take another approach than 'don't use
the non-xact variants of them in a pooling environment'.

merlin

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#111Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Michael Paquier (#109)
Re: Built-in connection pooling

On 26.04.2018 05:09, Michael Paquier wrote:

On Wed, Apr 25, 2018 at 03:42:31PM -0400, Robert Haas wrote:

The difficulty of finding them all is really the problem. If we had a
reliable way to list everything that needs to be moved into session
state, then we could try to come up with a design to do that.
Otherwise, we're just swatting issues one by one and I bet we're
missing quite a few.

Hm? We already know about the reset value of a parameter in
pg_settings, which points to the value which would be used if reset
in a session, even after being reloaded. If you compare it with the
actual setting value, wouldn't that be enough to know which parameters
have been changed at session level by an application once connected?
So you can pull out a list using such comparisons. The context a
parameter is associated with can also help.
--
Michael

Sorry, maybe I do not understand you correctly. But GUCs are already
handled by the built-in connection pooler.
It is done at the guc.c level, so it doesn't matter how a GUC variable
is changed. All modified GUCs are saved into the session context and
restored on reschedule.

But there are some other static variables which are not related to
GUCs. Most of them are really associated with the backend, not with the
session, so they should not be handled on reschedule.
But there may be some variables which are intended to be session
specific, and locating these variables is a really non-trivial task.
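
Roughly, restoring the modified GUCs of a session on reschedule can look
like this (SessionGuc and the list itself are simplified illustrations,
not the actual structures of the patch):

    #include "postgres.h"
    #include "utils/guc.h"

    typedef struct SessionGuc
    {
        struct SessionGuc *next;
        char       *name;
        char       *value;
    } SessionGuc;

    static void
    RestoreSessionGucs(SessionGuc *gucs)
    {
        SessionGuc *g;

        /* Reapply each changed GUC through the normal guc.c entry
         * point, so assign hooks and the like still fire. */
        for (g = gucs; g != NULL; g = g->next)
            SetConfigOption(g->name, g->value, PGC_USERSET, PGC_S_SESSION);
    }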

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#112Merlin Moncure
mmoncure@gmail.com
In reply to: Konstantin Knizhnik (#110)
Re: Built-in connection pooling

On Thu, Apr 26, 2018 at 6:04 AM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

On 25.04.2018 20:02, Merlin Moncure wrote:

Yep. The main workaround today is to disable them. Having said that,
it's not that difficult to imagine hooking prepared statement creation
to a backend starting up (feature: run X,Y,Z SQL before running user
queries).

Sorry, I do not completely understand your idea.
Yes, it is possible to simulate session semantics by prepending all
session-specific commands (mostly setting GUCs) to each SQL statement.
But it doesn't work for prepared statements: the whole point of a
prepared statement is that compilation is done only once.

The idea is that you have arbitrary SQL that runs after the
backend (postgres binary) is forked from the postmaster. This would be
an ideal place to introduce prepared statements in a way that is
pooling-compatible; you still couldn't PREPARE from the application but
you'd be free to call already-prepared statements (via SQL-level EXECUTE
or libpq PQexecPrepared()). Of course, if somebody throws a DEALLOCATE
or DISCARD ALL, or issues a problematic DROP x CASCADE, you'd be in
trouble, but that's not a big deal IMO because you can control for
those things in the application.
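
For what it's worth, the client side of that would be plain libpq; a
sketch, assuming a statement named "get_user" was prepared by the
startup SQL:

    #include <stdio.h>
    #include <libpq-fe.h>

    static void
    call_preprepared(PGconn *conn)
    {
        const char *params[1] = {"42"};

        /* No PQprepare() here: "get_user" is assumed to already exist
         * in the backend, created before user queries were allowed. */
        PGresult   *res = PQexecPrepared(conn, "get_user",
                                         1, params, NULL, NULL, 0);

        if (PQresultStatus(res) != PGRES_TUPLES_OK)
            fprintf(stderr, "EXECUTE failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }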

Database performance is mostly limited by disk, so the optimal number of
backends may differ from the number of cores.
But certainly the possibility to launch an "optimal" number of backends
is one of the advantages of built-in session pooling.

Sure, but some workloads are CPU-limited (all- or mostly-read with
data < memory, or very complex queries on smaller datasets). So we
would measure and configure based on expectations exactly as is done
today with pgbouncer. This is a major feature of pgbouncer: being
able to _reduce_ the number of session states relative to the number
of connections is an important feature; it isolates your database from
various unpleasant failure modes such as runaway memory consumption.

Anyways, I'm looking at your patch. I see you've separated the client
connection count ('sessions') from the server backend instances
('backends') in the GUC. Questions:
*) Should non pooled connections be supported simultaneously with
pooled connections?
*) Should there be multiple pools with independent configurations (yes, please)?
*) How are you pinning client connections to an application managed
transaction? (IMNSHO, this feature is useless without being able to do
that)

FYI, it's pretty clear you've got a long road building consensus and
hammering out a reasonable patch through the community here. Don't
get discouraged -- there is value here, but it's going to take some
work.

merlin

#113Robert Haas
robertmhaas@gmail.com
In reply to: Michael Paquier (#109)
Re: Built-in connection pooling

On Wed, Apr 25, 2018 at 10:09 PM, Michael Paquier <michael@paquier.xyz> wrote:

On Wed, Apr 25, 2018 at 03:42:31PM -0400, Robert Haas wrote:

The difficulty of finding them all is really the problem. If we had a
reliable way to list everything that needs to be moved into session
state, then we could try to come up with a design to do that.
Otherwise, we're just swatting issues one by one and I bet we're
missing quite a few.

Hm? We already know about the reset value of a parameter in
pg_settings, which points to the value which would be used if reset
in a session, even after being reloaded. If you compare it with the
actual setting value, wouldn't that be enough to know which parameters
have been changed at session level by an application once connected?
So you can pull out a list using such comparisons. The context a
parameter is associated with can also help.

Uh, there's a lot of session backend state other than GUCs. If the
only thing that we needed to worry about were GUCs, this problem would
have been solved years ago.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#114Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Merlin Moncure (#112)
Re: Built-in connection pooling

On 27.04.2018 16:49, Merlin Moncure wrote:

On Thu, Apr 26, 2018 at 6:04 AM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

On 25.04.2018 20:02, Merlin Moncure wrote:

Yep. The main workaround today is to disable them. Having said that,
it's not that difficult to imagine hooking prepared statement creation
to a backend starting up (feature: run X,Y,Z SQL before running user
queries).

Sorry, I do not completely understand your idea.
Yes, it is possible to simulate session semantics by prepending all
session-specific commands (mostly setting GUCs) to each SQL statement.
But it doesn't work for prepared statements: the whole point of a
prepared statement is that compilation is done only once.

The idea is that you have arbitrary SQL that runs after the
backend (postgres binary) is forked from the postmaster. This would be
an ideal place to introduce prepared statements in a way that is
pooling-compatible; you still couldn't PREPARE from the application but
you'd be free to call already-prepared statements (via SQL-level EXECUTE
or libpq PQexecPrepared()). Of course, if somebody throws a DEALLOCATE
or DISCARD ALL, or issues a problematic DROP x CASCADE, you'd be in
trouble, but that's not a big deal IMO because you can control for
those things in the application.

As far as I know, prepared statements can already be handled this way by
pgbouncer in transaction/statement pooling mode.
But from my point of view, in most cases this approach is practically
unusable.
It is very hard to predict from the very beginning all the statements
applications will want to execute, and to prepare them at backend start.

Database performance is mostly limited by disk, so the optimal number of
backends may differ from the number of cores.
But certainly the possibility to launch an "optimal" number of backends
is one of the advantages of built-in session pooling.

Sure, but some workloads are CPU-limited (all- or mostly-read with
data < memory, or very complex queries on smaller datasets). So we
would measure and configure based on expectations exactly as is done
today with pgbouncer. This is a major feature of pgbouncer: being
able to _reduce_ the number of session states relative to the number
of connections is an important feature; it isolates your database from
various unpleasant failure modes such as runaway memory consumption.

Anyways, I'm looking at your patch. I see you've separated the client
connection count ('sessions') from the server backend instances
('backends') in the GUC. Questions:
*) Should non pooled connections be supported simultaneously with
pooled connections?
*) Should there be multiple pools with independent configurations (yes, please)?

Right now my prototype supports two modes:
1. All connections are pooled.
2. There are several session pools, each bound to its own port.
Connections to the main Postgres port are normal (dedicated).
Connections to one of the session pool ports are redirected to one of
the workers of that session pool.

Please note that the latest version of the connection pooler is in the
https://github.com/postgrespro/postgresql.builtin_pool.git repository.

*) How are you pinning client connections to an application managed
transaction? (IMNSHO, this feature is useless without being able to do
that)

Sorry, I do not completely understand the question.
Rescheduling is now done at the transaction level - it means that a
backend can not be switched to another session until the current
transaction completes.
The main argument for transaction-level pooling is that it lets us not
worry about heavyweight locks, which are associated with procarray
entries.
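
In outline, the rescheduling point looks roughly like this
(SessionPoolWaitSet, SessionContext, SwitchToSession and
ProcessClientCommand are illustrative names, not the actual code):

    #include "postgres.h"
    #include "access/xact.h"
    #include "pgstat.h"
    #include "storage/latch.h"

    extern WaitEventSet *SessionPoolWaitSet;
    typedef struct SessionContext SessionContext;
    extern void SwitchToSession(SessionContext *session);
    extern void ProcessClientCommand(void);

    static void
    SessionPoolMainLoop(void)
    {
        for (;;)
        {
            WaitEvent   event;

            /* Switch sessions only between transactions, never inside
             * one, so a single procarray entry per backend suffices. */
            if (!IsTransactionState())
            {
                WaitEventSetWait(SessionPoolWaitSet, -1 /* no timeout */,
                                 &event, 1, WAIT_EVENT_CLIENT_READ);
                SwitchToSession((SessionContext *) event.user_data);
            }

            ProcessClientCommand();     /* ordinary query processing */
        }
    }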

FYI, it's pretty clear you've got a long road building consensus and
hammering out a reasonable patch through the community here. Don't
get discouraged -- there is value here, but it's going to take some
work.

Thank you.
I am absolutely sure that a lot of additional work has to be done before
this prototype becomes usable.

merlin

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#115Merlin Moncure
mmoncure@gmail.com
In reply to: Konstantin Knizhnik (#114)
Re: Built-in connection pooling

On Fri, Apr 27, 2018 at 10:05 AM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

On 27.04.2018 16:49, Merlin Moncure wrote:

*) How are you pinning client connections to an application managed
transaction? (IMNSHO, this feature is useless without being able to do
that)

Sorry, I do not completely understand the question.
Rescheduling is now done at the transaction level - it means that a
backend can not be switched to another session until the current
transaction completes.
The main argument for transaction-level pooling is that it lets us not
worry about heavyweight locks, which are associated with procarray
entries.

I'm confused here...could be language issues or terminology (I'll look
at your latest code). Here is how I understand things:
Backend=instance of postgres binary
Session=application state within postgres binary (temp tables,
prepared statements, etc.)
Connection=Client side connection

AIUI (I could certainly be wrong), within connection pooling, the ratio
of backend/session is still 1:1. The idea is that client connections
when they issue SQL to the server reserve a Backend/Session, use it
for the duration of a transaction, and release it when the transaction
resolves. So many client connections share backends. As with
pgbouncer, the concept of session in a traditional sense is not really
defined; session state management would be handled within the
application itself, or within data within tables, but not within
backend private memory. Does that align with your thinking?

merlin

#116Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Merlin Moncure (#115)
Re: Built-in connection pooling

On 27.04.2018 18:33, Merlin Moncure wrote:

On Fri, Apr 27, 2018 at 10:05 AM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

On 27.04.2018 16:49, Merlin Moncure wrote:

*) How are you pinning client connections to an application managed
transaction? (IMNSHO, this feature is useless without being able to do
that)

Sorry, I do not completely understand the question.
Rescheduling is now done at the transaction level - it means that a
backend can not be switched to another session until the current
transaction completes.
The main argument for transaction-level pooling is that it lets us not
worry about heavyweight locks, which are associated with procarray
entries.

I'm confused here...could be language issues or terminology (I'll look
at your latest code). Here is how I understand things:
Backend=instance of postgres binary
Session=application state within postgres binary (temp tables,
prepared statements, etc.)
Connection=Client side connection

A backend is a process forked by the postmaster.

AIUI (I could certainly be wrong), within connection pooling, the ratio
of backend/session is still 1:1. The idea is that client connections
when they issue SQL to the server reserve a Backend/Session, use it
for the duration of a transaction, and release it when the transaction
resolves. So many client connections share backends. As with
pgbouncer, the concept of session in a traditional sense is not really
defined; session state management would be handled within the
application itself, or within data within tables, but not within
backend private memory. Does that align with your thinking?

No. The number of sessions is equal to the number of client connections.
So a client is not reserving a "Backend/Session" as happens in pgbouncer.
One backend keeps multiple sessions, and for each session it maintains a
session context which includes the client's connection.
And it is the backend's decision which client's transaction it is going
to execute next.
This is why the built-in pooler is able to provide session semantics
without the backend/session = 1:1 requirement.
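
To make that concrete, the per-session state can be pictured as one
struct; the field list below is reconstructed from the patch hunks and
this description, so the real declaration may differ:

    #include "postgres.h"
    #include "libpq/libpq-be.h"     /* Port */
    #include "utils/hsearch.h"      /* HTAB */

    typedef struct SessionContext
    {
        int             id;                 /* used to mangle statement
                                             * and temp namespace names */
        MemoryContext   memory;             /* session-lifetime allocations */
        Port           *port;               /* the client connection */
        HTAB           *prepared_queries;   /* per-session prepared stmts */
        Oid             tempNamespace;      /* pg_temp_<id>_<backendid> */
        Oid             tempToastNamespace; /* pg_toast_temp_<id>_<backendid> */
    } SessionContext;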

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#117Merlin Moncure
mmoncure@gmail.com
In reply to: Konstantin Knizhnik (#116)
Re: Built-in connection pooling

On Fri, Apr 27, 2018 at 11:44 AM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

On 27.04.2018 18:33, Merlin Moncure wrote:

On Fri, Apr 27, 2018 at 10:05 AM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

On 27.04.2018 16:49, Merlin Moncure wrote:

I'm confused here...could be language issues or terminology (I'll look
at your latest code). Here is how I understand things:
Backend=instance of postgres binary
Session=application state within postgres binary (temp tables,
prepared statements, etc.)
Connection=Client side connection

A backend is a process forked by the postmaster.

right, we are saying the same thing here.

AIUI (I could certainly be wrong), within connection pooling, the ratio
of backend/session is still 1:1. The idea is that client connections
when they issue SQL to the server reserve a Backend/Session, use it
for the duration of a transaction, and release it when the transaction
resolves. So many client connections share backends. As with
pgbouncer, the concept of session in a traditional sense is not really
defined; session state management would be handled within the
application itself, or within data within tables, but not within
backend private memory. Does that align with your thinking?

No. The number of sessions is equal to the number of client connections.
So a client is not reserving a "Backend/Session" as happens in pgbouncer.
One backend keeps multiple sessions, and for each session it maintains a
session context which includes the client's connection.
And it is the backend's decision which client's transaction it is going
to execute next.
This is why the built-in pooler is able to provide session semantics
without the backend/session = 1:1 requirement.

I see. I'm not so sure that is a good idea in the general sense :(.
Connections sharing sessions is normal and well understood, and we have
tooling to manage that already (DISCARD). Having the session state
abstracted out and pinned to the client connection seems complex and
wasteful, at least sometimes. What _I_ (maybe not others) want is a
faster pgbouncer that is integrated into the database; IMO it does
everything exactly right.

merlin

#118Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Merlin Moncure (#117)
Re: Built-in connection pooling

On 27.04.2018 23:43, Merlin Moncure wrote:

On Fri, Apr 27, 2018 at 11:44 AM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

On 27.04.2018 18:33, Merlin Moncure wrote:

On Fri, Apr 27, 2018 at 10:05 AM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

On 27.04.2018 16:49, Merlin Moncure wrote:

I'm confused here...could be language issues or terminology (I'll look
at your latest code). Here is how I understand things:
Backend=instance of postgres binary
Session=application state within postgres binary (temp tables,
prepared statements, etc.)
Connection=Client side connection

A backend is a process forked by the postmaster.

right, we are saying the same thing here.

AIUI (I could certainly be wrong), within connection pooling, the ratio
of backend/session is still 1:1. The idea is that client connections
when they issue SQL to the server reserve a Backend/Session, use it
for the duration of a transaction, and release it when the transaction
resolves. So many client connections share backends. As with
pgbouncer, the concept of session in a traditional sense is not really
defined; session state management would be handled within the
application itself, or within data within tables, but not within
backend private memory. Does that align with your thinking?

No. The number of sessions is equal to the number of client connections.
So a client is not reserving a "Backend/Session" as happens in pgbouncer.
One backend keeps multiple sessions, and for each session it maintains a
session context which includes the client's connection.
And it is the backend's decision which client's transaction it is going
to execute next.
This is why the built-in pooler is able to provide session semantics
without the backend/session = 1:1 requirement.

I see. I'm not so sure that is a good idea in the general sense :(.
Connections sharing sessions is normal and well understood, and we have
tooling to manage that already (DISCARD). Having the session state
abstracted out and pinned to the client connection seems complex and
wasteful, at least sometimes. What _I_ (maybe not others) want is a
faster pgbouncer that is integrated into the database; IMO it does
everything exactly right.

Yandex's Odyssey is a faster version of pgbouncer (supporting
multithreading and many other things).
Why do you need to integrate it into Postgres if you do not want to
preserve session semantics? Just to minimize the effort needed to
maintain extra components?
But in principle, a pooler can be distributed as a Postgres extension
and started as a background worker.
Would that help to eliminate the administration overhead of a separate
pooler?
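
For illustration, such an extension could register itself from
_PG_init() with the standard background worker API; a minimal sketch
(the library name "builtin_pool" and entry point "pooler_main" are
hypothetical):

    #include "postgres.h"
    #include "fmgr.h"
    #include "postmaster/bgworker.h"

    PG_MODULE_MAGIC;

    void
    _PG_init(void)
    {
        BackgroundWorker worker;

        /* Register one pooler worker, started once recovery finishes */
        memset(&worker, 0, sizeof(worker));
        snprintf(worker.bgw_name, BGW_MAXLEN, "connection pooler");
        worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
        worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
        worker.bgw_restart_time = BGW_NEVER_RESTART;
        snprintf(worker.bgw_library_name, BGW_MAXLEN, "builtin_pool");
        snprintf(worker.bgw_function_name, BGW_MAXLEN, "pooler_main");
        RegisterBackgroundWorker(&worker);
    }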

In any case, my built-in pooler is oriented towards applications which
need session semantics (temporary tables, GUCs, prepared statements,
...).
As I have mentioned many times, it is possible to provide that only
inside the database, not in some external pooler, no matter which
architecture it has.

merlin

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#119Robert Haas
robertmhaas@gmail.com
In reply to: Merlin Moncure (#117)
Re: Built-in connection pooling

On Fri, Apr 27, 2018 at 4:43 PM, Merlin Moncure <mmoncure@gmail.com> wrote:

What _I_ (maybe not others) want is a
faster pgbouncer that is integrated into the database; IMO it does
everything exactly right.

I have to admit that I find that an amazing statement. Not that
pgbouncer is bad technology, but saying that it does everything
exactly right seems like a vast overstatement. That's like saying
that you don't want running water in your house, just a faster motor
for the bucket you use to draw water from the well.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#120Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Robert Haas (#119)
Re: Built-in connection pooling

On 03.05.2018 20:01, Robert Haas wrote:

On Fri, Apr 27, 2018 at 4:43 PM, Merlin Moncure <mmoncure@gmail.com> wrote:

What _I_ (maybe not others) want is a
faster pgbouncer that is integrated into the database; IMO it does
everything exactly right.

I have to admit that I find that an amazing statement. Not that
pgbouncer is bad technology, but saying that it does everything
exactly right seems like a vast overstatement. That's like saying
that you don't want running water in your house, just a faster motor
for the bucket you use to draw water from the well.

Maybe if you are engaged in agriculture at your country house, then
having a well with a good motor pump is better for watering plants than
a water faucet in your kitchen.
But most homeowners prefer to open a tap to wash their hands rather
than perform some complex manipulations with a motor pump.

I am absolutely sure that external connection poolers will always have
their niche: they can be used as a natural proxy between multiple
clients and the DBMS.
Usually HA/load balancing can also be done at this level.

But there are many cases when users just do not want to worry about
connection pooling: they just have some number of clients (which can be
quite large, several times larger than the optimal number of Postgres
backends) and they want them to access the database without introducing
intermediate layers. In this case a built-in connection pooler will be
the ideal solution.

This is from the user's point of view. From a Postgres developer's
point of view, a built-in pooler has some technical advantages compared
with an external pooler.
Some of these advantages could be eliminated by a significant redesign
of the Postgres architecture, for example introducing a shared cache of
prepared statements...
But in any case, the notion of a session context and the possibility to
maintain a larger number of open sessions will always be relevant.

An update on the status of the built-in connection pooler prototype: I
managed to run the regression and isolation tests for pooled
connections.
Right now 6 of 185 regression tests fail and 2 of 67 isolation tests
fail.
For the regression tests the result may vary depending on the parallel
schedule, because of manipulations with roles/permissions which are not
currently supported.
The best results are for the sequential schedule: 5 failed tests; these
failures are caused by differences in pg_prepared_statements resulting
from the "mangled" prepared statement names.

Failures of the isolation tests are caused by unsupported advisory
locks.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#121Merlin Moncure
mmoncure@gmail.com
In reply to: Robert Haas (#119)
Re: Built-in connection pooling

On Thu, May 3, 2018 at 12:01 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Fri, Apr 27, 2018 at 4:43 PM, Merlin Moncure <mmoncure@gmail.com> wrote:

What _I_ (maybe not others) want is a
faster pgbouncer that is integrated into the database; IMO it does
everything exactly right.

I have to admit that I find that an amazing statement. Not that
pgbouncer is bad technology, but saying that it does everything
exactly right seems like a vast overstatement. That's like saying
that you don't want running water in your house, just a faster motor
for the bucket you use to draw water from the well.

Well you certainly have a point there; I do have a strong tendency for
overstatement :-).

Let's put it like this: being able to have connections funnel down to
a smaller number of sessions is a nice feature. Applications that are
large, complex, or super high volume have a tendency towards stateless
(with respect to the database session) architectures anyways, so I tend
not to mind the lack of session features when pooling (prepared
statements perhaps being the big outlier here). It really opens up a
lot of scaling avenues. So a better-phrased statement might be, "I
like the way pgbouncer works, in particular transaction mode pooling,
from the perspective of the applications using it". The current main
pain points are the previously mentioned administrative headaches, and
better performance from a different architecture (pthreads vs libev)
would be nice.

I'm a little skeptical that we're on the right path if we are pushing
a lot of memory consumption into the session level where a session is
pinned all the way back to a client connection. plpgsql function plan
caches can be particularly hungry on memory, and since sessions have
their own GUCs, ISTM each session has to have its own set of them,
since plans depend on the search_path GUC which is session specific.
Previous discussions on managing cache memory consumption centrally (I
do dimly recall you making a proposal on that very thing) haven't
gone past planning stages AFAIK.

If we are breaking 1:1 backend:session relationship, what controls
would we have to manage resource consumption?

merlin

#122Robert Haas
robertmhaas@gmail.com
In reply to: Merlin Moncure (#121)
Re: Built-in connection pooling

On Fri, May 4, 2018 at 11:22 AM, Merlin Moncure <mmoncure@gmail.com> wrote:

If we are breaking 1:1 backend:session relationship, what controls
would we have to manage resource consumption?

I mean, if you have a large number of sessions open, it's going to
take more memory in any design. If there are multiple sessions per
backend, there may be some possibility to save memory by allocating it
per-backend rather than per-session; it shouldn't be any worse than if
you didn't have pooling in the first place.

However, I think that's probably worrying about the wrong end of the
problem first. IMHO, what we ought to start by doing is considering
what a good architecture for this would be, and how to solve the
general problem of per-backend session state. If we figure that out,
then we could worry about optimizing whatever needs optimizing, e.g.
memory usage.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#123Merlin Moncure
mmoncure@gmail.com
In reply to: Robert Haas (#122)
Re: Built-in connection pooling

On Fri, May 4, 2018 at 2:25 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Fri, May 4, 2018 at 11:22 AM, Merlin Moncure <mmoncure@gmail.com> wrote:

If we are breaking 1:1 backend:session relationship, what controls
would we have to manage resource consumption?

I mean, if you have a large number of sessions open, it's going to
take more memory in any design. If there are multiple sessions per
backend, there may be some possibility to save memory by allocating it
per-backend rather than per-session; it shouldn't be any worse than if
you didn't have pooling in the first place.

It is absolutely worse, or at least can be. plpgsql plan caches can
be GUC dependent due to search_path; you might get a different plan
depending on which tables resolve into the function. You might
rightfully regard this as an edge case but there are other 'leakages',
for example, sessions with different planner settings obviously ought
not to share backend plans. Point being, there are many
interdependent things in the session that will make it difficult to
share some portions but not others.

However, I think that's probably worrying about the wrong end of the
problem first. IMHO, what we ought to start by doing is considering
what a good architecture for this would be, and how to solve the
general problem of per-backend session state. If we figure that out,
then we could worry about optimizing whatever needs optimizing, e.g.
memory usage.

Exactly -- being able to manage down resource consumption by
controlling session count is a major feature that ought not to be
overlooked. So I'm kind of signalling that if given a choice between
that (funneling a large pool of connections down to a smaller number
of backends) and externalized shared sessions I'd rather have the
funnel; it solves a number of very important problems with respect to
server robustness. So I'm challenging (in a friendly, curious way) if
breaking session:backend 1:1 is really a good idea. Maybe a
connection pooler implementation can do both of those things or it's
unfair to expect an implementation to do both of them.

merlin

#124Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Merlin Moncure (#121)
Re: Built-in connection pooling

On 04.05.2018 18:22, Merlin Moncure wrote:

On Thu, May 3, 2018 at 12:01 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Fri, Apr 27, 2018 at 4:43 PM, Merlin Moncure <mmoncure@gmail.com> wrote:

What _I_ (maybe not others) want is a
faster pgbouncer that is integrated into the database; IMO it does
everything exactly right.

I have to admit that I find that an amazing statement. Not that
pgbouncer is bad technology, but saying that it does everything
exactly right seems like a vast overstatement. That's like saying
that you don't want running water in your house, just a faster motor
for the bucket you use to draw water from the well.

Well you certainly have a point there; I do have a strong tendency for
overstatement :-).

Let's put it like this: being able to have connections funnel down to
a smaller number of sessions is a nice feature. Applications that are
large, complex, or super high volume have a tendency towards stateless
(with respect to the database session) architectures anyways, so I tend
not to mind the lack of session features when pooling (prepared
statements perhaps being the big outlier here). It really opens up a
lot of scaling avenues. So a better-phrased statement might be, "I
like the way pgbouncer works, in particular transaction mode pooling,
from the perspective of the applications using it". The current main
pain points are the previously mentioned administrative headaches, and
better performance from a different architecture (pthreads vs libev)
would be nice.

I'm a little skeptical that we're on the right path if we are pushing
a lot of memory consumption into the session level where a session is
pinned all the way back to a client connection. plpgsql function plan
caches can be particularly hungry on memory, and since sessions have
their own GUCs, ISTM each session has to have its own set of them,
since plans depend on the search_path GUC which is session specific.
Previous discussions on managing cache memory consumption centrally (I
do dimly recall you making a proposal on that very thing) haven't
gone past planning stages AFAIK.

If we are breaking 1:1 backend:session relationship, what controls
would we have to manage resource consumption?

Most resource consumption is related to backends, not to sessions.
It is first of all the catalog and relation caches. If there are
thousands of tables in a database, then these caches (whose size is not
limited now) can grow to several megabytes.
Taking into account that on modern SMP systems with hundreds of CPU
cores it may be reasonable to spawn hundreds of backends, the total
memory footprint of these caches can be very significant.
This is why I think that we should move towards shared caches... But
that road is not expected to be easy.

Right now the connection pooler allows handling many more user sessions
than there are active backends.
So it helps to partly solve this resource consumption problem.
The session context itself is not expected to be very large: changed
GUCs + prepared statements.

I accept your argument about stateless application architecture.
Moreover, this is more or less the current state of things: most
customers have to use pgbouncer and so have to prohibit all
session-specific stuff in their applications.
What do they lose in this case? Prepared statements? But there are real
alternative solutions: autoprepare, a shared plan cache, ... which allow
using prepared statements without a session context. Temporary tables,
advisory locks, ...?

Temporary tables are actually a very "ugly" thing, causing a lot of
problems:
- they can not be created on a hot standby
- they cause catalog bloat
- dropping a large number of temporary tables may acquire too many
locks.
...
Maybe they should somehow be redesigned? For example, have a shared
catalog entry for a temporary table, but backend-private content... Or
make it possible to change the lifetime of temporary tables from
session to transaction...

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#125Robert Haas
robertmhaas@gmail.com
In reply to: Merlin Moncure (#123)
Re: Built-in connection pooling

On Fri, May 4, 2018 at 5:54 PM, Merlin Moncure <mmoncure@gmail.com> wrote:

I mean, if you have a large number of sessions open, it's going to
take more memory in any design. If there are multiple sessions per
backend, there may be some possibility to save memory by allocating it
per-backend rather than per-session; it shouldn't be any worse than if
you didn't have pooling in the first place.

It is absolutely worse, or at least can be. plpgsql plan caches can
be GUC dependent due to search_path; you might get a different plan
depending on which tables resolve into the function. You might
rightfully regard this as an edge case but there are other 'leakages',
for example, sessions with different planner settings obviously ought
not to share backend plans. Point being, there are many
interdependent things in the session that will make it difficult to
share some portions but not others.

I think you may be misunderstanding my remarks. Suppose I've got 10
real connections multiplexed across 1000 sessions. Barring a
crazy-stupid implementation, that should never use more memory than
1000 completely separate connections. (How could it?) It will of
course use a lot more memory than 10 real connections handling 10
sessions, but that's to be expected.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#126Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Merlin Moncure (#123)
Re: Built-in connection pooling

On 05.05.2018 00:54, Merlin Moncure wrote:

On Fri, May 4, 2018 at 2:25 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Fri, May 4, 2018 at 11:22 AM, Merlin Moncure <mmoncure@gmail.com> wrote:

If we are breaking 1:1 backend:session relationship, what controls
would we have to manage resource consumption?

I mean, if you have a large number of sessions open, it's going to
take more memory in any design. If there are multiple sessions per
backend, there may be some possibility to save memory by allocating it
per-backend rather than per-session; it shouldn't be any worse than if
you didn't have pooling in the first place.

It is absolutely worse, or at least can be. plpgsql plan caches can
be GUC dependent due to search_path; you might get a different plan
depending on which tables resolve into the function. You might
rightfully regard this as an edge case but there are other 'leakages',
for example, sessions with different planner settings obviously ought
not to share backend plans. Point being, there are many
interdependent things in the session that will make it difficult to
share some portions but not others.

Right now, in my built-in connection pool implementation there is a
shared prepared statement cache for all sessions in one backend,
but in effect each session has its own set of prepared statements: I
just append the session identifier to the prepared statement name to
make it unique.
So there is no problem with different execution plans for different
clients caused by specific GUC settings (like enable_seqscan or
max_parallel_workers_per_gather).
But the primary reason for this behavior is to avoid prepared statement
name conflicts between different clients.
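
Roughly (the separator actually used by the patch is an implementation
detail):

    static const char *
    mangle_stmt_name(const char *stmt_name, int session_id)
    {
        static char mangled[NAMEDATALEN];

        /* "mystmt" prepared by session 7 is stored as "mystmt.7" */
        snprintf(mangled, sizeof(mangled), "%s.%d", stmt_name, session_id);
        return mangled;
    }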

From my point of view, there are very few cases where using
client-specific plans makes any sense.
In most cases the requirement is quite the opposite: I want to be able
to prepare an execution plan (using hints, which Postgres lacks, GUCs,
adjusted statistics, ...) which will be used by all clients.
The most natural and convenient way to achieve that is a shared plan
cache.
But a shared plan cache is a different story, not directly related to
connection pooling.

Point being, there are many interdependent things in the session that will make it difficult to share some portions but not others.

I do not see that many such things... Yes, GUCs can affect behavior
within a session. But GUCs are now supported: each session can have its
own set of GUCs.
Prepared plans may depend on GUCs, but they are also private to each
session now. What else?

And in any case, with an external connection pooler you are not able to
use session semantics at all: GUCs, prepared statements, temporary
tables, advisory locks, ...
With the built-in connection pooler you can use sessions, with some
restrictions (lack of advisory locks, for example). That is better than
nothing, isn't it?

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#127Bruce Momjian
bruce@momjian.us
In reply to: Robert Haas (#122)
Re: Built-in connection pooling

On Fri, May 4, 2018 at 03:25:15PM -0400, Robert Haas wrote:

On Fri, May 4, 2018 at 11:22 AM, Merlin Moncure <mmoncure@gmail.com> wrote:

If we are breaking 1:1 backend:session relationship, what controls
would we have to manage resource consumption?

I mean, if you have a large number of sessions open, it's going to
take more memory in any design. If there are multiple sessions per
backend, there may be some possibility to save memory by allocating it
per-backend rather than per-session; it shouldn't be any worse than if
you didn't have pooling in the first place.

However, I think that's probably worrying about the wrong end of the
problem first. IMHO, what we ought to start by doing is considering
what a good architecture for this would be, and how to solve the
general problem of per-backend session state. If we figure that out,
then we could worry about optimizing whatever needs optimizing, e.g.
memory usage.

Yes, I think this matches my previous question --- if we are going to
swap out session state to allow multiple sessions to multiplex in the
same OS process, and that swapping has similar overhead to how the OS
swaps processes, why not just let the OS continue doing the process
swapping.

I think we need to first find out what it is that makes high session
counts slow. For example, if we swap out session state, will we check
the visibility rules for the swapped-out session? If not, and that is
what makes swapping session state make Postgres faster, let's just find
a way to skip checking visibility rules for inactive sessions and get
the same benefit more simply.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +
#128Robert Haas
robertmhaas@gmail.com
In reply to: Bruce Momjian (#127)
Re: Built-in connection pooling

On Thu, May 17, 2018 at 9:09 PM, Bruce Momjian <bruce@momjian.us> wrote:

However, I think that's probably worrying about the wrong end of the
problem first. IMHO, what we ought to start by doing is considering
what a good architecture for this would be, and how to solve the
general problem of per-backend session state. If we figure that out,
then we could worry about optimizing whatever needs optimizing, e.g.
memory usage.

Yes, I think this matches my previous question --- if we are going to
swap out session state to allow multiple sessions to multiplex in the
same OS process, and that swapping has similar overhead to how the OS
swaps processes, why not just let the OS continue doing the process
swapping.

I think we need to first find out what it is that makes high session
counts slow. For example, if we swap out session state, will we check
the visibility rules for the swapped-out session? If not, and that is
what makes swapping session state make Postgres faster, let's just find
a way to skip checking visibility rules for inactive sessions and get
the same benefit more simply.

I don't think we're really in agreement. I am pretty convinced that
we can save a lot of memory and other CPU resources if we don't need a
separate process for each session. I don't have any doubt that the
benefit is there. My point is rather that we need an organized way to
attack the problem of saving and restoring session state, not an
ad-hoc approach for each particular kind of session state.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#129Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Konstantin Knizhnik (#126)
1 attachment(s)
Re: Built-in connection pooling

A new version of the built-in connection pool is attached to this mail.
Now the client's startup packet is received by one of the listener
workers, so the postmaster knows the database/user name of the received
connection and is able to marshal it to the proper connection pool.
Right now SSL is not supported.

Also I provided a general mechanism for moving static variables to the
session context. The file include/storage/sessionvars.h contains the
list of such variables, which are saved to the session context on
reschedule.
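
The save/restore pattern for such a list can be sketched in X-macro
style (the macro name and the entries below are illustrative, not the
actual contents of sessionvars.h):

    /* sessionvars.h (hypothetical layout): one line per variable */
    SESSION_VAR(Oid, myTempNamespace)
    SESSION_VAR(struct SeqTableData *, last_used_seq)

    /* On reschedule the list is expanded twice: once to save the
     * statics into the outgoing session's context, once to restore
     * them from the incoming one. */
    #define SESSION_VAR(type, name) old_session->name = name;
    #include "storage/sessionvars.h"
    #undef SESSION_VAR

    #define SESSION_VAR(type, name) name = new_session->name;
    #include "storage/sessionvars.h"
    #undef SESSION_VAR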

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

session_pool-10.patchtext/x-patch; name=session_pool-10.patchDownload
diff --git a/contrib/test_decoding/sql/messages.sql b/contrib/test_decoding/sql/messages.sql
index cf3f773..14c4163 100644
--- a/contrib/test_decoding/sql/messages.sql
+++ b/contrib/test_decoding/sql/messages.sql
@@ -23,6 +23,8 @@ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'for
 
 -- test db filtering
 \set prevdb :DBNAME
+show session_pool_size;
+show session_pool_ports;
 \c template1
 
 SELECT 'otherdb1' FROM pg_logical_emit_message(false, 'test', 'otherdb1');
diff --git a/src/backend/catalog/namespace.c b/src/backend/catalog/namespace.c
index 5d13e6a..5a93c7e 100644
--- a/src/backend/catalog/namespace.c
+++ b/src/backend/catalog/namespace.c
@@ -178,7 +178,6 @@ static List *overrideStack = NIL;
  * committed its creation, depending on whether myTempNamespace is valid.
  */
 static Oid	myTempNamespace = InvalidOid;
-
 static Oid	myTempToastNamespace = InvalidOid;
 
 static SubTransactionId myTempNamespaceSubID = InvalidSubTransactionId;
@@ -193,6 +192,7 @@ char	   *namespace_search_path = NULL;
 /* Local functions */
 static void recomputeNamespacePath(void);
 static void InitTempTableNamespace(void);
+static Oid  GetTempTableNamespace(void);
 static void RemoveTempRelations(Oid tempNamespaceId);
 static void RemoveTempRelationsCallback(int code, Datum arg);
 static void NamespaceCallback(Datum arg, int cacheid, uint32 hashvalue);
@@ -460,9 +460,7 @@ RangeVarGetCreationNamespace(const RangeVar *newRelation)
 		if (strcmp(newRelation->schemaname, "pg_temp") == 0)
 		{
 			/* Initialize temp namespace if first time through */
-			if (!OidIsValid(myTempNamespace))
-				InitTempTableNamespace();
-			return myTempNamespace;
+			return GetTempTableNamespace();
 		}
 		/* use exact schema given */
 		namespaceId = get_namespace_oid(newRelation->schemaname, false);
@@ -471,9 +469,7 @@ RangeVarGetCreationNamespace(const RangeVar *newRelation)
 	else if (newRelation->relpersistence == RELPERSISTENCE_TEMP)
 	{
 		/* Initialize temp namespace if first time through */
-		if (!OidIsValid(myTempNamespace))
-			InitTempTableNamespace();
-		return myTempNamespace;
+		return GetTempTableNamespace();
 	}
 	else
 	{
@@ -482,8 +478,7 @@ RangeVarGetCreationNamespace(const RangeVar *newRelation)
 		if (activeTempCreationPending)
 		{
 			/* Need to initialize temp namespace */
-			InitTempTableNamespace();
-			return myTempNamespace;
+			return GetTempTableNamespace();
 		}
 		namespaceId = activeCreationNamespace;
 		if (!OidIsValid(namespaceId))
@@ -2921,9 +2916,7 @@ LookupCreationNamespace(const char *nspname)
 	if (strcmp(nspname, "pg_temp") == 0)
 	{
 		/* Initialize temp namespace if first time through */
-		if (!OidIsValid(myTempNamespace))
-			InitTempTableNamespace();
-		return myTempNamespace;
+		return GetTempTableNamespace();
 	}
 
 	namespaceId = get_namespace_oid(nspname, false);
@@ -2986,9 +2979,7 @@ QualifiedNameGetCreationNamespace(List *names, char **objname_p)
 		if (strcmp(schemaname, "pg_temp") == 0)
 		{
 			/* Initialize temp namespace if first time through */
-			if (!OidIsValid(myTempNamespace))
-				InitTempTableNamespace();
-			return myTempNamespace;
+			return GetTempTableNamespace();
 		}
 		/* use exact schema given */
 		namespaceId = get_namespace_oid(schemaname, false);
@@ -3001,8 +2992,7 @@ QualifiedNameGetCreationNamespace(List *names, char **objname_p)
 		if (activeTempCreationPending)
 		{
 			/* Need to initialize temp namespace */
-			InitTempTableNamespace();
-			return myTempNamespace;
+			return GetTempTableNamespace();
 		}
 		namespaceId = activeCreationNamespace;
 		if (!OidIsValid(namespaceId))
@@ -3254,16 +3244,28 @@ int
 GetTempNamespaceBackendId(Oid namespaceId)
 {
 	int			result;
-	char	   *nspname;
+	char	   *nspname,
+			   *addlevel;
 
 	/* See if the namespace name starts with "pg_temp_" or "pg_toast_temp_" */
 	nspname = get_namespace_name(namespaceId);
 	if (!nspname)
 		return InvalidBackendId;	/* no such namespace? */
 	if (strncmp(nspname, "pg_temp_", 8) == 0)
-		result = atoi(nspname + 8);
+	{
+		/* check for session id */
+		if ((addlevel = strstr(nspname + 8, "_")) != NULL)
+			result = atoi(addlevel + 1);
+		else
+			result = atoi(nspname + 8);
+	}
 	else if (strncmp(nspname, "pg_toast_temp_", 14) == 0)
-		result = atoi(nspname + 14);
+	{
+		if ((addlevel = strstr(nspname + 14, "_")) != NULL)
+			result = atoi(addlevel + 1);
+		else
+			result = atoi(nspname + 14);
+	}
 	else
 		result = InvalidBackendId;
 	pfree(nspname);
@@ -3309,8 +3311,11 @@ void
 SetTempNamespaceState(Oid tempNamespaceId, Oid tempToastNamespaceId)
 {
 	/* Worker should not have created its own namespaces ... */
-	Assert(myTempNamespace == InvalidOid);
-	Assert(myTempToastNamespace == InvalidOid);
+	if (!ActiveSession)
+	{
+		Assert(myTempNamespace == InvalidOid);
+		Assert(myTempToastNamespace == InvalidOid);
+	}
 	Assert(myTempNamespaceSubID == InvalidSubTransactionId);
 
 	/* Assign same namespace OIDs that leader has */
@@ -3830,6 +3835,24 @@ recomputeNamespacePath(void)
 	list_free(oidlist);
 }
 
+static Oid
+GetTempTableNamespace(void)
+{
+	if (ActiveSession)
+	{
+		if (!OidIsValid(ActiveSession->tempNamespace))
+			InitTempTableNamespace();
+		else
+			myTempNamespace = ActiveSession->tempNamespace;
+	}
+	else
+	{
+		if (!OidIsValid(myTempNamespace))
+			InitTempTableNamespace();
+	}
+	return myTempNamespace;
+}
+
 /*
  * InitTempTableNamespace
  *		Initialize temp table namespace on first use in a particular backend
@@ -3841,8 +3864,6 @@ InitTempTableNamespace(void)
 	Oid			namespaceId;
 	Oid			toastspaceId;
 
-	Assert(!OidIsValid(myTempNamespace));
-
 	/*
 	 * First, do permission check to see if we are authorized to make temp
 	 * tables.  We use a nonstandard error message here since "databasename:
@@ -3881,7 +3902,12 @@ InitTempTableNamespace(void)
 				(errcode(ERRCODE_READ_ONLY_SQL_TRANSACTION),
 				 errmsg("cannot create temporary tables during a parallel operation")));
 
-	snprintf(namespaceName, sizeof(namespaceName), "pg_temp_%d", MyBackendId);
+	if (ActiveSession)
+		snprintf(namespaceName, sizeof(namespaceName), "pg_temp_%d_%u",
+					ActiveSession->id, MyBackendId);
+	else
+		snprintf(namespaceName, sizeof(namespaceName), "pg_temp_%d",
+					MyBackendId);
 
 	namespaceId = get_namespace_oid(namespaceName, true);
 	if (!OidIsValid(namespaceId))
@@ -3913,8 +3939,12 @@ InitTempTableNamespace(void)
 	 * it. (We assume there is no need to clean it out if it does exist, since
 	 * dropping a parent table should make its toast table go away.)
 	 */
-	snprintf(namespaceName, sizeof(namespaceName), "pg_toast_temp_%d",
-			 MyBackendId);
+	if (ActiveSession)
+		snprintf(namespaceName, sizeof(namespaceName), "pg_toast_temp_%d_%u",
+					ActiveSession->id, MyBackendId);
+	else
+		snprintf(namespaceName, sizeof(namespaceName), "pg_toast_temp_%u",
+					MyBackendId);
 
 	toastspaceId = get_namespace_oid(namespaceName, true);
 	if (!OidIsValid(toastspaceId))
@@ -3945,6 +3975,11 @@ InitTempTableNamespace(void)
 	 */
 	MyProc->tempNamespaceId = namespaceId;
 
+	if (ActiveSession)
+	{
+		ActiveSession->tempNamespace = namespaceId;
+		ActiveSession->tempToastNamespace = toastspaceId;
+	}
 	/* It should not be done already. */
 	AssertState(myTempNamespaceSubID == InvalidSubTransactionId);
 	myTempNamespaceSubID = GetCurrentSubTransactionId();
@@ -3974,6 +4009,11 @@ AtEOXact_Namespace(bool isCommit, bool parallel)
 		{
 			myTempNamespace = InvalidOid;
 			myTempToastNamespace = InvalidOid;
+			if (ActiveSession)
+			{
+				ActiveSession->tempNamespace = InvalidOid;
+			   	ActiveSession->tempToastNamespace = InvalidOid;
+  	  		}
 			baseSearchPathValid = false;	/* need to rebuild list */
 
 			/*
@@ -4121,13 +4161,16 @@ RemoveTempRelations(Oid tempNamespaceId)
 static void
 RemoveTempRelationsCallback(int code, Datum arg)
 {
-	if (OidIsValid(myTempNamespace))	/* should always be true */
+	Oid		tempNamespace = ActiveSession ?
+		ActiveSession->tempNamespace : myTempNamespace;
+
+	if (OidIsValid(tempNamespace))	/* should always be true */
 	{
 		/* Need to ensure we have a usable transaction. */
 		AbortOutOfAnyTransaction();
 		StartTransactionCommand();
 
-		RemoveTempRelations(myTempNamespace);
+		RemoveTempRelations(tempNamespace);
 
 		CommitTransactionCommand();
 	}
@@ -4137,10 +4180,19 @@ RemoveTempRelationsCallback(int code, Datum arg)
  * Remove all temp tables from the temporary namespace.
  */
 void
-ResetTempTableNamespace(void)
+ResetTempTableNamespace(Oid npc)
 {
-	if (OidIsValid(myTempNamespace))
-		RemoveTempRelations(myTempNamespace);
+	if (OidIsValid(npc))
+	{
+		AbortOutOfAnyTransaction();
+		StartTransactionCommand();
+		RemoveTempRelations(npc);
+		CommitTransactionCommand();
+	}
+	else
+		/* global */
+		if (OidIsValid(myTempNamespace))
+			RemoveTempRelations(myTempNamespace);
 }
 
 
diff --git a/src/backend/catalog/pg_db_role_setting.c b/src/backend/catalog/pg_db_role_setting.c
index e123691..23ff527 100644
--- a/src/backend/catalog/pg_db_role_setting.c
+++ b/src/backend/catalog/pg_db_role_setting.c
@@ -16,6 +16,7 @@
 #include "catalog/indexing.h"
 #include "catalog/objectaccess.h"
 #include "catalog/pg_db_role_setting.h"
+#include "storage/proc.h"
 #include "utils/fmgroids.h"
 #include "utils/rel.h"
 #include "utils/tqual.h"
diff --git a/src/backend/catalog/storage.c b/src/backend/catalog/storage.c
index 5df4382..f57a950 100644
--- a/src/backend/catalog/storage.c
+++ b/src/backend/catalog/storage.c
@@ -24,6 +24,7 @@
 #include "access/xlog.h"
 #include "access/xloginsert.h"
 #include "access/xlogutils.h"
+#include "catalog/namespace.h"
 #include "catalog/storage.h"
 #include "catalog/storage_xlog.h"
 #include "storage/freespace.h"
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index 9bc67ce..3c90f8d 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -2447,7 +2447,7 @@ CopyFrom(CopyState cstate)
 		 * registers the snapshot it uses.
 		 */
 		InvalidateCatalogSnapshot();
-		if (!ThereAreNoPriorRegisteredSnapshots() || !ThereAreNoReadyPortals())
+		if (!ThereAreNoPriorRegisteredSnapshots() || (SessionPoolSize == 0 && !ThereAreNoReadyPortals()))
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 					 errmsg("cannot perform FREEZE because of prior transaction activity")));
diff --git a/src/backend/commands/discard.c b/src/backend/commands/discard.c
index 01a999c..363a52a 100644
--- a/src/backend/commands/discard.c
+++ b/src/backend/commands/discard.c
@@ -45,7 +45,7 @@ DiscardCommand(DiscardStmt *stmt, bool isTopLevel)
 			break;
 
 		case DISCARD_TEMP:
-			ResetTempTableNamespace();
+			ResetTempTableNamespace(InvalidOid);
 			break;
 
 		default:
@@ -73,6 +73,6 @@ DiscardAll(bool isTopLevel)
 	Async_UnlistenAll();
 	LockReleaseAll(USER_LOCKMETHOD, true);
 	ResetPlanCache();
-	ResetTempTableNamespace();
+	ResetTempTableNamespace(InvalidOid);
 	ResetSequenceCaches();
 }
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index b945b15..1696500 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,9 +30,11 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
+#include "utils/memutils.h"
 #include "utils/snapmgr.h"
 #include "utils/timestamp.h"
 
@@ -43,9 +45,7 @@
  * The keys for this hash table are the arguments to PREPARE and EXECUTE
  * (statement names); the entries are PreparedStatement structs.
  */
-static HTAB *prepared_queries = NULL;
-
-static void InitQueryHashTable(void);
+static HTAB *InitQueryHashTable(MemoryContext mcxt);
 static ParamListInfo EvaluateParams(PreparedStatement *pstmt, List *params,
 			   const char *queryString, EState *estate);
 static Datum build_regtype_array(Oid *param_types, int num_params);
@@ -427,20 +427,43 @@ EvaluateParams(PreparedStatement *pstmt, List *params,
 /*
  * Initialize query hash table upon first use.
  */
-static void
-InitQueryHashTable(void)
+static HTAB *
+InitQueryHashTable(MemoryContext mcxt)
 {
-	HASHCTL		hash_ctl;
+	HTAB		   *res;
+	MemoryContext	old_mcxt;
+	HASHCTL			hash_ctl;
 
 	MemSet(&hash_ctl, 0, sizeof(hash_ctl));
 
 	hash_ctl.keysize = NAMEDATALEN;
 	hash_ctl.entrysize = sizeof(PreparedStatement);
+	hash_ctl.hcxt = mcxt;
+
+	old_mcxt = MemoryContextSwitchTo(mcxt);
+	res = hash_create("Prepared Queries", 32, &hash_ctl, HASH_ELEM | HASH_CONTEXT);
+	MemoryContextSwitchTo(old_mcxt);
 
-	prepared_queries = hash_create("Prepared Queries",
-								   32,
-								   &hash_ctl,
-								   HASH_ELEM);
+	return res;
+}
+
+static HTAB *
+get_prepared_queries_htab(bool init)
+{
+	static HTAB *prepared_queries = NULL;
+
+	if (ActiveSession)
+	{
+		if (init && !ActiveSession->prepared_queries)
+			ActiveSession->prepared_queries = InitQueryHashTable(ActiveSession->memory);
+		return ActiveSession->prepared_queries;
+	}
+
+	/* Initialize the global hash table, if necessary */
+	if (init && !prepared_queries)
+		prepared_queries = InitQueryHashTable(TopMemoryContext);
+
+	return prepared_queries;
 }
 
 /*
@@ -458,12 +481,9 @@ StorePreparedStatement(const char *stmt_name,
 	TimestampTz cur_ts = GetCurrentStatementStartTimestamp();
 	bool		found;
 
-	/* Initialize the hash table, if necessary */
-	if (!prepared_queries)
-		InitQueryHashTable();
 
 	/* Add entry to hash table */
-	entry = (PreparedStatement *) hash_search(prepared_queries,
+	entry = (PreparedStatement *) hash_search(get_prepared_queries_htab(true),
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
@@ -495,13 +515,14 @@ PreparedStatement *
 FetchPreparedStatement(const char *stmt_name, bool throwError)
 {
 	PreparedStatement *entry;
+	HTAB			  *queries = get_prepared_queries_htab(false);
 
 	/*
 	 * If the hash table hasn't been initialized, it can't be storing
 	 * anything, therefore it couldn't possibly store our plan.
 	 */
-	if (prepared_queries)
-		entry = (PreparedStatement *) hash_search(prepared_queries,
+	if (queries)
+		entry = (PreparedStatement *) hash_search(queries,
 												  stmt_name,
 												  HASH_FIND,
 												  NULL);
@@ -579,7 +600,11 @@ DeallocateQuery(DeallocateStmt *stmt)
 void
 DropPreparedStatement(const char *stmt_name, bool showError)
 {
-	PreparedStatement *entry;
+	PreparedStatement	*entry;
+	HTAB				*queries = get_prepared_queries_htab(false);
+
+	if (!queries)
+		return;
 
 	/* Find the query's hash table entry; raise error if wanted */
 	entry = FetchPreparedStatement(stmt_name, showError);
@@ -590,7 +615,7 @@ DropPreparedStatement(const char *stmt_name, bool showError)
 		DropCachedPlan(entry->plansource);
 
 		/* Now we can remove the hash table entry */
-		hash_search(prepared_queries, entry->stmt_name, HASH_REMOVE, NULL);
+		hash_search(queries, entry->stmt_name, HASH_REMOVE, NULL);
 	}
 }
 
@@ -602,20 +627,21 @@ DropAllPreparedStatements(void)
 {
 	HASH_SEQ_STATUS seq;
 	PreparedStatement *entry;
+	HTAB			  *queries = get_prepared_queries_htab(false);
 
 	/* nothing cached */
-	if (!prepared_queries)
+	if (!queries)
 		return;
 
 	/* walk over cache */
-	hash_seq_init(&seq, prepared_queries);
+	hash_seq_init(&seq, queries);
 	while ((entry = hash_seq_search(&seq)) != NULL)
 	{
 		/* Release the plancache entry */
 		DropCachedPlan(entry->plansource);
 
 		/* Now we can remove the hash table entry */
-		hash_search(prepared_queries, entry->stmt_name, HASH_REMOVE, NULL);
+		hash_search(queries, entry->stmt_name, HASH_REMOVE, NULL);
 	}
 }
 
@@ -710,10 +736,11 @@ Datum
 pg_prepared_statement(PG_FUNCTION_ARGS)
 {
 	ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
-	TupleDesc	tupdesc;
+	TupleDesc		tupdesc;
 	Tuplestorestate *tupstore;
-	MemoryContext per_query_ctx;
-	MemoryContext oldcontext;
+	MemoryContext	per_query_ctx;
+	MemoryContext	oldcontext;
+	HTAB		   *queries;
 
 	/* check to see if caller supports us returning a tuplestore */
 	if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
@@ -757,13 +784,13 @@ pg_prepared_statement(PG_FUNCTION_ARGS)
 	/* generate junk in short-term context */
 	MemoryContextSwitchTo(oldcontext);
 
-	/* hash table might be uninitialized */
-	if (prepared_queries)
+	queries = get_prepared_queries_htab(false);
+	if (queries)
 	{
 		HASH_SEQ_STATUS hash_seq;
 		PreparedStatement *prep_stmt;
 
-		hash_seq_init(&hash_seq, prepared_queries);
+		hash_seq_init(&hash_seq, queries);
 		while ((prep_stmt = hash_seq_search(&hash_seq)) != NULL)
 		{
 			Datum		values[5];
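
As a client-visible sanity check of the per-session hash table above, here is a
small libpq program (my own test sketch, not part of the patch): two sessions
that end up multiplexed onto the same pooled backend should each keep their own
prepared statement under the same name. The connection string is an assumption;
adjust to taste.

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
	PGconn	   *c1 = PQconnectdb("dbname=postgres");
	PGconn	   *c2 = PQconnectdb("dbname=postgres");
	PGresult   *r;
	const char *v = "1";

	if (PQstatus(c1) != CONNECTION_OK || PQstatus(c2) != CONNECTION_OK)
	{
		fprintf(stderr, "connection failed\n");
		return 1;
	}

	/* Same statement name "q" on both connections */
	PQclear(PQprepare(c1, "q", "SELECT $1::int + 1", 1, NULL));
	PQclear(PQprepare(c2, "q", "SELECT $1::int + 2", 1, NULL));

	/* Each session must see its own plan */
	r = PQexecPrepared(c1, "q", 1, &v, NULL, NULL, 0);
	printf("c1: %s\n", PQgetvalue(r, 0, 0));	/* expect 2 */
	PQclear(r);

	r = PQexecPrepared(c2, "q", 1, &v, NULL, NULL, 0);
	printf("c2: %s\n", PQgetvalue(r, 0, 0));	/* expect 3 */
	PQclear(r);

	PQfinish(c1);
	PQfinish(c2);
	return 0;
}
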
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 89122d4..7843d9d 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -90,8 +90,6 @@ static HTAB *seqhashtab = NULL; /* hash table for SeqTable items */
  * last_used_seq is updated by nextval() to point to the last used
  * sequence.
  */
-static SeqTableData *last_used_seq = NULL;
-
 static void fill_seq_with_data(Relation rel, HeapTuple tuple);
 static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
diff --git a/src/backend/libpq/be-secure.c b/src/backend/libpq/be-secure.c
index d349d7c..3afacee 100644
--- a/src/backend/libpq/be-secure.c
+++ b/src/backend/libpq/be-secure.c
@@ -144,6 +144,7 @@ secure_read(Port *port, void *ptr, size_t len)
 {
 	ssize_t		n;
 	int			waitfor;
+	WaitEventSet	*waitset = pq_get_current_waitset();
 
 retry:
 #ifdef USE_SSL
@@ -166,9 +167,9 @@ retry:
 
 		Assert(waitfor);
 
-		ModifyWaitEvent(FeBeWaitSet, 0, waitfor, NULL);
+		ModifyWaitEvent(waitset, 0, waitfor, NULL);
 
-		WaitEventSetWait(FeBeWaitSet, -1 /* no timeout */ , &event, 1,
+		WaitEventSetWait(waitset, -1 /* no timeout */ , &event, 1,
 						 WAIT_EVENT_CLIENT_READ);
 
 		/*
@@ -247,6 +248,7 @@ secure_write(Port *port, void *ptr, size_t len)
 {
 	ssize_t		n;
 	int			waitfor;
+	WaitEventSet	*waitset = pq_get_current_waitset();
 
 retry:
 	waitfor = 0;
@@ -268,9 +270,9 @@ retry:
 
 		Assert(waitfor);
 
-		ModifyWaitEvent(FeBeWaitSet, 0, waitfor, NULL);
+		ModifyWaitEvent(waitset, 0, waitfor, NULL);
 
-		WaitEventSetWait(FeBeWaitSet, -1 /* no timeout */ , &event, 1,
+		WaitEventSetWait(waitset, -1 /* no timeout */ , &event, 1,
 						 WAIT_EVENT_CLIENT_WRITE);
 
 		/* See comments in secure_read. */
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index a4f6d4d..51d4f0b 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -13,7 +13,7 @@
  * copy is aborted by an ereport(ERROR), we need to close out the copy so that
  * the frontend gets back into sync.  Therefore, these routines have to be
  * aware of COPY OUT state.  (New COPY-OUT is message-based and does *not*
- * set the DoingCopyOut flag.)
+ * set the is_doing_copyout flag.)
  *
  * NOTE: generally, it's a bad idea to emit outgoing messages directly with
  * pq_putbytes(), especially if the message would require multiple calls
@@ -87,12 +87,13 @@
 #ifdef _MSC_VER					/* mstcpip.h is missing on mingw */
 #include <mstcpip.h>
 #endif
 
 #include "common/ip.h"
 #include "libpq/libpq.h"
 #include "miscadmin.h"
 #include "port/pg_bswap.h"
 #include "storage/ipc.h"
+#include "storage/proc.h"
 #include "utils/guc.h"
 #include "utils/memutils.h"
 
@@ -134,23 +136,6 @@ static List *sock_paths = NIL;
 #define PQ_SEND_BUFFER_SIZE 8192
 #define PQ_RECV_BUFFER_SIZE 8192
 
-static char *PqSendBuffer;
-static int	PqSendBufferSize;	/* Size send buffer */
-static int	PqSendPointer;		/* Next index to store a byte in PqSendBuffer */
-static int	PqSendStart;		/* Next index to send a byte in PqSendBuffer */
-
-static char PqRecvBuffer[PQ_RECV_BUFFER_SIZE];
-static int	PqRecvPointer;		/* Next index to read a byte from PqRecvBuffer */
-static int	PqRecvLength;		/* End of data available in PqRecvBuffer */
-
-/*
- * Message status
- */
-static bool PqCommBusy;			/* busy sending data to the client */
-static bool PqCommReadingMsg;	/* in the middle of reading a message */
-static bool DoingCopyOut;		/* in old-protocol COPY OUT processing */
-
-
 /* Internal functions */
 static void socket_comm_reset(void);
 static void socket_close(int code, Datum arg);
@@ -181,28 +166,55 @@ static PQcommMethods PqCommSocketMethods = {
 	socket_endcopyout
 };
 
-PQcommMethods *PqCommMethods = &PqCommSocketMethods;
+/* This struct gathers per-connection state that used to be global variables */
+struct PQcommState {
+	Port		   *port;
+	MemoryContext	mcxt;
 
-WaitEventSet *FeBeWaitSet;
+	/* Message status */
+	bool	is_busy;			/* busy sending data to the client */
+	bool	is_reading;			/* in the middle of reading a message */
+	bool	is_doing_copyout;	/* in old-protocol COPY OUT processing */
+	char   *send_buf;
 
+	int		send_bufsize;	/* size of send buffer */
+	int		send_offset;	/* Next index to store a byte in send_buf */
+	int		send_start;		/* Next index to send a byte in send_buf */
 
-/* --------------------------------
- *		pq_init - initialize libpq at backend startup
- * --------------------------------
+	char	recv_buf[PQ_RECV_BUFFER_SIZE];
+	int		recv_offset;	/* Next index to read a byte from recv_buf */
+	int		recv_len;		/* End of data available in recv_buf */
+
+	/* Wait events set */
+	WaitEventSet *wait_events;
+};
+
+static struct PQcommState *pqstate = NULL;
+PQcommMethods *PqCommMethods = &PqCommSocketMethods;
+
+/*
+ * Create common wait event for a backend
  */
-void
-pq_init(void)
+WaitEventSet *
+pq_create_backend_event_set(MemoryContext mcxt, Port *port,
+							bool onlySock)
 {
-	/* initialize state variables */
-	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
-	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
-	PqCommBusy = false;
-	PqCommReadingMsg = false;
-	DoingCopyOut = false;
+	WaitEventSet *result;
+	int				nevents = onlySock ? 1 : 3;
+
+	result = CreateWaitEventSet(mcxt, nevents);
+
+	AddWaitEventToSet(result, WL_SOCKET_WRITEABLE, port->sock,
+					  NULL, NULL);
+
+	if (!onlySock)
+	{
+		AddWaitEventToSet(result, WL_LATCH_SET, -1, MyLatch, NULL);
+		AddWaitEventToSet(result, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
+		/* set up process-exit hook to close the socket */
+		on_proc_exit(socket_close, 0);
+	}
 
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
@@ -215,16 +227,65 @@ pq_init(void)
 	 * infinite recursion.
 	 */
 #ifndef WIN32
-	if (!pg_set_noblock(MyProcPort->sock))
+	if (!pg_set_noblock(port->sock))
 		ereport(COMMERROR,
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
-	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
-	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
-					  NULL, NULL);
-	AddWaitEventToSet(FeBeWaitSet, WL_LATCH_SET, -1, MyLatch, NULL);
-	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
+	return result;
+}
+
+/* --------------------------------
+ *		pq_init - initialize libpq at backend startup
+ * --------------------------------
+ */
+void *
+pq_init(MemoryContext mcxt)
+{
+	struct PQcommState *state =
+		MemoryContextAllocZero(mcxt, sizeof(struct PQcommState));
+
+	/* initialize state variables */
+	state->mcxt = mcxt;
+
+	state->send_bufsize = PQ_SEND_BUFFER_SIZE;
+	state->send_buf = MemoryContextAlloc(mcxt, state->send_bufsize);
+	state->send_offset = state->send_start = state->recv_offset = state->recv_len = 0;
+	state->is_busy = false;
+	state->is_reading = false;
+	state->is_doing_copyout = false;
+
+	state->wait_events = NULL;
+	return (void *) state;
+}
+
+void
+pq_set_current_state(void *state, Port *port, WaitEventSet *set)
+{
+	pqstate = (struct PQcommState *) state;
+
+	if (pqstate)
+	{
+		pq_reset();
+		pqstate->port = port;
+		pqstate->wait_events = set;
+	}
+}
+
+WaitEventSet *
+pq_get_current_waitset(void)
+{
+	return pqstate ? pqstate->wait_events : NULL;
+}
+
+void
+pq_reset(void)
+{
+	pqstate->send_offset = pqstate->send_start = 0;
+	pqstate->recv_offset = pqstate->recv_len = 0;
+	pqstate->is_busy = false;
+	pqstate->is_reading = false;
+	pqstate->is_doing_copyout = false;
 }
 
 /* --------------------------------
@@ -239,7 +300,7 @@ static void
 socket_comm_reset(void)
 {
 	/* Do not throw away pending data, but do reset the busy flag */
-	PqCommBusy = false;
+	pqstate->is_busy = false;
 	/* We can abort any old-style COPY OUT, too */
 	pq_endcopyout(true);
 }
@@ -255,8 +316,8 @@ socket_comm_reset(void)
 static void
 socket_close(int code, Datum arg)
 {
-	/* Nothing to do in a standalone backend, where MyProcPort is NULL. */
-	if (MyProcPort != NULL)
+	/* Nothing to do in a standalone backend, where pqstate->port is NULL. */
+	if (pqstate->port != NULL)
 	{
 #if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
 #ifdef ENABLE_GSS
@@ -267,11 +328,11 @@ socket_close(int code, Datum arg)
 		 * BackendInitialize(), because pg_GSS_recvauth() makes first use of
 		 * "ctx" and "cred".
 		 */
-		if (MyProcPort->gss->ctx != GSS_C_NO_CONTEXT)
-			gss_delete_sec_context(&min_s, &MyProcPort->gss->ctx, NULL);
+		if (pqstate->port->gss->ctx != GSS_C_NO_CONTEXT)
+			gss_delete_sec_context(&min_s, &pqstate->port->gss->ctx, NULL);
 
-		if (MyProcPort->gss->cred != GSS_C_NO_CREDENTIAL)
-			gss_release_cred(&min_s, &MyProcPort->gss->cred);
+		if (pqstate->port->gss->cred != GSS_C_NO_CREDENTIAL)
+			gss_release_cred(&min_s, &pqstate->port->gss->cred);
 #endif							/* ENABLE_GSS */
 
 		/*
@@ -279,14 +340,14 @@ socket_close(int code, Datum arg)
 		 * postmaster child free this, doing so is safe when interrupting
 		 * BackendInitialize().
 		 */
-		free(MyProcPort->gss);
+		free(pqstate->port->gss);
 #endif							/* ENABLE_GSS || ENABLE_SSPI */
 
 		/*
 		 * Cleanly shut down SSL layer.  Nowhere else does a postmaster child
 		 * call this, so this is safe when interrupting BackendInitialize().
 		 */
-		secure_close(MyProcPort);
+		secure_close(pqstate->port);
 
 		/*
 		 * Formerly we did an explicit close() here, but it seems better to
@@ -298,7 +359,7 @@ socket_close(int code, Datum arg)
 		 * We do set sock to PGINVALID_SOCKET to prevent any further I/O,
 		 * though.
 		 */
-		MyProcPort->sock = PGINVALID_SOCKET;
+		pqstate->port->sock = PGINVALID_SOCKET;
 	}
 }
 
@@ -921,12 +982,12 @@ RemoveSocketFiles(void)
 static void
 socket_set_nonblocking(bool nonblocking)
 {
-	if (MyProcPort == NULL)
+	if (pqstate->port == NULL)
 		ereport(ERROR,
 				(errcode(ERRCODE_CONNECTION_DOES_NOT_EXIST),
 				 errmsg("there is no client connection")));
 
-	MyProcPort->noblock = nonblocking;
+	pqstate->port->noblock = nonblocking;
 }
 
 /* --------------------------------
@@ -938,30 +999,30 @@ socket_set_nonblocking(bool nonblocking)
 static int
 pq_recvbuf(void)
 {
-	if (PqRecvPointer > 0)
+	if (pqstate->recv_offset > 0)
 	{
-		if (PqRecvLength > PqRecvPointer)
+		if (pqstate->recv_len > pqstate->recv_offset)
 		{
 			/* still some unread data, left-justify it in the buffer */
-			memmove(PqRecvBuffer, PqRecvBuffer + PqRecvPointer,
-					PqRecvLength - PqRecvPointer);
-			PqRecvLength -= PqRecvPointer;
-			PqRecvPointer = 0;
+			memmove(pqstate->recv_buf, pqstate->recv_buf + pqstate->recv_offset,
+					pqstate->recv_len - pqstate->recv_offset);
+			pqstate->recv_len -= pqstate->recv_offset;
+			pqstate->recv_offset = 0;
 		}
 		else
-			PqRecvLength = PqRecvPointer = 0;
+			pqstate->recv_len = pqstate->recv_offset = 0;
 	}
 
 	/* Ensure that we're in blocking mode */
 	socket_set_nonblocking(false);
 
-	/* Can fill buffer from PqRecvLength and upwards */
+	/* Can fill buffer from pqstate->recv_len and upwards */
 	for (;;)
 	{
 		int			r;
 
-		r = secure_read(MyProcPort, PqRecvBuffer + PqRecvLength,
-						PQ_RECV_BUFFER_SIZE - PqRecvLength);
+		r = secure_read(pqstate->port, pqstate->recv_buf + pqstate->recv_len,
+						PQ_RECV_BUFFER_SIZE - pqstate->recv_len);
 
 		if (r < 0)
 		{
@@ -987,7 +1048,7 @@ pq_recvbuf(void)
 			return EOF;
 		}
 		/* r contains number of bytes read, so just incr length */
-		PqRecvLength += r;
+		pqstate->recv_len += r;
 		return 0;
 	}
 }
@@ -999,14 +1060,14 @@ pq_recvbuf(void)
 int
 pq_getbyte(void)
 {
-	Assert(PqCommReadingMsg);
+	Assert(pqstate->is_reading);
 
-	while (PqRecvPointer >= PqRecvLength)
+	while (pqstate->recv_offset >= pqstate->recv_len)
 	{
 		if (pq_recvbuf())		/* If nothing in buffer, then recv some */
 			return EOF;			/* Failed to recv data */
 	}
-	return (unsigned char) PqRecvBuffer[PqRecvPointer++];
+	return (unsigned char) pqstate->recv_buf[pqstate->recv_offset++];
 }
 
 /* --------------------------------
@@ -1018,14 +1079,25 @@ pq_getbyte(void)
 int
 pq_peekbyte(void)
 {
-	Assert(PqCommReadingMsg);
+	Assert(pqstate->is_reading);
 
-	while (PqRecvPointer >= PqRecvLength)
+	while (pqstate->recv_offset >= pqstate->recv_len)
 	{
 		if (pq_recvbuf())		/* If nothing in buffer, then recv some */
 			return EOF;			/* Failed to recv data */
 	}
-	return (unsigned char) PqRecvBuffer[PqRecvPointer];
+	return (unsigned char) pqstate->recv_buf[pqstate->recv_offset];
+}
+
+/* --------------------------------
+ *		pq_available_bytes	- get number of buffered bytes available for reading.
+ *
+ * --------------------------------
+ */
+int
+pq_available_bytes(void)
+{
+	return pqstate->recv_len - pqstate->recv_offset;
 }
 
 /* --------------------------------
@@ -1041,18 +1113,18 @@ pq_getbyte_if_available(unsigned char *c)
 {
 	int			r;
 
-	Assert(PqCommReadingMsg);
+	Assert(pqstate->is_reading);
 
-	if (PqRecvPointer < PqRecvLength)
+	if (pqstate->recv_offset < pqstate->recv_len)
 	{
-		*c = PqRecvBuffer[PqRecvPointer++];
+		*c = pqstate->recv_buf[pqstate->recv_offset++];
 		return 1;
 	}
 
 	/* Put the socket into non-blocking mode */
 	socket_set_nonblocking(true);
 
-	r = secure_read(MyProcPort, c, 1);
+	r = secure_read(pqstate->port, c, 1);
 	if (r < 0)
 	{
 		/*
@@ -1095,20 +1167,20 @@ pq_getbytes(char *s, size_t len)
 {
 	size_t		amount;
 
-	Assert(PqCommReadingMsg);
+	Assert(pqstate->is_reading);
 
 	while (len > 0)
 	{
-		while (PqRecvPointer >= PqRecvLength)
+		while (pqstate->recv_offset >= pqstate->recv_len)
 		{
 			if (pq_recvbuf())	/* If nothing in buffer, then recv some */
 				return EOF;		/* Failed to recv data */
 		}
-		amount = PqRecvLength - PqRecvPointer;
+		amount = pqstate->recv_len - pqstate->recv_offset;
 		if (amount > len)
 			amount = len;
-		memcpy(s, PqRecvBuffer + PqRecvPointer, amount);
-		PqRecvPointer += amount;
+		memcpy(s, pqstate->recv_buf + pqstate->recv_offset, amount);
+		pqstate->recv_offset += amount;
 		s += amount;
 		len -= amount;
 	}
@@ -1129,19 +1201,19 @@ pq_discardbytes(size_t len)
 {
 	size_t		amount;
 
-	Assert(PqCommReadingMsg);
+	Assert(pqstate->is_reading);
 
 	while (len > 0)
 	{
-		while (PqRecvPointer >= PqRecvLength)
+		while (pqstate->recv_offset >= pqstate->recv_len)
 		{
 			if (pq_recvbuf())	/* If nothing in buffer, then recv some */
 				return EOF;		/* Failed to recv data */
 		}
-		amount = PqRecvLength - PqRecvPointer;
+		amount = pqstate->recv_len - pqstate->recv_offset;
 		if (amount > len)
 			amount = len;
-		PqRecvPointer += amount;
+		pqstate->recv_offset += amount;
 		len -= amount;
 	}
 	return 0;
@@ -1167,35 +1239,35 @@ pq_getstring(StringInfo s)
 {
 	int			i;
 
-	Assert(PqCommReadingMsg);
+	Assert(pqstate->is_reading);
 
 	resetStringInfo(s);
 
 	/* Read until we get the terminating '\0' */
 	for (;;)
 	{
-		while (PqRecvPointer >= PqRecvLength)
+		while (pqstate->recv_offset >= pqstate->recv_len)
 		{
 			if (pq_recvbuf())	/* If nothing in buffer, then recv some */
 				return EOF;		/* Failed to recv data */
 		}
 
-		for (i = PqRecvPointer; i < PqRecvLength; i++)
+		for (i = pqstate->recv_offset; i < pqstate->recv_len; i++)
 		{
-			if (PqRecvBuffer[i] == '\0')
+			if (pqstate->recv_buf[i] == '\0')
 			{
 				/* include the '\0' in the copy */
-				appendBinaryStringInfo(s, PqRecvBuffer + PqRecvPointer,
-									   i - PqRecvPointer + 1);
-				PqRecvPointer = i + 1;	/* advance past \0 */
+				appendBinaryStringInfo(s, pqstate->recv_buf + pqstate->recv_offset,
+									   i - pqstate->recv_offset + 1);
+				pqstate->recv_offset = i + 1;	/* advance past \0 */
 				return 0;
 			}
 		}
 
 		/* If we're here we haven't got the \0 in the buffer yet. */
-		appendBinaryStringInfo(s, PqRecvBuffer + PqRecvPointer,
-							   PqRecvLength - PqRecvPointer);
-		PqRecvPointer = PqRecvLength;
+		appendBinaryStringInfo(s, pqstate->recv_buf + pqstate->recv_offset,
+							   pqstate->recv_len - pqstate->recv_offset);
+		pqstate->recv_offset = pqstate->recv_len;
 	}
 }
 
@@ -1213,12 +1285,12 @@ pq_startmsgread(void)
 	 * There shouldn't be a read active already, but let's check just to be
 	 * sure.
 	 */
-	if (PqCommReadingMsg)
+	if (pqstate->is_reading)
 		ereport(FATAL,
 				(errcode(ERRCODE_PROTOCOL_VIOLATION),
 				 errmsg("terminating connection because protocol synchronization was lost")));
 
-	PqCommReadingMsg = true;
+	pqstate->is_reading = true;
 }
 
 
@@ -1233,9 +1305,9 @@ pq_startmsgread(void)
 void
 pq_endmsgread(void)
 {
-	Assert(PqCommReadingMsg);
+	Assert(pqstate->is_reading);
 
-	PqCommReadingMsg = false;
+	pqstate->is_reading = false;
 }
 
 /* --------------------------------
@@ -1249,7 +1321,7 @@ pq_endmsgread(void)
 bool
 pq_is_reading_msg(void)
 {
-	return PqCommReadingMsg;
+	return pqstate->is_reading;
 }
 
 /* --------------------------------
@@ -1273,7 +1345,7 @@ pq_getmessage(StringInfo s, int maxlen)
 {
 	int32		len;
 
-	Assert(PqCommReadingMsg);
+	Assert(pqstate->is_reading);
 
 	resetStringInfo(s);
 
@@ -1318,7 +1390,7 @@ pq_getmessage(StringInfo s, int maxlen)
 						 errmsg("incomplete message from client")));
 
 			/* we discarded the rest of the message so we're back in sync. */
-			PqCommReadingMsg = false;
+			pqstate->is_reading = false;
 			PG_RE_THROW();
 		}
 		PG_END_TRY();
@@ -1337,7 +1409,7 @@ pq_getmessage(StringInfo s, int maxlen)
 	}
 
 	/* finished reading the message. */
-	PqCommReadingMsg = false;
+	pqstate->is_reading = false;
 
 	return 0;
 }
@@ -1355,13 +1427,13 @@ pq_putbytes(const char *s, size_t len)
 	int			res;
 
 	/* Should only be called by old-style COPY OUT */
-	Assert(DoingCopyOut);
+	Assert(pqstate->is_doing_copyout);
 	/* No-op if reentrant call */
-	if (PqCommBusy)
+	if (pqstate->is_busy)
 		return 0;
-	PqCommBusy = true;
+	pqstate->is_busy = true;
 	res = internal_putbytes(s, len);
-	PqCommBusy = false;
+	pqstate->is_busy = false;
 	return res;
 }
 
@@ -1373,23 +1445,23 @@
 	while (len > 0)
 	{
 		/* If buffer is full, then flush it out */
-		if (PqSendPointer >= PqSendBufferSize)
+		if (pqstate->send_offset >= pqstate->send_bufsize)
 		{
 			socket_set_nonblocking(false);
 			if (internal_flush())
 				return EOF;
 		}
-		amount = PqSendBufferSize - PqSendPointer;
+		amount = pqstate->send_bufsize - pqstate->send_offset;
 		if (amount > len)
 			amount = len;
-		memcpy(PqSendBuffer + PqSendPointer, s, amount);
-		PqSendPointer += amount;
+		memcpy(pqstate->send_buf + pqstate->send_offset, s, amount);
+		pqstate->send_offset += amount;
 		s += amount;
 		len -= amount;
 	}
 	return 0;
 }
 
 /* --------------------------------
  *		socket_flush		- flush pending output
  *
@@ -1401,13 +1474,17 @@ socket_flush(void)
 {
 	int			res;
 
+	if (pqstate->port->sock == PGINVALID_SOCKET)
+		return 0;
+
 	/* No-op if reentrant call */
-	if (PqCommBusy)
+	if (pqstate->is_busy)
 		return 0;
-	PqCommBusy = true;
+
+	pqstate->is_busy = true;
 	socket_set_nonblocking(false);
 	res = internal_flush();
-	PqCommBusy = false;
+	pqstate->is_busy = false;
 	return res;
 }
 
@@ -1423,14 +1500,14 @@ internal_flush(void)
 {
 	static int	last_reported_send_errno = 0;
 
-	char	   *bufptr = PqSendBuffer + PqSendStart;
-	char	   *bufend = PqSendBuffer + PqSendPointer;
+	char	   *bufptr = pqstate->send_buf + pqstate->send_start;
+	char	   *bufend = pqstate->send_buf + pqstate->send_offset;
 
 	while (bufptr < bufend)
 	{
 		int			r;
 
-		r = secure_write(MyProcPort, bufptr, bufend - bufptr);
+		r = secure_write(pqstate->port, bufptr, bufend - bufptr);
 
 		if (r <= 0)
 		{
@@ -1470,7 +1547,7 @@ internal_flush(void)
 			 * flag that'll cause the next CHECK_FOR_INTERRUPTS to terminate
 			 * the connection.
 			 */
-			PqSendStart = PqSendPointer = 0;
+			pqstate->send_start = pqstate->send_offset = 0;
 			ClientConnectionLost = 1;
 			InterruptPending = 1;
 			return EOF;
@@ -1478,10 +1555,10 @@ internal_flush(void)
 
 		last_reported_send_errno = 0;	/* reset after any successful send */
 		bufptr += r;
-		PqSendStart += r;
+		pqstate->send_start += r;
 	}
 
-	PqSendStart = PqSendPointer = 0;
+	pqstate->send_start = pqstate->send_offset = 0;
 	return 0;
 }
 
@@ -1496,20 +1573,23 @@ socket_flush_if_writable(void)
 {
 	int			res;
 
+	if (pqstate->port->sock == PGINVALID_SOCKET)
+		return 0;
+
 	/* Quick exit if nothing to do */
-	if (PqSendPointer == PqSendStart)
+	if (pqstate->send_offset == pqstate->send_start)
 		return 0;
 
 	/* No-op if reentrant call */
-	if (PqCommBusy)
+	if (pqstate->is_busy)
 		return 0;
 
 	/* Temporarily put the socket into non-blocking mode */
 	socket_set_nonblocking(true);
 
-	PqCommBusy = true;
+	pqstate->is_busy = true;
 	res = internal_flush();
-	PqCommBusy = false;
+	pqstate->is_busy = false;
 	return res;
 }
 
@@ -1520,7 +1600,7 @@ socket_flush_if_writable(void)
 static bool
 socket_is_send_pending(void)
 {
-	return (PqSendStart < PqSendPointer);
+	return (pqstate->send_start < pqstate->send_offset);
 }
 
 /* --------------------------------
@@ -1559,9 +1639,9 @@ socket_is_send_pending(void)
 static int
 socket_putmessage(char msgtype, const char *s, size_t len)
 {
-	if (DoingCopyOut || PqCommBusy)
+	if (pqstate->is_doing_copyout || pqstate->is_busy)
 		return 0;
-	PqCommBusy = true;
+	pqstate->is_busy = true;
 	if (msgtype)
 		if (internal_putbytes(&msgtype, 1))
 			goto fail;
@@ -1575,11 +1655,11 @@ socket_putmessage(char msgtype, const char *s, size_t len)
 	}
 	if (internal_putbytes(s, len))
 		goto fail;
-	PqCommBusy = false;
+	pqstate->is_busy = false;
 	return 0;
 
 fail:
-	PqCommBusy = false;
+	pqstate->is_busy = false;
 	return EOF;
 }
 
@@ -1599,11 +1679,11 @@ socket_putmessage_noblock(char msgtype, const char *s, size_t len)
 	 * Ensure we have enough space in the output buffer for the message header
 	 * as well as the message itself.
 	 */
-	required = PqSendPointer + 1 + 4 + len;
-	if (required > PqSendBufferSize)
+	required = pqstate->send_offset + 1 + 4 + len;
+	if (required > pqstate->send_bufsize)
 	{
-		PqSendBuffer = repalloc(PqSendBuffer, required);
-		PqSendBufferSize = required;
+		pqstate->send_buf = repalloc(pqstate->send_buf, required);
+		pqstate->send_bufsize = required;
 	}
 	res = pq_putmessage(msgtype, s, len);
 	Assert(res == 0);			/* should not fail when the message fits in
@@ -1619,7 +1699,7 @@ socket_putmessage_noblock(char msgtype, const char *s, size_t len)
 static void
 socket_startcopyout(void)
 {
-	DoingCopyOut = true;
+	pqstate->is_doing_copyout = true;
 }
 
 /* --------------------------------
@@ -1635,12 +1715,12 @@ socket_startcopyout(void)
 static void
 socket_endcopyout(bool errorAbort)
 {
-	if (!DoingCopyOut)
+	if (!pqstate->is_doing_copyout)
 		return;
 	if (errorAbort)
 		pq_putbytes("\n\n\\.\n", 5);
 	/* in non-error case, copy.c will have emitted the terminator line */
-	DoingCopyOut = false;
+	pqstate->is_doing_copyout = false;
 }
 
 /*
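
To summarize the refactoring above: everything that used to be file-global
libpq state now lives in a PQcommState allocated by pq_init(), so a pooled
backend can keep one such struct per session and switch between them. A rough
sketch of the intended usage (Session is a hypothetical container of mine;
only the pq_* calls and the Port fields come from the patch):

/* Hypothetical per-session container */
typedef struct Session
{
	Port	   *port;			/* client connection */
	void	   *pqcomm_state;	/* opaque result of pq_init() */
} Session;

/* Set up libpq state when a pooled backend accepts a new session */
static Session *
session_create(Port *port, MemoryContext mcxt)
{
	Session    *s = MemoryContextAllocZero(mcxt, sizeof(Session));

	s->port = port;
	s->pqcomm_state = pq_init(mcxt);
	port->pqcomm_waitset = pq_create_backend_event_set(mcxt, port, true);
	return s;
}

/* Make one session's buffers, flags and wait set current before serving it */
static void
session_switch(Session *s)
{
	pq_set_current_state(s->pqcomm_state, s->port, s->port->pqcomm_waitset);
	MyProcPort = s->port;
}
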
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index aba1e92..56ec998 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o dynloader.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o dynloader.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..b69cc78
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,158 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, &dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("could not send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr * cmsg;
+	char		buf[CMSG_SPACE(sizeof(sock))];
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	if (sendmsg(chan, &msg, 0) < 0)
+		return PGINVALID_SOCKET;
+
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, &src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("could not receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works.)
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	struct msghdr msg = {0};
+	char		c_buffer[256];
+	char		m_buffer[256];
+	struct iovec io;
+	struct cmsghdr *cmsg;
+	pgsocket	sock;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	if (recvmsg(chan, &msg, 0) < 0)
+		return PGINVALID_SOCKET;
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+
+	pg_set_noblock(sock);
+
+	return sock;
+#endif
+}
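
Since the whole patch hinges on this descriptor-passing trick, here is a
standalone program (not part of the patch; the helper names are mine) that
demonstrates the same SCM_RIGHTS ancillary-data protocol in isolation: a
descriptor sent over a Unix-domain socketpair arrives as a fresh descriptor
referring to the same open file.

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

/* Send descriptor "fd" over Unix socket "chan" */
static int
send_fd(int chan, int fd)
{
	struct msghdr msg = {0};
	struct iovec io;
	struct cmsghdr *cmsg;
	char		cbuf[CMSG_SPACE(sizeof(int))];
	char		dummy = 'x';

	io.iov_base = &dummy;
	io.iov_len = 1;
	msg.msg_iov = &io;
	msg.msg_iovlen = 1;
	msg.msg_control = cbuf;
	msg.msg_controllen = sizeof(cbuf);

	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS;
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

	return sendmsg(chan, &msg, 0) < 0 ? -1 : 0;
}

/* Receive a descriptor from Unix socket "chan" */
static int
recv_fd(int chan)
{
	struct msghdr msg = {0};
	struct iovec io;
	struct cmsghdr *cmsg;
	char		cbuf[CMSG_SPACE(sizeof(int))];
	char		dummy;
	int			fd;

	io.iov_base = &dummy;
	io.iov_len = 1;
	msg.msg_iov = &io;
	msg.msg_iovlen = 1;
	msg.msg_control = cbuf;
	msg.msg_controllen = sizeof(cbuf);

	if (recvmsg(chan, &msg, 0) < 0)
		return -1;
	cmsg = CMSG_FIRSTHDR(&msg);
	if (cmsg == NULL)
		return -1;
	memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
	return fd;
}

int
main(void)
{
	int			sv[2];
	int			p[2];
	int			received;
	char		buf[16];

	socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
	pipe(p);

	send_fd(sv[0], p[0]);		/* pass the pipe's read end */
	received = recv_fd(sv[1]);	/* arrives as a new descriptor */

	write(p[1], "hello", 5);
	printf("read %zd bytes via passed fd\n", read(received, buf, sizeof(buf)));
	return 0;
}
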
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index f4356fe..7fd901f 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -726,3 +726,68 @@
 	}
 	return wserrbuf;
 }
+
+int
+pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+	union
+	{
+		struct sockaddr_in inaddr;
+		struct sockaddr addr;
+	}			a;
+	SOCKET		listener;
+	int			e;
+	socklen_t	addrlen = sizeof(a.inaddr);
+	DWORD		flags = 0;
+	int			reuse = 1;
+
+	socks[0] = socks[1] = -1;
+
+	listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+	if (listener == -1)
+		return SOCKET_ERROR;
+
+	memset(&a, 0, sizeof(a));
+	a.inaddr.sin_family = AF_INET;
+	a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+	a.inaddr.sin_port = 0;
+
+	for (;;)
+	{
+		if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+					   (char *) &reuse, (socklen_t) sizeof(reuse)) == -1)
+			break;
+		if (bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		memset(&a, 0, sizeof(a));
+		if (getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+			break;
+		a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+		a.inaddr.sin_family = AF_INET;
+
+		if (listen(listener, 1) == SOCKET_ERROR)
+			break;
+
+		socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+		if (socks[0] == -1)
+			break;
+		if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		socks[1] = accept(listener, NULL, NULL);
+		if (socks[1] == -1)
+			break;
+
+		closesocket(listener);
+		return 0;
+	}
+
+	e = WSAGetLastError();
+	closesocket(listener);
+	closesocket(socks[0]);
+	closesocket(socks[1]);
+	WSASetLastError(e);
+	socks[0] = socks[1] = -1;
+	return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 71c2321..b0bd173 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -13,6 +13,7 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 OBJS = autovacuum.o bgworker.o bgwriter.o checkpointer.o fork_process.o \
-	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o
+	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o \
+	connpool.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index d2b695e..15b9eb5 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -21,6 +21,7 @@
 #include "port/atomics.h"
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/connpool.h"
 #include "replication/logicallauncher.h"
 #include "replication/logicalworker.h"
 #include "storage/dsm.h"
@@ -129,7 +130,10 @@ static const struct
 	},
 	{
 		"ApplyWorkerMain", ApplyWorkerMain
-	}
+	},
+	{
+		"StartupPacketReaderMain", StartupPacketReaderMain
+	}
 };
 
 /* Private functions. */
diff --git a/src/backend/postmaster/connpool.c b/src/backend/postmaster/connpool.c
new file mode 100644
index 0000000..e2d041a
--- /dev/null
+++ b/src/backend/postmaster/connpool.c
@@ -0,0 +1,269 @@
+/*-------------------------------------------------------------------------
+ * connpool.c
+ *	   PostgreSQL connection pool workers.
+ *
+ * Copyright (c) 2018, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *    src/backend/postmaster/connpool.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <signal.h>
+#include <unistd.h>
+
+#include "lib/stringinfo.h"
+#include "libpq/libpq.h"
+#include "libpq/pqformat.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/bgworker.h"
+#include "postmaster/connpool.h"
+#include "postmaster/postmaster.h"
+#include "storage/proc.h"
+#include "utils/memutils.h"
+#include "utils/resowner.h"
+#include "tcop/tcopprot.h"
+
+/*
+ * GUC parameters
+ */
+int			NumConnPoolWorkers = 2;
+
+/*
+ * Global variables
+ */
+ConnPoolWorker	*ConnPoolWorkers;
+
+/*
+ * Signals management
+ */
+static volatile sig_atomic_t shutdown_requested = false;
+static void handle_sigterm(SIGNAL_ARGS);
+
+static void *pqstate;
+
+static void
+handle_sigterm(SIGNAL_ARGS)
+{
+	int save_errno = errno;
+	shutdown_requested = true;
+	SetLatch(&MyProc->procLatch);
+	errno = save_errno;
+}
+
+Size
+ConnPoolShmemSize(void)
+{
+	return MAXALIGN(sizeof(ConnPoolWorker) * NumConnPoolWorkers);
+}
+
+void
+ConnectionPoolWorkersInit(void)
+{
+	int		i;
+	bool	found;
+	Size	size = ConnPoolShmemSize();
+
+	ConnPoolWorkers = ShmemInitStruct("connection pool workers",
+			size, &found);
+
+	if (!found)
+	{
+		MemSet(ConnPoolWorkers, 0, size);
+		for (i = 0; i < NumConnPoolWorkers; i++)
+		{
+			ConnPoolWorker	*worker = &ConnPoolWorkers[i];
+			if (socketpair(AF_UNIX, SOCK_STREAM, 0, worker->pipes) < 0)
+				elog(FATAL, "could not create socket pair for connection pool");
+		}
+	}
+}
+
+/*
+ * Register background workers for startup packet reading.
+ */
+void
+RegisterConnPoolWorkers(void)
+{
+	int					i;
+	BackgroundWorker	bgw;
+
+	if (SessionPoolSize == 0)
+		/* no need to start workers */
+		return;
+
+	for (i = 0; i < NumConnPoolWorkers; i++)
+	{
+		memset(&bgw, 0, sizeof(bgw));
+		bgw.bgw_flags = BGWORKER_SHMEM_ACCESS;
+		bgw.bgw_start_time = BgWorkerStart_PostmasterStart;
+		snprintf(bgw.bgw_library_name, BGW_MAXLEN, "postgres");
+		snprintf(bgw.bgw_function_name, BGW_MAXLEN, "StartupPacketReaderMain");
+		snprintf(bgw.bgw_name, BGW_MAXLEN,
+				 "connection pool worker %d", i + 1);
+		bgw.bgw_restart_time = 3;
+		bgw.bgw_notify_pid = 0;
+		bgw.bgw_main_arg = (Datum) i;
+
+		RegisterBackgroundWorker(&bgw);
+	}
+
+	elog(LOG, "connection pool workers have been registered");
+}
+
+static void
+resetWorkerState(ConnPoolWorker *worker, Port *port)
+{
+	/* Cleanup */
+	whereToSendOutput = DestNone;
+	if (port != NULL)
+	{
+		if (port->sock != PGINVALID_SOCKET)
+			closesocket(port->sock);
+		if (port->pqcomm_waitset != NULL)
+			FreeWaitEventSet(port->pqcomm_waitset);
+		port = NULL;
+	}
+	pq_set_current_state(pqstate, NULL, NULL);
+}
+
+void
+StartupPacketReaderMain(Datum arg)
+{
+	sigjmp_buf	local_sigjmp_buf;
+	ConnPoolWorker *worker = &ConnPoolWorkers[(int) arg];
+	MemoryContext	mcxt;
+	int				status;
+	Port		   *port = NULL;
+
+	pqsignal(SIGTERM, handle_sigterm);
+	BackgroundWorkerUnblockSignals();
+
+	mcxt = AllocSetContextCreate(TopMemoryContext,
+								 "temporary context",
+							     ALLOCSET_DEFAULT_SIZES);
+	pqstate = pq_init(TopMemoryContext);
+	worker->pid = MyProcPid;
+	worker->latch = MyLatch;
+	Assert(MyLatch == &MyProc->procLatch);
+
+	MemoryContextSwitchTo(mcxt);
+
+	/* If an exception is encountered, processing resumes here */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Since not using PG_TRY, must reset error stack by hand */
+		error_context_stack = NULL;
+
+		/* Prevent interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log and to the client */
+		EmitErrorReport();
+
+		/*
+		 * Now return to normal top-level context and clear ErrorContext for
+		 * next time.
+		 */
+		MemoryContextSwitchTo(mcxt);
+		FlushErrorState();
+
+		/*
+		 * We only reset the worker state here; memory will be cleaned
+		 * up on the next cycle. That's enough for now.
+		 */
+		resetWorkerState(worker, port);
+
+		/* Ready for new sockets */
+		worker->state = CPW_FREE;
+
+		/* Now we can allow interrupts again */
+		RESUME_INTERRUPTS();
+	}
+
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	while (!shutdown_requested)
+	{
+		ListCell	   *lc;
+		int				rc;
+		StringInfoData	buf;
+
+		rc = WaitLatch(&MyProc->procLatch,
+				WL_LATCH_SET | WL_POSTMASTER_DEATH,
+				0, PG_WAIT_EXTENSION);
+
+		if (rc & WL_POSTMASTER_DEATH)
+			break;
+
+		ResetLatch(&MyProc->procLatch);
+
+		if (shutdown_requested)
+			break;
+
+		if (worker->state != CPW_NEW_SOCKET)
+			/* we woke up for some other reason */
+			continue;
+
+		/* Set up temporary pq state for startup packet */
+		port = palloc0(sizeof(Port));
+		port->sock = PGINVALID_SOCKET;
+
+		while (port->sock == PGINVALID_SOCKET)
+			port->sock = pg_recv_sock(worker->pipes[1]);
+
+		/* init pqcomm */
+		port->pqcomm_waitset = pq_create_backend_event_set(mcxt, port, true);
+		port->canAcceptConnections = worker->cac_state;
+		pq_set_current_state(pqstate, port, port->pqcomm_waitset);
+		whereToSendOutput = DestRemote;
+
+		/* TODO: deal with timeouts */
+		status = ProcessStartupPacket(port, false, mcxt, ERROR);
+		if (status != STATUS_OK)
+		{
+			worker->state = CPW_FREE;
+			goto cleanup;
+		}
+
+		/* Serialize a port into stringinfo */
+		pq_beginmessage(&buf, 'P');
+		pq_sendint(&buf, port->proto, 4);
+		pq_sendstring(&buf, port->database_name);
+		pq_sendstring(&buf, port->user_name);
+		pq_sendint(&buf, list_length(port->guc_options), 4);
+
+		foreach(lc, port->guc_options)
+		{
+			char *str = (char *) lfirst(lc);
+			pq_sendstring(&buf, str);
+		}
+
+		if (port->cmdline_options)
+		{
+			pq_sendint(&buf, 1, 4);
+			pq_sendstring(&buf, port->cmdline_options);
+		}
+		else pq_sendint(&buf, 0, 4);
+
+		worker->state = CPW_PROCESSED;
+
+		while ((rc = send(worker->pipes[1], &buf.len, sizeof(buf.len), 0)) < 0 && errno == EINTR);
+		if (rc != (int)sizeof(buf.len))
+			elog(ERROR, "could not send data to postmaster");
+		while ((rc = send(worker->pipes[1], buf.data, buf.len, 0)) < 0 && errno == EINTR);
+		if (rc != buf.len)
+			elog(ERROR, "could not send data to postmaster");
+		pfree(buf.data);
+		buf.data = NULL;
+	  cleanup:
+		resetWorkerState(worker, port);
+		MemoryContextReset(mcxt);
+	}
+
+	resetWorkerState(worker, NULL);
+}
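
For reviewers, the ad-hoc wire format the worker sends back to the postmaster,
as I read the pq_send* calls above (the pq_sendint fields are in network byte
order; the leading length is sent raw, which is fine since both ends are local;
note also that the 'P' type byte given to pq_beginmessage() is never actually
transmitted, because the worker send()s buf.data directly):

	int32	len					total payload length, sent first as raw bytes
	int32	proto				protocol version from the startup packet
	string	database_name		NUL-terminated
	string	user_name			NUL-terminated
	int32	n_guc				number of GUC option strings
	string	guc_options[n_guc]
	int32	has_cmdline			0 or 1
	string	cmdline_options		present only when has_cmdline == 1
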
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index 8a5b2b3..8bdc988 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -868,7 +868,8 @@ pgstat_report_stat(bool force)
 			PgStat_TableEntry *this_ent;
 
 			/* Shouldn't have any pending transaction-dependent counts */
-			Assert(entry->trans == NULL);
+			if (entry->trans != NULL)
+				continue;
 
 			/*
 			 * Ignore entries that didn't accumulate any actual counts, such
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index a4b53b3..56fef63 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -76,6 +76,7 @@
 #include <sys/param.h>
 #include <netdb.h>
 #include <limits.h>
+#include <pthread.h>
 
 #ifdef HAVE_SYS_SELECT_H
 #include <sys/select.h>
@@ -114,6 +115,7 @@
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
 #include "postmaster/syslogger.h"
+#include "postmaster/connpool.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
 #include "storage/fd.h"
@@ -170,6 +172,7 @@ typedef struct bkend
 	pid_t		pid;			/* process id of backend */
 	int32		cancel_key;		/* cancel key for cancels for this backend */
 	int			child_slot;		/* PMChildSlot for this backend, if any */
+	pgsocket    session_send_sock;  /* write end of the pipe used to pass client session sockets to this backend */
 
 	/*
 	 * Flavor of backend or auxiliary process.  Note that BACKEND_TYPE_WALSND
@@ -178,8 +181,11 @@ typedef struct bkend
 	 */
 	int			bkend_type;
 	bool		dead_end;		/* is it going to send an error and quit? */
 	bool		bgworker_notify;	/* gets bgworker start/stop notifications */
 	dlist_node	elem;			/* list link in BackendList */
+	int         session_pool_id;/* identifier of backends session pool */
+	int         worker_id;      /* identifier of worker within session pool */
+	void	   *pool;			/* pool of backends */
 } Backend;
 
 static dlist_head BackendList = DLIST_STATIC_INIT(BackendList);
@@ -190,7 +196,27 @@ static Backend *ShmemBackendArray;
 
 BackgroundWorker *MyBgworkerEntry = NULL;
 
+struct DatabasePoolKey {
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+};
+
+typedef struct DatabasePool
+{
+	struct DatabasePoolKey key;
 
+	Backend	  **workers;	/* pool backends */
+	int			n_workers;	/* number of launched worker backends
+							   in this pool so far */
+	int			rr_index;	/* index of current backends used to implement
+							 * round-robin distribution of sessions through
+							 * backends */
+} DatabasePool;
+
+static struct
+{
+	HTAB			   *pools;
+} PostmasterSessionPool;
 
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
@@ -214,7 +240,7 @@ int			ReservedBackends;
 
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
-static pgsocket ListenSocket[MAXLISTEN];
+static pgsocket ListenSocket[MAXLISTEN + MAX_CONNPOOL_WORKERS];
 
 /*
  * Set by the -o option
@@ -393,15 +419,19 @@ static void unlink_external_pid_file(int status, Datum arg);
 static void getInstallationPaths(const char *argv0);
 static void checkControlFile(void);
 static Port *ConnCreate(int serverFd);
+static Port *PoolConnCreate(pgsocket poolFd, int workerId);
 static void ConnFree(Port *port);
+static void ConnDispatch(Port *port);
 static void reset_shared(int port);
 static void SIGHUP_handler(SIGNAL_ARGS);
+static CAC_state canAcceptConnections(void);
 static void pmdie(SIGNAL_ARGS);
 static void reaper(SIGNAL_ARGS);
 static void sigusr1_handler(SIGNAL_ARGS);
 static void startup_die(SIGNAL_ARGS);
 static void dummy_handler(SIGNAL_ARGS);
 static void StartupPacketTimeoutHandler(void);
+static int BackendStartup(DatabasePool *pool, Port *port);
 static void CleanupBackend(int pid, int exitstatus);
 static bool CleanupBackgroundWorker(int pid, int exitstatus);
 static void HandleChildCrash(int pid, int exitstatus, const char *procname);
@@ -412,13 +442,10 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
-static int	ProcessStartupPacket(Port *port, bool SSLdone);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
 static int	initMasks(fd_set *rmask);
 static void report_fork_failure_to_client(Port *port, int errnum);
-static CAC_state canAcceptConnections(void);
 static bool RandomCancelKey(int32 *cancel_key);
 static void signal_child(pid_t pid, int signal);
 static bool SignalSomeChildren(int signal, int targets);
@@ -486,6 +513,7 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket sessionsocket;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -988,6 +1016,11 @@ PostmasterMain(int argc, char *argv[])
 	ApplyLauncherRegister();
 
 	/*
+	 * Register connection pool workers
+	 */
+	RegisterConnPoolWorkers();
+
+	/*
 	 * process any libraries that should be preloaded at postmaster start
 	 */
 	process_shared_preload_libraries();
@@ -1613,6 +1646,177 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+static bool
+IsDedicatedDatabase(char const* dbname)
+{
+	List       *namelist;
+	ListCell   *l;
+	char       *databases;
+	bool       found = false;
+
+    /* Need a modifiable copy of namespace_search_path string */
+	databases = pstrdup(DedicatedDatabases);
+
+	if (!SplitIdentifierString(databases, ',', &namelist)) {
+		elog(ERROR, "invalid list syntax");
+	}
+	foreach(l, namelist)
+	{
+		char *curname = (char *) lfirst(l);
+		if (strcmp(curname, dbname) == 0)
+		{
+			found = true;
+			break;
+		}
+	}
+	list_free(namelist);
+	pfree(databases);
+
+	return found;
+}
+
+/*
+ * Find free worker and send socket
+ */
+static void
+SendPortToConnectionPool(Port *port)
+{
+	int		i;
+	bool	sent;
+
+	/* By default is not dedicated */
+	IsDedicatedBackend = false;
+
+	sent = false;
+
+again:
+	for (i = 0; i < NumConnPoolWorkers; i++)
+	{
+		ConnPoolWorker	*worker = &ConnPoolWorkers[i];
+		if (worker->pid == 0)
+			continue;
+
+		if (worker->state == CPW_PROCESSED)
+		{
+			Port *conn = PoolConnCreate(worker->pipes[0], i);
+			if (conn)
+				ConnDispatch(conn);
+		}
+		if (worker->state == CPW_FREE)
+		{
+			worker->port = port;
+			worker->state = CPW_NEW_SOCKET;
+			worker->cac_state = canAcceptConnections();
+
+			if (pg_send_sock(worker->pipes[0], port->sock, worker->pid) < 0)
+			{
+				elog(LOG, "could not send socket to connection pool: %m");
+				ExitPostmaster(1);
+			}
+			SetLatch(worker->latch);
+			sent = true;
+			break;
+		}
+	}
+
+	if (!sent)
+	{
+		pg_usleep(1000L);
+		goto again;
+	}
+}
+
+static void
+ConnDispatch(Port *port)
+{
+	bool			found;
+	DatabasePool   *pool;
+	struct DatabasePoolKey	key;
+
+	Assert(port->sock != PGINVALID_SOCKET);
+	if (IsDedicatedDatabase(port->database_name))
+	{
+		IsDedicatedBackend = true;
+		BackendStartup(NULL, port);
+		goto cleanup;
+	}
+
+#ifdef USE_SSL
+	if (port->ssl_in_use)
+	{
+		/*
+		 * We don't (yet) support SSL connections with connection pool,
+		 * since we need to move whole SSL context to already working
+		 * backend. This task needs more investigation.
+		 */
+		elog(ERROR, "connection pool does not support SSL connections");
+		goto cleanup;
+	}
+#endif
+	MemSet(key.database, 0, NAMEDATALEN);
+	MemSet(key.username, 0, NAMEDATALEN);
+
+	strlcpy(key.database, port->database_name, NAMEDATALEN);
+	strlcpy(key.username, port->user_name, NAMEDATALEN);
+
+	pool = hash_search(PostmasterSessionPool.pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		pool->key = key;
+		pool->workers = NULL;
+		pool->n_workers = 0;
+		pool->rr_index = 0;
+	}
+
+	BackendStartup(pool, port);
+
+cleanup:
+	/*
+	 * We no longer need the open socket or port structure
+	 * in this process
+	 */
+	StreamClose(port->sock);
+	ConnFree(port);
+}
+
+/*
+ * Add connection pool worker pipes to the postmaster's select() mask,
+ * and create the hash table of backend pools.
+ */
+static int
+InitConnPoolState(fd_set *rmask, int numSockets)
+{
+	int			i;
+	HASHCTL		ctl;
+
+	/*
+	 * create hash table of backend pools, keyed by database and user name
+	 */
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(struct DatabasePoolKey);
+	ctl.entrysize = sizeof(DatabasePool);
+	ctl.hcxt = PostmasterContext;
+	PostmasterSessionPool.pools = hash_create("Pool by database and user", 100,
+								  &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+
+	for (i = 0; i < NumConnPoolWorkers; i++)
+	{
+		ConnPoolWorker	*worker = &ConnPoolWorkers[i];
+		worker->port = NULL;
+
+		/*
+		 * we use same pselect(3) call for connection pool workers and
+		 * clients
+		 */
+		ListenSocket[MAXLISTEN + i] = worker->pipes[0];
+		FD_SET(worker->pipes[0], rmask);
+		if (worker->pipes[0] > numSockets)
+			numSockets = worker->pipes[0];
+	}
+
+	return numSockets + 1;
+}
+
 /*
  * Main idle loop of postmaster
  *
@@ -1630,6 +1834,9 @@ ServerLoop(void)
 
 	nSockets = initMasks(&readmask);
 
+	if (SessionPoolSize > 0)
+		nSockets = InitConnPoolState(&readmask, nSockets);
+
 	for (;;)
 	{
 		fd_set		rmask;
@@ -1690,27 +1897,43 @@ ServerLoop(void)
 		 */
 		if (selres > 0)
 		{
+			Port	   *port;
 			int			i;
 
+			/* Check for client connections */
 			for (i = 0; i < MAXLISTEN; i++)
 			{
 				if (ListenSocket[i] == PGINVALID_SOCKET)
 					break;
 				if (FD_ISSET(ListenSocket[i], &rmask))
 				{
-					Port	   *port;
-
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
-						/*
-						 * We no longer need the open socket or port structure
-						 * in this process
-						 */
-						StreamClose(port->sock);
-						ConnFree(port);
+						if (SessionPoolSize == 0)
+						{
+							IsDedicatedBackend = true;
+							BackendStartup(NULL, port);
+							StreamClose(port->sock);
+							ConnFree(port);
+						}
+						else
+							SendPortToConnectionPool(port);
+					}
+				}
+			}
+
+			/* Check for some data from connections pool */
+			if (SessionPoolSize > 0)
+			{
+				for (i = 0; i < NumConnPoolWorkers; i++)
+				{
+					if (FD_ISSET(ListenSocket[MAXLISTEN + i], &rmask))
+					{
+						port = PoolConnCreate(ListenSocket[MAXLISTEN + i], i);
+						if (port)
+							ConnDispatch(port);
+
 					}
 				}
 			}
@@ -1893,13 +2116,15 @@ initMasks(fd_set *rmask)
  * send anything to the client, which would typically be appropriate
  * if we detect a communications failure.)
  */
-static int
-ProcessStartupPacket(Port *port, bool SSLdone)
+int
+ProcessStartupPacket(Port *port, bool SSLdone, MemoryContext memctx,
+						int errlevel)
 {
 	int32		len;
 	void	   *buf;
 	ProtocolVersion proto;
-	MemoryContext oldcontext;
+	MemoryContext oldcontext = MemoryContextSwitchTo(memctx);
+	int			result;
 
 	pq_startmsgread();
 	if (pq_getbytes((char *) &len, 4) == EOF)
@@ -1992,7 +2217,7 @@ retry1:
 #endif
 		/* regular startup packet, cancel, etc packet should follow... */
 		/* but not another SSL negotiation request */
-		return ProcessStartupPacket(port, true);
+		return ProcessStartupPacket(port, true, memctx, errlevel);
 	}
 
 	/* Could add additional special packet types here */
@@ -2006,13 +2231,16 @@ retry1:
 	/* Check that the major protocol version is in range. */
 	if (PG_PROTOCOL_MAJOR(proto) < PG_PROTOCOL_MAJOR(PG_PROTOCOL_EARLIEST) ||
 		PG_PROTOCOL_MAJOR(proto) > PG_PROTOCOL_MAJOR(PG_PROTOCOL_LATEST))
-		ereport(FATAL,
+	{
+		ereport(errlevel,
 				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 				 errmsg("unsupported frontend protocol %u.%u: server supports %u.0 to %u.%u",
 						PG_PROTOCOL_MAJOR(proto), PG_PROTOCOL_MINOR(proto),
 						PG_PROTOCOL_MAJOR(PG_PROTOCOL_EARLIEST),
 						PG_PROTOCOL_MAJOR(PG_PROTOCOL_LATEST),
 						PG_PROTOCOL_MINOR(PG_PROTOCOL_LATEST))));
+		return STATUS_ERROR;
+	}
 
 	/*
 	 * Now fetch parameters out of startup packet and save them into the Port
@@ -2022,7 +2250,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2070,12 +2298,15 @@ retry1:
 					am_db_walsender = true;
 				}
 				else if (!parse_bool(valptr, &am_walsender))
-					ereport(FATAL,
+				{
+					ereport(errlevel,
 							(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
 							 errmsg("invalid value for parameter \"%s\": \"%s\"",
 									"replication",
 									valptr),
 							 errhint("Valid values are: \"false\", 0, \"true\", 1, \"database\".")));
+					return STATUS_ERROR;
+				}
 			}
 			else if (strncmp(nameptr, "_pq_.", 5) == 0)
 			{
@@ -2103,9 +2334,12 @@ retry1:
 		 * given packet length, complain.
 		 */
 		if (offset != len - 1)
-			ereport(FATAL,
+		{
+			ereport(errlevel,
 					(errcode(ERRCODE_PROTOCOL_VIOLATION),
 					 errmsg("invalid startup packet layout: expected terminator as last byte")));
+			return STATUS_ERROR;
+		}
 
 		/*
 		 * If the client requested a newer protocol version or if the client
@@ -2141,9 +2375,12 @@ retry1:
 
 	/* Check a user name was given. */
 	if (port->user_name == NULL || port->user_name[0] == '\0')
-		ereport(FATAL,
+	{
+		ereport(errlevel,
 				(errcode(ERRCODE_INVALID_AUTHORIZATION_SPECIFICATION),
 				 errmsg("no PostgreSQL user name specified in startup packet")));
+		return STATUS_ERROR;
+	}
 
 	/* The database defaults to the user name. */
 	if (port->database_name == NULL || port->database_name[0] == '\0')
@@ -2197,27 +2434,32 @@ retry1:
 	 * now instead of wasting cycles on an authentication exchange. (This also
 	 * allows a pg_ping utility to be written.)
 	 */
+	result = STATUS_OK;
 	switch (port->canAcceptConnections)
 	{
 		case CAC_STARTUP:
-			ereport(FATAL,
+			ereport(errlevel,
 					(errcode(ERRCODE_CANNOT_CONNECT_NOW),
 					 errmsg("the database system is starting up")));
+			result = STATUS_ERROR;
 			break;
 		case CAC_SHUTDOWN:
-			ereport(FATAL,
+			ereport(errlevel,
 					(errcode(ERRCODE_CANNOT_CONNECT_NOW),
 					 errmsg("the database system is shutting down")));
+			result = STATUS_ERROR;
 			break;
 		case CAC_RECOVERY:
-			ereport(FATAL,
+			ereport(errlevel,
 					(errcode(ERRCODE_CANNOT_CONNECT_NOW),
 					 errmsg("the database system is in recovery mode")));
+			result = STATUS_ERROR;
 			break;
 		case CAC_TOOMANY:
-			ereport(FATAL,
+			ereport(errlevel,
 					(errcode(ERRCODE_TOO_MANY_CONNECTIONS),
 					 errmsg("sorry, too many clients already")));
+			result = STATUS_ERROR;
 			break;
 		case CAC_WAITBACKUP:
 			/* OK for now, will check in InitPostgres */
@@ -2226,7 +2468,7 @@ retry1:
 			break;
 	}
 
-	return STATUS_OK;
+	return result;
 }
 
 /*
@@ -2322,7 +2564,7 @@ processCancelRequest(Port *port, void *pkt)
 /*
  * canAcceptConnections --- check to see if database state allows connections.
  */
-static CAC_state
+CAC_state
 canAcceptConnections(void)
 {
 	CAC_state	result = CAC_OK;
@@ -2398,7 +2640,7 @@ ConnCreate(int serverFd)
 		ConnFree(port);
 		return NULL;
 	}
-
+	SessionPoolSock = PGINVALID_SOCKET;
 	/*
 	 * Allocate GSSAPI specific state struct
 	 */
@@ -2418,6 +2660,66 @@ ConnCreate(int serverFd)
 	return port;
 }
 
+#define CONN_BUF_SIZE 8192
+
+static Port *
+PoolConnCreate(pgsocket poolFd, int workerId)
+{
+	char				recv_buf[CONN_BUF_SIZE];
+	int					recv_len = 0,
+						i,
+						rc,
+						offs,
+						len;
+	StringInfoData		buf;
+	ConnPoolWorker	   *worker = &ConnPoolWorkers[workerId];
+	Port			   *port = worker->port;
+
+	if (worker->state != CPW_PROCESSED)
+		return NULL;
+
+	/* In any case we should free the worker */
+	worker->port = NULL;
+	worker->state = CPW_FREE;
+
+	while ((rc = read(poolFd, &recv_len, sizeof recv_len)) < 0 && errno == EINTR);
+	if (rc != (int)sizeof(recv_len))
+	{
+	  io_error:
+		StreamClose(port->sock);
+		ConnFree(port);
+		return NULL;
+	}
+
+	for (offs = 0; offs < recv_len; offs += rc)
+	{
+		while ((rc = read(poolFd, recv_buf + offs, CONN_BUF_SIZE - offs)) < 0 && errno == EINTR);
+		if (rc <= 0)
+			goto io_error;
+	}
+
+	buf.cursor = 0;
+	buf.data = recv_buf;
+	buf.len = recv_len;
+
+	port->proto = pq_getmsgint(&buf, 4);
+	port->database_name = MemoryContextStrdup(TopMemoryContext, pq_getmsgstring(&buf));
+	port->user_name = MemoryContextStrdup(TopMemoryContext, pq_getmsgstring(&buf));
+	port->guc_options = NIL;
+
+	/* GUC */
+	len = pq_getmsgint(&buf, 4);
+	for (i = 0; i < len; i++)
+	{
+		char	*val = MemoryContextStrdup(TopMemoryContext, pq_getmsgstring(&buf));
+		port->guc_options = lappend(port->guc_options, val);
+	}
+
+	if (pq_getmsgint(&buf, 4) > 0)
+		port->cmdline_options = MemoryContextStrdup(TopMemoryContext, pq_getmsgstring(&buf));
+
+	return port;
+}
 
 /*
  * ConnFree -- free a local connection data structure
@@ -2430,6 +2732,12 @@ ConnFree(Port *conn)
 #endif
 	if (conn->gss)
 		free(conn->gss);
+	if (conn->database_name)
+		pfree(conn->database_name);
+	if (conn->user_name)
+		pfree(conn->user_name);
+	if (conn->cmdline_options)
+		pfree(conn->cmdline_options);
 	free(conn);
 }
 
@@ -3185,6 +3493,44 @@ CleanupBackgroundWorker(int pid,
 }
 
 /*
+ * Unlink a pooled backend from its pool's worker array and close its session pipe.
+ */
+static void
+UnlinkPooledBackend(Backend *bp)
+{
+	DatabasePool	*pool = bp->pool;
+
+	if (!pool ||
+		bp->bkend_type != BACKEND_TYPE_NORMAL ||
+		bp->session_send_sock == PGINVALID_SOCKET)
+		return;
+
+	Assert(pool->n_workers > bp->worker_id &&
+		   pool->workers[bp->worker_id] == bp);
+
+	if (--pool->n_workers != 0)
+	{
+		pool->workers[bp->worker_id] = pool->workers[pool->n_workers];
+		pool->workers[bp->worker_id]->worker_id = bp->worker_id;
+		pool->rr_index %= pool->n_workers;
+	}
+
+	closesocket(bp->session_send_sock);
+	bp->session_send_sock = PGINVALID_SOCKET;
+
+	elog(DEBUG2, "Cleanup backend %d", bp->pid);
+}
+
+static void
+DeleteBackend(Backend *bp)
+{
+	UnlinkPooledBackend(bp);
+
+	dlist_delete(&bp->elem);
+	free(bp);
+}
+
+/*
  * CleanupBackend -- cleanup after terminated backend.
  *
  * Remove all local state associated with backend.
@@ -3261,8 +3607,7 @@ CleanupBackend(int pid,
 				 */
 				BackgroundWorkerStopNotifications(bp->pid);
 			}
-			dlist_delete(iter.cur);
-			free(bp);
+			DeleteBackend(bp);
 			break;
 		}
 	}
@@ -3364,8 +3709,7 @@ HandleChildCrash(int pid, int exitstatus, const char *procname)
 				ShmemBackendArrayRemove(bp);
 #endif
 			}
-			dlist_delete(iter.cur);
-			free(bp);
+			DeleteBackend(bp);
 			/* Keep looping so we can signal remaining backends */
 		}
 		else
@@ -3962,16 +4306,42 @@ TerminateChildren(int signal)
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
 static int
-BackendStartup(Port *port)
+BackendStartup(DatabasePool *pool, Port *port)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
+	pgsocket    session_pipe[2];
+
+	/*
+	 * In case of session pooling, instead of spawning a new backend we open
+	 * a new session at one of the existing backends.
+	 */
+	while (pool && pool->n_workers >= SessionPoolSize)
+	{
+		Backend *worker = pool->workers[pool->rr_index];
+		pool->rr_index = (pool->rr_index + 1) % pool->n_workers; /* round-robin */
+
+		/* Send connection socket to the worker backend */
+		if (pg_send_sock(worker->session_send_sock, port->sock, worker->pid) < 0)
+		{
+			elog(LOG, "failed to send session socket %d: %m",
+					worker->session_send_sock);
+			UnlinkPooledBackend(worker);
+			continue;
+		}
+
+		elog(DEBUG2, "starting new session for socket %d at backend %d",
+				port->sock, worker->pid);
+
+		/* TODO: serialize the port and send it through socket */
+		return STATUS_OK;
+	}
 
 	/*
 	 * Create backend data structure.  Better before the fork() so we can
 	 * handle failure cleanly.
 	 */
-	bn = (Backend *) malloc(sizeof(Backend));
+	bn = (Backend *) calloc(1, sizeof(Backend));
 	if (!bn)
 	{
 		ereport(LOG,
@@ -4012,12 +4382,30 @@ BackendStartup(Port *port)
 	/* Hasn't asked to be notified about any bgworkers yet */
 	bn->bgworker_notify = false;
 
+	/* Create socket pair for sending session sockets to the backend */
+	if (!IsDedicatedBackend)
+	{
+		if (socketpair(AF_UNIX, SOCK_STREAM, 0, session_pipe) < 0)
+			ereport(FATAL,
+					(errcode_for_file_access(),
+					 errmsg_internal("could not create socket pair for launching sessions: %m")));
+#ifdef WIN32
+		SessionPoolSock = session_pipe[0];
+#endif
+	}
 #ifdef EXEC_BACKEND
 	pid = backend_forkexec(port);
 #else							/* !EXEC_BACKEND */
 	pid = fork_process();
 	if (pid == 0)				/* child */
 	{
+		whereToSendOutput = DestNone;
+
+		if (!IsDedicatedBackend)
+		{
+			SessionPoolSock = session_pipe[0]; /* Use this socket for receiving client session socket descriptor */
+			close(session_pipe[1]); /* Close unused end of the pipe */
+		}
 		free(bn);
 
 		/* Detangle from postmaster */
@@ -4026,11 +4414,14 @@ BackendStartup(Port *port)
 		/* Close the postmaster's sockets */
 		ClosePostmasterPorts(false);
 
-		/* Perform additional initialization and collect startup packet */
+		/* Perform additional initialization */
 		BackendInitialize(port);
 
 		/* And run the backend */
 		BackendRun(port);
+
+		/* Unreachable */
+		Assert(false);
 	}
 #endif							/* EXEC_BACKEND */
 
@@ -4041,6 +4432,7 @@ BackendStartup(Port *port)
 
 		if (!bn->dead_end)
 			(void) ReleasePostmasterChildSlot(bn->child_slot);
+
 		free(bn);
 		errno = save_errno;
 		ereport(LOG,
@@ -4059,9 +4451,27 @@ BackendStartup(Port *port)
 	 * of backends.
 	 */
 	bn->pid = pid;
+	bn->session_send_sock = PGINVALID_SOCKET;
 	bn->bkend_type = BACKEND_TYPE_NORMAL;	/* Can change later to WALSND */
+	bn->pool = pool;
 	dlist_push_head(&BackendList, &bn->elem);
 
+	if (!IsDedicatedBackend)
+	{
+		/* Use this socket for sending client session socket descriptor */
+		bn->session_send_sock = session_pipe[1];
+
+		/* Close unused end of the pipe */
+		closesocket(session_pipe[0]);
+
+		if (pool->workers == NULL)
+			pool->workers = (Backend **) calloc(SessionPoolSize, sizeof(Backend *));
+
+		bn->worker_id = pool->n_workers++;
+		pool->workers[bn->worker_id] = bn;
+
+		elog(DEBUG1, "started pool worker %d (pid %d)", pool->n_workers, pid);
+	}
 #ifdef EXEC_BACKEND
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
@@ -4122,6 +4532,7 @@ BackendInitialize(Port *port)
 
 	/* Save port etc. for ps status */
 	MyProcPort = port;
+	FrontendProtocol = port->proto;
 
 	/*
 	 * PreAuthDelay is a debugging aid for investigating problems in the
@@ -4148,7 +4559,10 @@ BackendInitialize(Port *port)
 	 * Initialize libpq and enable reporting of ereport errors to the client.
 	 * Must do this now because authentication uses libpq to send messages.
 	 */
-	pq_init();					/* initialize libpq to talk to client */
+	port->pqcomm_state = pq_init(TopMemoryContext);   /* initialize libpq to talk to client */
+	port->pqcomm_waitset = pq_create_backend_event_set(TopMemoryContext, port, false);
+	pq_set_current_state(port->pqcomm_state, port, port->pqcomm_waitset);
+
 	whereToSendOutput = DestRemote; /* now safe to ereport to client */
 
 	/*
@@ -4227,35 +4641,46 @@ BackendInitialize(Port *port)
 		port->remote_hostname = strdup(remote_host);
 
 	/*
-	 * Ready to begin client interaction.  We will give up and exit(1) after a
-	 * time delay, so that a broken client can't hog a connection
-	 * indefinitely.  PreAuthDelay and any DNS interactions above don't count
-	 * against the time limit.
-	 *
-	 * Note: AuthenticationTimeout is applied here while waiting for the
-	 * startup packet, and then again in InitPostgres for the duration of any
-	 * authentication operations.  So a hostile client could tie up the
-	 * process for nearly twice AuthenticationTimeout before we kick him off.
-	 *
-	 * Note: because PostgresMain will call InitializeTimeouts again, the
-	 * registration of STARTUP_PACKET_TIMEOUT will be lost.  This is okay
-	 * since we never use it again after this function.
+	 * Read the startup packet only if we are not using session pooling
 	 */
-	RegisterTimeout(STARTUP_PACKET_TIMEOUT, StartupPacketTimeoutHandler);
-	enable_timeout_after(STARTUP_PACKET_TIMEOUT, AuthenticationTimeout * 1000);
+	if (IsDedicatedBackend && !port->proto)
+	{
+		/*
+		 * Ready to begin client interaction.  We will give up and exit(1) after a
+		 * time delay, so that a broken client can't hog a connection
+		 * indefinitely.  PreAuthDelay and any DNS interactions above don't count
+		 * against the time limit.
+		 *
+		 * Note: AuthenticationTimeout is applied here while waiting for the
+		 * startup packet, and then again in InitPostgres for the duration of any
+		 * authentication operations.  So a hostile client could tie up the
+		 * process for nearly twice AuthenticationTimeout before we kick him off.
+		 *
+		 * Note: because PostgresMain will call InitializeTimeouts again, the
+		 * registration of STARTUP_PACKET_TIMEOUT will be lost.  This is okay
+		 * since we never use it again after this function.
+		 */
+		RegisterTimeout(STARTUP_PACKET_TIMEOUT, StartupPacketTimeoutHandler);
+		enable_timeout_after(STARTUP_PACKET_TIMEOUT, AuthenticationTimeout * 1000);
 
-	/*
-	 * Receive the startup packet (which might turn out to be a cancel request
-	 * packet).
-	 */
-	status = ProcessStartupPacket(port, false);
+		/*
+		 * Receive the startup packet (which might turn out to be a cancel request
+		 * packet).
+		 */
+		status = ProcessStartupPacket(port, false, TopMemoryContext, FATAL);
 
-	/*
-	 * Stop here if it was bad or a cancel packet.  ProcessStartupPacket
-	 * already did any appropriate error reporting.
-	 */
-	if (status != STATUS_OK)
-		proc_exit(0);
+		/*
+		 * Stop here if it was bad or a cancel packet.  ProcessStartupPacket
+		 * already did any appropriate error reporting.
+		 */
+		if (status != STATUS_OK)
+			proc_exit(0);
+
+		/*
+		 * Disable the timeout
+		 */
+		disable_timeout(STARTUP_PACKET_TIMEOUT, false);
+	}
 
 	/*
 	 * Now that we have the user and database name, we can set the process
@@ -4277,9 +4702,8 @@ BackendInitialize(Port *port)
 						update_process_title ? "authentication" : "");
 
 	/*
-	 * Disable the timeout, and prevent SIGTERM/SIGQUIT again.
+	 * Prevent SIGTERM/SIGQUIT again.
 	 */
-	disable_timeout(STARTUP_PACKET_TIMEOUT, false);
 	PG_SETMASK(&BlockSig);
 }
 
@@ -5990,6 +6414,9 @@ save_backend_variables(BackendParameters *param, Port *port,
 	if (!write_inheritable_socket(&param->portsocket, port->sock, childPid))
 		return false;
 
+	if (!write_inheritable_socket(&param->sessionsocket, SessionPoolSock, childPid))
+		return false;
+
 	strlcpy(param->DataDir, DataDir, MAXPGPATH);
 
 	memcpy(&param->ListenSocket, &ListenSocket, sizeof(ListenSocket));
@@ -6222,6 +6649,7 @@ restore_backend_variables(BackendParameters *param, Port *port)
 {
 	memcpy(port, &param->port, sizeof(Port));
 	read_inheritable_socket(&port->sock, &param->portsocket);
+	read_inheritable_socket(&SessionPoolSock, &param->sessionsocket);
 
 	SetDataDir(param->DataDir);
 
diff --git a/src/backend/storage/ipc/ipc.c b/src/backend/storage/ipc/ipc.c
index a85a1c6..9802ca0 100644
--- a/src/backend/storage/ipc/ipc.c
+++ b/src/backend/storage/ipc/ipc.c
@@ -413,3 +413,12 @@ on_exit_reset(void)
 	on_proc_exit_index = 0;
 	reset_on_dsm_detach();
 }
+
+void
+on_shmem_exit_reset(void)
+{
+	before_shmem_exit_index = 0;
+	on_shmem_exit_index = 0;
+	on_proc_exit_index = 0;
+	reset_on_dsm_detach();
+}
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index 0c86a58..10e4613 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/connpool.h"
 #include "replication/logicallauncher.h"
 #include "replication/slot.h"
 #include "replication/walreceiver.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(bool makePrivate, int port)
 		size = add_size(size, SyncScanShmemSize());
 		size = add_size(size, AsyncShmemSize());
 		size = add_size(size, BackendRandomShmemSize());
+		size = add_size(size, ConnPoolShmemSize());
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
@@ -271,6 +273,11 @@ CreateSharedMemoryAndSemaphores(bool makePrivate, int port)
 	AsyncShmemInit();
 	BackendRandomShmemInit();
 
+	/*
+	 * Set up connection pool workers
+	 */
+	ConnectionPoolWorkersInit();
+
 #ifdef EXEC_BACKEND
 
 	/*
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index f6dda9c..605f054 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -76,6 +76,7 @@ struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* head of singly-linked list of free events, linked by "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -129,9 +130,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -562,6 +563,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -667,9 +669,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (latch)
 	{
@@ -690,8 +694,19 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
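+	/* Reuse a slot freed by an earlier event removal, if one is available */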
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->nevents += 1;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -718,15 +733,38 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
 /*
+ * Remove event with specified socket descriptor
+ */
+void
+DeleteWaitEventFromSet(WaitEventSet *set, pgsocket fd)
+{
+	int i, n = set->nevents;
+	for (i = 0; i < n; i++)
+	{
+		WaitEvent  *event = &set->events[i];
+		if (event->fd == fd)
+		{
+#if defined(WAIT_USE_EPOLL)
+			WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+			WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+			WaitEventAdjustWin32(set, event, true);
+#endif
+			break;
+		}
+	}
+}
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -774,9 +812,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -822,19 +860,38 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
+	Assert(rc >= 0);
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
 				 errmsg("epoll_ctl() failed: %m")));
+
+	if (action == EPOLL_CTL_DEL)
+	{
+		int pos = event->pos;
+		event->fd = PGINVALID_SOCKET;
+		set->nevents -= 1;
+		event->pos = set->free_events;
+		set->free_events = pos;
+	}
 }
 #endif
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	int pos = event->pos;
+	struct pollfd *pollfd = &set->pollfds[pos];
+
+	if (remove)
+	{
+		set->nevents -= 1;
+		*pollfd = set->pollfds[set->nevents];
+		set->events[pos] = set->events[set->nevents];
+		event->pos = pos;
+		return;
+	}
 
 	pollfd->revents = 0;
 	pollfd->fd = event->fd;
@@ -865,9 +922,25 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	int pos = event->pos;
+	HANDLE	   *handle = &set->handles[pos + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		set->nevents -= 1;
+		set->events[pos] = set->events[set->nevents];
+		*handle = set->handles[set->nevents + 1];
+		set->handles[set->nevents + 1] = WSA_INVALID_EVENT;
+		event->pos = pos;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -880,7 +953,7 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 	}
 	else
 	{
-		int			flags = FD_CLOSE;	/* always check for errors/EOF */
+		int flags = FD_CLOSE;	/* always check for errors/EOF */
 
 		if (event->events & WL_SOCKET_READABLE)
 			flags |= FD_READ;
@@ -897,8 +970,8 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 					 WSAGetLastError());
 		}
 		if (WSAEventSelect(event->fd, *handle, flags) != 0)
-			elog(ERROR, "failed to set up event for socket: error code %u",
-				 WSAGetLastError());
+			elog(ERROR, "failed to set up event for socket %p: error code %u",
+				 event->fd, WSAGetLastError());
 
 		Assert(event->fd != PGINVALID_SOCKET);
 	}
@@ -1296,7 +1369,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	{
 		if (cur_event->reset)
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 6f9aaa5..5356000 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -597,6 +597,15 @@ InitAuxiliaryProcess(void)
 }
 
 /*
+ * Generate unique session ID.
+ */
+uint32
+CreateSessionId(void)
+{
+	return ++SessionPool->sessionCount;
+}
+
+/*
  * Record the PID and PGPROC structures for the Startup process, for use in
  * ProcSendSignal().  See comments there for further explanation.
  */
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 7a9ada2..42017e4 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -40,6 +40,7 @@
 #include "access/printtup.h"
 #include "access/xact.h"
 #include "catalog/pg_type.h"
+#include "catalog/namespace.h"
 #include "commands/async.h"
 #include "commands/prepare.h"
 #include "executor/spi.h"
@@ -77,9 +78,12 @@
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
 #include "utils/timestamp.h"
+#include "utils/builtins.h"
+#include "utils/varlena.h"
+#include "utils/inval.h"
+#include "utils/catcache.h"
 #include "mb/pg_wchar.h"
 
-
 /* ----------------
  *		global variables
  * ----------------
@@ -100,6 +104,41 @@ int			max_stack_depth = 100;
 /* wait N seconds to allow attach from a debugger */
 int			PostAuthDelay = 0;
 
+/* Local socket for redirecting sessions to the backends */
+pgsocket SessionPoolSock = PGINVALID_SOCKET;
+
+/* Pointer to pool of sessions */
+BackendSessionPool	   *SessionPool = NULL;
+
+/* Pointer to the active session */
+SessionContext		   *ActiveSession;
+SessionContext		    DefaultContext;
+bool					IsDedicatedBackend = false;
+
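+/*
+ * Session-local variables are listed once in storage/sessionvars.h and are
+ * instantiated here via the SessionVariable X-macro; the helpers below
+ * save, load and initialize them on each session switch.
+ */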
+#define SessionVariable(type,name,init)  type name = init;
+#include "storage/sessionvars.h"
+
+static void SaveSessionVariables(SessionContext* session)
+{
+	if (session != NULL)
+	{
+#define SessionVariable(type,name,init) session->name = name;
+#include "storage/sessionvars.h"
+	}
+}
+
+static void LoadSessionVariables(SessionContext* session)
+{
+#define SessionVariable(type,name,init) name = session->name;
+#include "storage/sessionvars.h"
+}
+
+static void InitializeSessionVariables(SessionContext* session)
+{
+#define SessionVariable(type,name,init) session->name = DefaultContext.name;
+#include "storage/sessionvars.h"
+}
+
 
 
 /* ----------------
@@ -171,6 +210,8 @@ static ProcSignalReason RecoveryConflictReason;
 static MemoryContext row_description_context = NULL;
 static StringInfoData row_description_buf;
 
+static bool IdleInTransactionSessionError;
+
 /* ----------------------------------------------------------------
  *		decls for routines only used in this file
  * ----------------------------------------------------------------
@@ -196,6 +237,8 @@ static void log_disconnections(int code, Datum arg);
 static void enable_statement_timeout(void);
 static void disable_statement_timeout(void);
 
+static void DeleteSession(SessionContext *session);
+static void ResetCurrentSession(void);
 
 /* ----------------------------------------------------------------
  *		routines to obtain user input
@@ -1234,10 +1277,6 @@ exec_parse_message(const char *query_string,	/* string to execute */
 	bool		save_log_statement_stats = log_statement_stats;
 	char		msec_str[32];
 
-	/*
-	 * Report query to various monitoring facilities.
-	 */
-	debug_query_string = query_string;
 
 	pgstat_report_activity(STATE_RUNNING, query_string);
 
@@ -2930,9 +2969,29 @@ ProcessInterrupts(void)
 		LockErrorCleanup();
 		/* don't send to client, we already know the connection to be dead. */
 		whereToSendOutput = DestNone;
-		ereport(FATAL,
-				(errcode(ERRCODE_CONNECTION_FAILURE),
-				 errmsg("connection to client lost")));
+
+		if (ActiveSession)
+		{
+			Port *port = ActiveSession->port;
+			DeleteWaitEventFromSet(SessionPool->waitEvents, port->sock);
+
+			elog(LOG, "lost connection on session socket %d in backend %d", MyProcPort->sock, MyProcPid);
+
+			closesocket(port->sock);
+			port->sock = PGINVALID_SOCKET;
+
+			MyProcPort = NULL;
+
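+			/* Roll back any transaction the lost session left open, then detach it */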
+			StartTransactionCommand();
+			UserAbortTransactionBlock();
+			CommitTransactionCommand();
+
+			ResetCurrentSession();
+		}
+		else
+			ereport(FATAL,
+					(errcode(ERRCODE_CONNECTION_FAILURE),
+					 errmsg("connection to client lost")));
 	}
 
 	/*
@@ -3043,9 +3102,20 @@ ProcessInterrupts(void)
 	{
 		/* Has the timeout setting changed since last we looked? */
 		if (IdleInTransactionSessionTimeout > 0)
-			ereport(FATAL,
-					(errcode(ERRCODE_IDLE_IN_TRANSACTION_SESSION_TIMEOUT),
-					 errmsg("terminating connection due to idle-in-transaction timeout")));
+		{
+			if (ActiveSession)
+			{
+				IdleInTransactionSessionTimeoutPending = false;
+				IdleInTransactionSessionError = true;
+				ereport(ERROR,
+						(errcode(ERRCODE_IDLE_IN_TRANSACTION_SESSION_TIMEOUT),
+						 errmsg("canceling current transaction due to idle-in-transaction timeout")));
+			}
+			else
+				ereport(FATAL,
+						(errcode(ERRCODE_IDLE_IN_TRANSACTION_SESSION_TIMEOUT),
+						 errmsg("terminating connection due to idle-in-transaction timeout")));
+		}
 		else
 			IdleInTransactionSessionTimeoutPending = false;
 
@@ -3605,6 +3675,97 @@ process_postgres_switches(int argc, char *argv[], GucContext ctx,
 #endif
 }
 
+#define ACTIVE_SESSION_MAGIC  0xDEFA1234U
+#define REMOVED_SESSION_MAGIC 0xDEADDEEDU
+
+static SessionContext *
+CreateSession(void)
+{
+	SessionContext *session = (SessionContext *)
+		MemoryContextAllocZero(SessionPool->mcxt, sizeof(SessionContext));
+
+	session->memory = AllocSetContextCreate(SessionPool->mcxt,
+		"SessionMemoryContext", ALLOCSET_DEFAULT_SIZES);
+	session->prepared_queries = NULL;
+	session->id = CreateSessionId();
+	session->portals = CreatePortalsHashTable(session->memory);
+	session->magic = ACTIVE_SESSION_MAGIC;
+	return session;
+}
+
+static void
+SwitchToSession(SessionContext *session)
+{
+	/*
+	 * epoll may report events for an already closed session if its socket
+	 * is still open elsewhere.  From the epoll documentation:
+	 * Q6  Will closing a file descriptor cause it to be removed from all epoll sets automatically?
+	 *
+     * A6  Yes, but be aware of the following point.  A file descriptor is a reference to an open file description (see
+     *     open(2)).  Whenever a descriptor is duplicated via dup(2), dup2(2), fcntl(2) F_DUPFD, or fork(2), a new file
+     *     descriptor referring to the same open file description is created.  An open file  description  continues  to
+     *     exist  until  all  file  descriptors referring to it have been closed.  A file descriptor is removed from an
+     *     epoll set only after all the file descriptors referring to the underlying open file  description  have  been
+     *     closed  (or  before  if  the descriptor is explicitly removed using epoll_ctl(2) EPOLL_CTL_DEL).  This means
+     *     that even after a file descriptor that is part of an epoll set has been closed, events may be  reported  for
+     *     that  file  descriptor  if  other  file descriptors referring to the same underlying file description remain
+     *     open.
+     *
+     *     By checking that the magic field is still valid, we ignore such stale events.
+	 */
+	if (ActiveSession == session || session->magic != ACTIVE_SESSION_MAGIC)
+		return;
+
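+	/*
+	 * RestoreSessionGUCs swaps each variable's saved and current values, so
+	 * it is called once to stash the outgoing session's settings and once
+	 * more below to install the incoming session's settings.
+	 */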
+	SaveSessionVariables(ActiveSession);
+	RestoreSessionGUCs(ActiveSession);
+	ActiveSession = session;
+
+	MyProcPort = ActiveSession->port;
+	SetTempNamespaceState(ActiveSession->tempNamespace,
+						  ActiveSession->tempToastNamespace);
+	pq_set_current_state(session->port->pqcomm_state, session->port,
+						 session->eventSet);
+	whereToSendOutput = DestRemote;
+
+	RestoreSessionGUCs(ActiveSession);
+	LoadSessionVariables(ActiveSession);
+}
+
+static void
+ResetCurrentSession(void)
+{
+	if (!ActiveSession)
+		return;
+
+	whereToSendOutput = DestNone;
+	DeleteSession(ActiveSession);
+	pq_set_current_state(NULL, NULL, NULL);
+	SetTempNamespaceState(InvalidOid, InvalidOid);
+	ActiveSession = NULL;
+}
+
+/*
+ * Free all memory associated with session and delete session object itself.
+ */
+static void
+DeleteSession(SessionContext *session)
+{
+	elog(DEBUG1, "delete session %p, id=%u, memory context=%p",
+			session, session->id, session->memory);
+
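+	/* Drop the session's temporary-table namespace before releasing its memory */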
+	if (OidIsValid(session->tempNamespace))
+		ResetTempTableNamespace(session->tempNamespace);
+
+	DropAllPreparedStatements();
+	FreeWaitEventSet(session->eventSet);
+	RestoreSessionGUCs(session);
+	ReleaseSessionGUCs(session);
+	MemoryContextDelete(session->memory);
+	session->magic = REMOVED_SESSION_MAGIC;
+	pfree(session);
+
+	on_shmem_exit_reset();
+	pgstat_report_stat(true);
+}
 
 /* ----------------------------------------------------------------
  * PostgresMain
@@ -3656,6 +3817,33 @@ PostgresMain(int argc, char *argv[],
 							progname)));
 	}
 
+	/* Serve all connections to dedicated databases (such as "postgres") by dedicated backends */
+	if (IsDedicatedBackend)
+	{
+		SessionPoolSize = 0;
+		closesocket(SessionPoolSock);
+		SessionPoolSock = PGINVALID_SOCKET;
+	}
+
+	if (IsUnderPostmaster && !IsDedicatedBackend)
+	{
+		elog(DEBUG1, "session pooling is active for database %s", dbname);
+
+		/* Initialize sessions pool for this backend */
+		Assert(SessionPool == NULL);
+		SessionPool = (BackendSessionPool *) MemoryContextAllocZero(
+				TopMemoryContext, sizeof(BackendSessionPool));
+		SessionPool->mcxt = AllocSetContextCreate(TopMemoryContext,
+			"SessionPoolContext", ALLOCSET_DEFAULT_SIZES);
+
+		/* Save the original backend port here */
+		SessionPool->backendPort = MyProcPort;
+
+		ActiveSession = CreateSession();
+		ActiveSession->port = MyProcPort;
+		ActiveSession->eventSet = pq_get_current_waitset();
+	}
+
 	/* Acquire configuration parameters, unless inherited from postmaster */
 	if (!IsUnderPostmaster)
 	{
@@ -3784,7 +3972,7 @@ PostgresMain(int argc, char *argv[],
 	 * ... else we'd need to copy the Port data first.  Also, subsidiary data
 	 * such as the username isn't lost either; see ProcessStartupPacket().
 	 */
-	if (PostmasterContext)
+	if (PostmasterContext && SessionPoolSize == 0)
 	{
 		MemoryContextDelete(PostmasterContext);
 		PostmasterContext = NULL;
@@ -3922,7 +4110,8 @@ PostgresMain(int argc, char *argv[],
 		pq_comm_reset();
 
 		/* Report the error to the client and/or server log */
-		EmitErrorReport();
+		if (MyProcPort)
+			EmitErrorReport();
 
 		/*
 		 * Make sure debug_query_string gets reset before we possibly clobber
@@ -3982,13 +4171,26 @@ PostgresMain(int argc, char *argv[],
 		 * messages from the client, so there isn't much we can do with the
 		 * connection anymore.
 		 */
-		if (pq_is_reading_msg())
+		if (pq_is_reading_msg() && !ActiveSession)
 			ereport(FATAL,
 					(errcode(ERRCODE_PROTOCOL_VIOLATION),
 					 errmsg("terminating connection because protocol synchronization was lost")));
 
 		/* Now we can allow interrupts again */
 		RESUME_INTERRUPTS();
+
+		if (ActiveSession)
+		{
+			if (IdleInTransactionSessionError || (IsAbortedTransactionBlockState() && pq_is_reading_msg()))
+			{
+				StartTransactionCommand();
+				UserAbortTransactionBlock();
+				CommitTransactionCommand();
+				IdleInTransactionSessionError = false;
+			}
+			if (pq_is_reading_msg())
+				goto CloseSession;
+		}
 	}
 
 	/* We can now handle ereport(ERROR) */
@@ -3997,10 +4199,30 @@ PostgresMain(int argc, char *argv[],
 	if (!ignore_till_sync)
 		send_ready_for_query = true;	/* initially, or after error */
 
+
+	/* Initialize wait event set if we're using sessions pool */
+	if (SessionPool && SessionPool->waitEvents == NULL)
+	{
+		/* Construct wait event set if not constructed yet */
+		SessionPool->waitEvents = CreateWaitEventSet(SessionPool->mcxt, MaxSessions + 3);
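+		/* Three extra slots: postmaster death, the backend latch and the pool socket */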
+		/* Add event to detect postmaster death */
+		AddWaitEventToSet(SessionPool->waitEvents, WL_POSTMASTER_DEATH,
+				PGINVALID_SOCKET, NULL, ActiveSession);
+		/* Add event for backends latch */
+		AddWaitEventToSet(SessionPool->waitEvents, WL_LATCH_SET,
+				PGINVALID_SOCKET, MyLatch, ActiveSession);
+		/* Add event for accepting new sessions */
+		AddWaitEventToSet(SessionPool->waitEvents, WL_SOCKET_READABLE,
+				SessionPoolSock, NULL, ActiveSession);
+		/* Add event for current session */
+		AddWaitEventToSet(SessionPool->waitEvents, WL_SOCKET_READABLE,
+				ActiveSession->port->sock, NULL, ActiveSession);
+		SaveSessionVariables(&DefaultContext);
+	}
+
 	/*
 	 * Non-error queries loop here.
 	 */
-
 	for (;;)
 	{
 		/*
@@ -4076,6 +4298,130 @@ PostgresMain(int argc, char *argv[],
 
 			ReadyForQuery(whereToSendOutput);
 			send_ready_for_query = false;
+
+			/*
+			 * Here we multiplex client sessions if session pooling is enabled.
+			 * Since we do transaction-level pooling, rescheduling happens only
+			 * when we are not inside a transaction.
+			 */
+			if (SessionPoolSock != PGINVALID_SOCKET
+					&& !IsTransactionState()
+					&& !IsAbortedTransactionBlockState()
+					&& pq_available_bytes() == 0)
+			{
+				WaitEvent ready_client;
+
+ChooseSession:
+				DoingCommandRead = true;
+				/* Select which client session is ready to send new query */
+				if (WaitEventSetWait(SessionPool->waitEvents, -1,
+							&ready_client, 1, PG_WAIT_CLIENT) != 1)
+				{
+					/* TODO: do some error recovery here */
+					elog(FATAL, "failed to poll client sessions");
+				}
+				CHECK_FOR_INTERRUPTS();
+				DoingCommandRead = false;
+
+				if (ready_client.events & WL_POSTMASTER_DEATH)
+					ereport(FATAL,
+							(errcode(ERRCODE_ADMIN_SHUTDOWN),
+							 errmsg("terminating connection due to unexpected postmaster exit")));
+
+				if (ready_client.events & WL_LATCH_SET)
+				{
+					ResetLatch(MyLatch);
+					ProcessClientReadInterrupt(true);
+					goto ChooseSession;
+				}
+
+				if (ready_client.fd == SessionPoolSock)
+				{
+					/* Here we handle case of attaching new session */
+					SessionContext* session;
+					StringInfoData buf;
+					Port*    port;
+					pgsocket sock;
+					MemoryContext oldcontext;
+
+					sock = pg_recv_sock(SessionPoolSock);
+					if (sock == PGINVALID_SOCKET)
+						elog(ERROR, "failed to receive session socket: %m");
+
+					session = CreateSession();
+
+					/* Initialize port and wait event set for this session */
+					oldcontext = MemoryContextSwitchTo(session->memory);
+					MyProcPort = port = palloc(sizeof(Port));
+					memcpy(port, SessionPool->backendPort, sizeof(Port));
+
+					/*
+					 * Attach the received socket to the session's port.  The
+					 * startup information is inherited from the backend's
+					 * original port, so no startup packet is read here.
+					 */
+					port->sock = sock;
+					port->pqcomm_state = pq_init(session->memory);
+
+					session->port = port;
+					session->eventSet =
+						pq_create_backend_event_set(session->memory, port, false);
+					pq_set_current_state(session->port->pqcomm_state,
+										 port,
+										 session->eventSet);
+					whereToSendOutput = DestRemote;
+
+					MemoryContextSwitchTo(oldcontext);
+
+					if (AddWaitEventToSet(SessionPool->waitEvents, WL_SOCKET_READABLE,
+								sock, NULL, session) < 0)
+					{
+						elog(WARNING, "too many pooled sessions: %d", MaxSessions);
+						DeleteSession(session);
+						ActiveSession = NULL;
+						closesocket(sock);
+						goto ChooseSession;
+					}
+
+					elog(DEBUG1, "start new session (socket %d) in backend %d "
+						"for database %s, user %s", (int)sock, MyProcPid,
+						port->database_name, port->user_name);
+
+					SaveSessionVariables(ActiveSession);
+					RestoreSessionGUCs(ActiveSession);
+					ActiveSession = session;
+					InitializeSessionVariables(session);
+					LoadSessionVariables(session);
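+					/*
+					 * Authentication and loading of per-user settings need
+					 * catalog access, so they are performed inside a
+					 * transaction.
+					 */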
+					SetCurrentStatementStartTimestamp();
+					StartTransactionCommand();
+					PerformAuthentication(MyProcPort);
+					process_settings(MyDatabaseId, GetSessionUserId());
+					CommitTransactionCommand();
+					SetTempNamespaceState(InvalidOid, InvalidOid);
+
+					/*
+					 * Send GUC options to the client
+					 */
+					BeginReportingGUCOptions();
+
+					/*
+					 * Send this backend's cancellation info to the frontend.
+					 */
+					pq_beginmessage(&buf, 'K');
+					pq_sendint(&buf, (int32) MyProcPid, 4);
+					pq_sendint(&buf, (int32) MyCancelKey, 4);
+					pq_endmessage(&buf);
+
+					/* Need not flush since ReadyForQuery will do it. */
+					send_ready_for_query = true;
+
+					continue;
+				}
+				else
+				{
+					SessionContext* session = (SessionContext *) ready_client.user_data;
+					SwitchToSession(session);
+				}
+			}
 		}
 
 		/*
@@ -4118,6 +4464,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
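+			/*
+			 * With restart_pooler_on_reload enabled, pooled backends exit on
+			 * configuration reload so that newly launched workers pick up
+			 * the changed settings.
+			 */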
+			if (ActiveSession && RestartPoolerOnReload)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
@@ -4355,6 +4703,46 @@ PostgresMain(int argc, char *argv[],
 				 * it will fail to be called during other backend-shutdown
 				 * scenarios.
 				 */
+
+				if (SessionPool)
+				{
+CloseSession:
+					/*
+					 * In case of session pooling close the session, but do not
+					 * terminate the backend even when it has no more sessions.
+					 * Keeping the backend alive avoids redundant process
+					 * launches when a client repeatedly opens and closes
+					 * connections to the database.  The maximal number of
+					 * launched backends is assumed to be optimal for this
+					 * system and workload, so there is no reason to reduce it
+					 * while there are no active sessions.
+					 */
+					if (MyProcPort)
+					{
+						elog(DEBUG1, "closing session (socket %d) in backend %d", MyProcPort->sock, MyProcPid);
+
+						DeleteWaitEventFromSet(SessionPool->waitEvents, MyProcPort->sock);
+
+						pq_getmsgend(&input_message);
+						if (pq_is_reading_msg())
+							pq_endmsgread();
+
+						closesocket(MyProcPort->sock);
+						MyProcPort->sock = PGINVALID_SOCKET;
+						MyProcPort = NULL;
+					}
+
+					if (ActiveSession)
+					{
+						StartTransactionCommand();
+						UserAbortTransactionBlock();
+						CommitTransactionCommand();
+
+						ResetCurrentSession();
+					}
+
+					/* Need to perform rescheduling to some other session or accept new session */
+					goto ChooseSession;
+				}
+				elog(DEBUG1, "terminating backend %d", MyProcPid);
 				proc_exit(0);
 
 			case 'd':			/* copy data */
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index e95e347..6726195 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -875,6 +875,17 @@ pg_backend_pid(PG_FUNCTION_ARGS)
 	PG_RETURN_INT32(MyProcPid);
 }
 
+Datum
+pg_session_id(PG_FUNCTION_ARGS)
+{
+	char	*s;
+	if (ActiveSession)
+		s = psprintf("%d.%u", MyProcPid, ActiveSession->id);
+	else
+		s = psprintf("%d", MyProcPid);
+
+	PG_RETURN_TEXT_P(CStringGetTextDatum(s));
+}
 
 Datum
 pg_stat_get_backend_pid(PG_FUNCTION_ARGS)
diff --git a/src/backend/utils/cache/plancache.c b/src/backend/utils/cache/plancache.c
index 7271b58..6b0cb54 100644
--- a/src/backend/utils/cache/plancache.c
+++ b/src/backend/utils/cache/plancache.c
@@ -61,6 +61,7 @@
 #include "parser/analyze.h"
 #include "parser/parsetree.h"
 #include "storage/lmgr.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/inval.h"
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 6125421..7ce5671 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -78,6 +78,7 @@
 #include "rewrite/rewriteDefine.h"
 #include "rewrite/rowsecurity.h"
 #include "storage/lmgr.h"
+#include "storage/proc.h"
 #include "storage/smgr.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
@@ -1943,6 +1944,13 @@ RelationIdGetRelation(Oid relationId)
 			Assert(rd->rd_isvalid ||
 				   (rd->rd_isnailed && !criticalRelcachesBuilt));
 		}
+		/*
+		 * In case of session pooling the relation descriptor may have been
+		 * constructed by some other session, so we need to recheck the
+		 * rd_islocaltemp value.
+		 */
+		if (ActiveSession && RELATION_IS_OTHER_TEMP(rd) && isTempOrTempToastNamespace(rd->rd_rel->relnamespace))
+			rd->rd_islocaltemp = true;
+
 		return rd;
 	}
 
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index f7d6617..0617f4b 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -128,7 +128,9 @@ int			max_parallel_maintenance_workers = 2;
  * register background workers.
  */
 int			NBuffers = 1000;
+int			SessionPoolSize = 0;
 int			MaxConnections = 90;
+int			MaxSessions = 1000;
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
@@ -147,3 +149,6 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+
+bool        RestartPoolerOnReload = false;
+char       *DedicatedDatabases;
diff --git a/src/backend/utils/init/miscinit.c b/src/backend/utils/init/miscinit.c
index 865119d..715429a 100644
--- a/src/backend/utils/init/miscinit.c
+++ b/src/backend/utils/init/miscinit.c
@@ -250,19 +250,6 @@ ChangeToDataDir(void)
  * convenient way to do it.
  * ----------------------------------------------------------------
  */
-static Oid	AuthenticatedUserId = InvalidOid;
-static Oid	SessionUserId = InvalidOid;
-static Oid	OuterUserId = InvalidOid;
-static Oid	CurrentUserId = InvalidOid;
-
-/* We also have to remember the superuser state of some of these levels */
-static bool AuthenticatedUserIsSuperuser = false;
-static bool SessionUserIsSuperuser = false;
-
-static int	SecurityRestrictionContext = 0;
-
-/* We also remember if a SET ROLE is currently active */
-static bool SetRoleIsActive = false;
 
 /*
  * Initialize the basic environment for a postmaster child
@@ -345,13 +332,15 @@ InitStandaloneProcess(const char *argv0)
 void
 SwitchToSharedLatch(void)
 {
+	WaitEventSet *waitset;
 	Assert(MyLatch == &LocalLatchData);
 	Assert(MyProc != NULL);
 
 	MyLatch = &MyProc->procLatch;
 
-	if (FeBeWaitSet)
-		ModifyWaitEvent(FeBeWaitSet, 1, WL_LATCH_SET, MyLatch);
+	waitset = pq_get_current_waitset();
+	if (waitset)
+		ModifyWaitEvent(waitset, 1, WL_LATCH_SET, MyLatch);
 
 	/*
 	 * Set the shared latch as the local one might have been set. This
@@ -364,13 +353,15 @@ SwitchToSharedLatch(void)
 void
 SwitchBackToLocalLatch(void)
 {
+	WaitEventSet *waitset;
 	Assert(MyLatch != &LocalLatchData);
 	Assert(MyProc != NULL && MyLatch == &MyProc->procLatch);
 
 	MyLatch = &LocalLatchData;
 
-	if (FeBeWaitSet)
-		ModifyWaitEvent(FeBeWaitSet, 1, WL_LATCH_SET, MyLatch);
+	waitset = pq_get_current_waitset();
+	if (waitset)
+		ModifyWaitEvent(waitset, 1, WL_LATCH_SET, MyLatch);
 
 	SetLatch(MyLatch);
 }
@@ -434,6 +425,8 @@ SetSessionUserId(Oid userid, bool is_superuser)
 	/* We force the effective user IDs to match, too */
 	OuterUserId = userid;
 	CurrentUserId = userid;
+
+	SysCacheInvalidate(AUTHOID, 0);
 }
 
 /*
diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c
index 5ef6315..f1d6834 100644
--- a/src/backend/utils/init/postinit.c
+++ b/src/backend/utils/init/postinit.c
@@ -62,10 +62,8 @@
 #include "utils/timeout.h"
 #include "utils/tqual.h"
 
-
 static HeapTuple GetDatabaseTuple(const char *dbname);
 static HeapTuple GetDatabaseTupleByOid(Oid dboid);
-static void PerformAuthentication(Port *port);
 static void CheckMyDatabase(const char *name, bool am_superuser, bool override_allow_connections);
 static void InitCommunication(void);
 static void ShutdownPostgres(int code, Datum arg);
@@ -74,7 +72,6 @@ static void LockTimeoutHandler(void);
 static void IdleInTransactionSessionTimeoutHandler(void);
 static bool ThereIsAtLeastOneRole(void);
 static void process_startup_options(Port *port, bool am_superuser);
-static void process_settings(Oid databaseid, Oid roleid);
 
 
 /*** InitPostgres support ***/
@@ -180,7 +177,7 @@ GetDatabaseTupleByOid(Oid dboid)
  *
  * returns: nothing.  Will not return at all if there's any failure.
  */
-static void
+void
 PerformAuthentication(Port *port)
 {
 	/* This should be set already, but let's make sure */
@@ -1126,7 +1123,7 @@ process_startup_options(Port *port, bool am_superuser)
  * We try specific settings for the database/role combination, as well as
  * general for this database and for this user.
  */
-static void
+void
 process_settings(Oid databaseid, Oid roleid)
 {
 	Relation	relsetting;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 0625eff..f435356 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -59,6 +59,7 @@
 #include "postmaster/autovacuum.h"
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
+#include "postmaster/connpool.h"
 #include "postmaster/postmaster.h"
 #include "postmaster/syslogger.h"
 #include "postmaster/walwriter.h"
@@ -587,6 +588,8 @@ const char *const config_group_names[] =
 	gettext_noop("Connections and Authentication / Authentication"),
 	/* CONN_AUTH_SSL */
 	gettext_noop("Connections and Authentication / SSL"),
+	/* CONN_POOLING */
+	gettext_noop("Connections and Authentication / Connection Pooling"),
 	/* RESOURCES */
 	gettext_noop("Resource Usage"),
 	/* RESOURCES_MEM */
@@ -1192,6 +1195,16 @@ static struct config_bool ConfigureNamesBool[] =
 	},
 
 	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
 			NULL
@@ -1998,8 +2011,41 @@ static struct config_int ConfigureNamesInt[] =
 		check_maxconnections, NULL, NULL
 	},
 
+	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximal number of client sessions that can be handled by one backend when session pooling is switched on, "
+						 "so the maximal number of client connections is session_pool_size * max_sessions.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
-		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends, and the maximal number of backends is determined by this parameter. "
+						 "Launched backends are never terminated, even when there are no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_pool_workers", PGC_POSTMASTER, CONN_POOLING,
+		 gettext_noop("Sets the number of connection pool workers."),
+		 NULL,
+	    },
+		&NumConnPoolWorkers,
+		2, 0, MAX_CONNPOOL_WORKERS,
+		NULL, NULL, NULL
+	},
+
+	{
+		/* see max_connections and max_wal_senders */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
 			NULL
@@ -3340,9 +3386,9 @@ static struct config_string ConfigureNamesString[] =
 
 	{
 		{"temp_tablespaces", PGC_USERSET, CLIENT_CONN_STATEMENT,
-			gettext_noop("Sets the tablespace(s) to use for temporary tables and sort files."),
-			NULL,
-			GUC_LIST_INPUT | GUC_LIST_QUOTE
+		    gettext_noop("Sets the tablespace(s) to use for temporary tables and sort files."),
+		    NULL,
+		    GUC_LIST_INPUT | GUC_LIST_QUOTE
 		},
 		&temp_tablespaces,
 		"",
@@ -3350,6 +3396,16 @@ static struct config_string ConfigureNamesString[] =
 	},
 
 	{
+		{"dedicated_databases", PGC_USERSET, CONN_POOLING,
+			gettext_noop("Set of databases for which session pooling is disabled."),
+			NULL,
+			GUC_LIST_INPUT | GUC_LIST_QUOTE
+		},
+		&DedicatedDatabases,
+		"template0, template1, postgres"
+	},
+
+	{
 		{"dynamic_library_path", PGC_SUSET, CLIENT_CONN_OTHER,
 			gettext_noop("Sets the path for dynamically loadable modules."),
 			gettext_noop("If a dynamically loadable module needs to be opened and "
@@ -5346,6 +5402,164 @@ NewGUCNestLevel(void)
 }
 
 /*
+ * Save a changed variable after a SET command.  Variables are appended to
+ * the tail of the list so that they are later restored in the same order
+ * in which they were set.
+ */
+static void
+SaveSessionGUCs(SessionContext *session,
+				struct config_generic *gconf,
+				config_var_value *prior_val)
+{
+	SessionGUC	*sg;
+
+	/* Find needed GUC in active session */
+	for (sg = session->gucs;
+			sg != NULL && sg->var != gconf; sg = sg->next);
+
+	if (sg != NULL)
+		/* already there */
+		return;
+
+	sg = MemoryContextAllocZero(session->memory, sizeof(SessionGUC));
+	sg->var = gconf;
+	sg->saved.extra = prior_val->extra;
+
+	switch (gconf->vartype)
+	{
+		case PGC_BOOL:
+			sg->saved.val.boolval = prior_val->val.boolval;
+			break;
+		case PGC_INT:
+			sg->saved.val.intval = prior_val->val.intval;
+			break;
+		case PGC_REAL:
+			sg->saved.val.realval = prior_val->val.realval;
+			break;
+		case PGC_STRING:
+			sg->saved.val.stringval = prior_val->val.stringval;
+			break;
+		case PGC_ENUM:
+			sg->saved.val.enumval = prior_val->val.enumval;
+			break;
+	}
+
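+	/* Append at the tail so that variables are restored in the order they were set */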
+	if (session->gucs)
+	{
+		SessionGUC	*latest;
+
+		/* Move to end of the list */
+		for (latest = session->gucs;
+				latest->next != NULL; latest = latest->next);
+		latest->next = sg;
+	}
+	else
+		session->gucs = sg;
+}
+
+/*
+ * Install this session's GUC values, swapping them with the current ones.
+ */
+void
+RestoreSessionGUCs(SessionContext* session)
+{
+	SessionGUC	*sg;
+	bool save_reporting_enabled;
+
+	if (session == NULL)
+		return;
+
+	save_reporting_enabled = reporting_enabled;
+	reporting_enabled = false;
+
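+	/*
+	 * Each assignment below swaps the variable's current value with its
+	 * saved one, so a second call returns the settings to their prior
+	 * state.  GUC_REPORT messages are suppressed while swapping.
+	 */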
+	for (sg = session->gucs; sg != NULL; sg = sg->next)
+	{
+		void	*saved_extra = sg->saved.extra;
+		void	*old_extra = sg->var->extra;
+
+		/* restore extra */
+		sg->var->extra = saved_extra;
+		sg->saved.extra = old_extra;
+
+		/* restore actual values */
+		switch (sg->var->vartype)
+		{
+			case PGC_BOOL:
+			{
+				struct config_bool *conf = (struct config_bool *)sg->var;
+				bool oldval = *conf->variable;
+				*conf->variable = sg->saved.val.boolval;
+				if (conf->assign_hook)
+					conf->assign_hook(sg->saved.val.boolval, saved_extra);
+
+				sg->saved.val.boolval = oldval;
+				break;
+			}
+			case PGC_INT:
+			{
+				struct config_int *conf = (struct config_int*) sg->var;
+				int oldval = *conf->variable;
+				*conf->variable = sg->saved.val.intval;
+				if (conf->assign_hook)
+					conf->assign_hook(sg->saved.val.intval, saved_extra);
+				sg->saved.val.intval = oldval;
+				break;
+			}
+			case PGC_REAL:
+			{
+				struct config_real *conf = (struct config_real*) sg->var;
+				double oldval = *conf->variable;
+				*conf->variable = sg->saved.val.realval;
+				if (conf->assign_hook)
+					conf->assign_hook(sg->saved.val.realval, saved_extra);
+				sg->saved.val.realval = oldval;
+				break;
+			}
+			case PGC_STRING:
+			{
+				struct config_string *conf = (struct config_string*) sg->var;
+				char* oldval = *conf->variable;
+				*conf->variable = sg->saved.val.stringval;
+				if (conf->assign_hook)
+					conf->assign_hook(sg->saved.val.stringval, saved_extra);
+				sg->saved.val.stringval = oldval;
+				break;
+			}
+			case PGC_ENUM:
+			{
+				struct config_enum *conf = (struct config_enum*) sg->var;
+				int oldval = *conf->variable;
+				*conf->variable = sg->saved.val.enumval;
+				if (conf->assign_hook)
+					conf->assign_hook(sg->saved.val.enumval, saved_extra);
+				sg->saved.val.enumval = oldval;
+				break;
+			}
+		}
+	}
+	reporting_enabled = save_reporting_enabled;
+}
+
+/*
+ * Deallocate memory for session GUCs
+ */
+void
+ReleaseSessionGUCs(SessionContext* session)
+{
+	SessionGUC* sg;
+	for (sg = session->gucs; sg != NULL; sg = sg->next)
+	{
+		if (sg->saved.extra)
+			set_extra_field(sg->var, &sg->saved.extra, NULL);
+
+		if (sg->var->vartype == PGC_STRING)
+		{
+			struct config_string* conf = (struct config_string*)sg->var;
+			set_string_field(conf, &sg->saved.val.stringval, NULL);
+		}
+	}
+}
+
+/*
  * Do GUC processing at transaction or subtransaction commit or abort, or
  * when exiting a function that has proconfig settings, or when undoing a
  * transient assignment to some GUC variables.  (The name is thus a bit of
@@ -5413,8 +5627,10 @@ AtEOXact_GUC(bool isCommit, int nestLevel)
 					restoreMasked = true;
 				else if (stack->state == GUC_SET)
 				{
-					/* we keep the current active value */
-					discard_stack_value(gconf, &stack->prior);
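+					/* With session pooling, remember the prior value so it can be restored on session switch */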
+					if (ActiveSession)
+						SaveSessionGUCs(ActiveSession, gconf, &stack->prior);
+					else
+						discard_stack_value(gconf, &stack->prior);
 				}
 				else			/* must be GUC_LOCAL */
 					restorePrior = true;
@@ -5440,8 +5656,8 @@ AtEOXact_GUC(bool isCommit, int nestLevel)
 
 					case GUC_SET:
 						/* next level always becomes SET */
-						discard_stack_value(gconf, &stack->prior);
-						if (prev->state == GUC_SET_LOCAL)
+					    discard_stack_value(gconf, &stack->prior);
+					    if (prev->state == GUC_SET_LOCAL)
 							discard_stack_value(gconf, &prev->masked);
 						prev->state = GUC_SET;
 						break;
diff --git a/src/backend/utils/misc/superuser.c b/src/backend/utils/misc/superuser.c
index fbe83c9..1ebc379 100644
--- a/src/backend/utils/misc/superuser.c
+++ b/src/backend/utils/misc/superuser.c
@@ -24,6 +24,7 @@
 #include "catalog/pg_authid.h"
 #include "utils/inval.h"
 #include "utils/syscache.h"
+#include "storage/proc.h"
 #include "miscadmin.h"
 
 
@@ -33,8 +34,6 @@
  * the status of the last requested roleid.  The cache can be flushed
  * at need by watching for cache update events on pg_authid.
  */
-static Oid	last_roleid = InvalidOid;	/* InvalidOid == cache not valid */
-static bool last_roleid_is_super = false;
 static bool roleid_callback_registered = false;
 
 static void RoleidCallback(Datum arg, int cacheid, uint32 hashvalue);
diff --git a/src/backend/utils/mmgr/portalmem.c b/src/backend/utils/mmgr/portalmem.c
index 04ea32f..a8c27a3 100644
--- a/src/backend/utils/mmgr/portalmem.c
+++ b/src/backend/utils/mmgr/portalmem.c
@@ -23,6 +23,7 @@
 #include "commands/portalcmds.h"
 #include "miscadmin.h"
 #include "storage/ipc.h"
+#include "storage/proc.h"
 #include "utils/builtins.h"
 #include "utils/memutils.h"
 #include "utils/snapmgr.h"
@@ -53,11 +54,14 @@ typedef struct portalhashent
 
 static HTAB *PortalHashTable = NULL;
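+/*
+ * With session pooling each session keeps its own portals hash table;
+ * otherwise we fall back to the backend-wide PortalHashTable.
+ */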
 
+#define CurrentPortalHashTable() \
+	(ActiveSession ? ActiveSession->portals : PortalHashTable)
+
 #define PortalHashTableLookup(NAME, PORTAL) \
 do { \
 	PortalHashEnt *hentry; \
 	\
-	hentry = (PortalHashEnt *) hash_search(PortalHashTable, \
+	hentry = (PortalHashEnt *) hash_search(CurrentPortalHashTable(), \
 										   (NAME), HASH_FIND, NULL); \
 	if (hentry) \
 		PORTAL = hentry->portal; \
@@ -69,7 +73,7 @@ do { \
 do { \
 	PortalHashEnt *hentry; bool found; \
 	\
-	hentry = (PortalHashEnt *) hash_search(PortalHashTable, \
+	hentry = (PortalHashEnt *) hash_search(CurrentPortalHashTable(), \
 										   (NAME), HASH_ENTER, &found); \
 	if (found) \
 		elog(ERROR, "duplicate portal name"); \
@@ -82,7 +86,7 @@ do { \
 do { \
 	PortalHashEnt *hentry; \
 	\
-	hentry = (PortalHashEnt *) hash_search(PortalHashTable, \
+	hentry = (PortalHashEnt *) hash_search(CurrentPortalHashTable(), \
 										   PORTAL->name, HASH_REMOVE, NULL); \
 	if (hentry == NULL) \
 		elog(WARNING, "trying to delete portal name that does not exist"); \
@@ -90,12 +94,33 @@ do { \
 
 static MemoryContext TopPortalContext = NULL;
 
-
 /* ----------------------------------------------------------------
  *				   public portal interface functions
  * ----------------------------------------------------------------
  */
 
+HTAB *
+CreatePortalsHashTable(MemoryContext mcxt)
+{
+	HASHCTL		ctl;
+	int			flags = HASH_ELEM;
+
+	ctl.keysize = MAX_PORTALNAME_LEN;
+	ctl.entrysize = sizeof(PortalHashEnt);
+
+	if (mcxt)
+	{
+		ctl.hcxt = mcxt;
+		flags |= HASH_CONTEXT;
+	}
+
+	/*
+	 * use PORTALS_PER_USER as a guess of how many hash table entries to
+	 * create, initially
+	 */
+	return hash_create("Portal hash", PORTALS_PER_USER, &ctl, flags);
+}
+
 /*
  * EnablePortalManager
  *		Enables the portal management module at backend startup.
@@ -103,23 +128,13 @@ static MemoryContext TopPortalContext = NULL;
 void
 EnablePortalManager(void)
 {
-	HASHCTL		ctl;
-
 	Assert(TopPortalContext == NULL);
 
 	TopPortalContext = AllocSetContextCreate(TopMemoryContext,
-											 "TopPortalContext",
-											 ALLOCSET_DEFAULT_SIZES);
-
-	ctl.keysize = MAX_PORTALNAME_LEN;
-	ctl.entrysize = sizeof(PortalHashEnt);
+										 "TopPortalContext",
+										 ALLOCSET_DEFAULT_SIZES);
 
-	/*
-	 * use PORTALS_PER_USER as a guess of how many hash table entries to
-	 * create, initially
-	 */
-	PortalHashTable = hash_create("Portal hash", PORTALS_PER_USER,
-								  &ctl, HASH_ELEM);
+	PortalHashTable = CreatePortalsHashTable(NULL);
 }
 
 /*
@@ -602,11 +617,14 @@ PortalHashTableDeleteAll(void)
 {
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab;
 
-	if (PortalHashTable == NULL)
+	htab = CurrentPortalHashTable();
+
+	if (htab == NULL)
 		return;
 
-	hash_seq_init(&status, PortalHashTable);
+	hash_seq_init(&status, htab);
 	while ((hentry = hash_seq_search(&status)) != NULL)
 	{
 		Portal		portal = hentry->portal;
@@ -619,7 +637,7 @@ PortalHashTableDeleteAll(void)
 
 		/* Restart the iteration in case that led to other drops */
 		hash_seq_term(&status);
-		hash_seq_init(&status, PortalHashTable);
+		hash_seq_init(&status, htab);
 	}
 }
 
@@ -672,8 +690,10 @@ PreCommit_Portals(bool isPrepare)
 	bool		result = false;
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab;
 
-	hash_seq_init(&status, PortalHashTable);
+	htab = CurrentPortalHashTable();
+	hash_seq_init(&status, htab);
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
@@ -746,7 +766,7 @@ PreCommit_Portals(bool isPrepare)
 		 * caused a drop of the next portal in the hash chain.
 		 */
 		hash_seq_term(&status);
-		hash_seq_init(&status, PortalHashTable);
+		hash_seq_init(&status, htab);
 	}
 
 	return result;
@@ -763,8 +783,11 @@ AtAbort_Portals(void)
 {
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab;
+
+	htab = CurrentPortalHashTable();
 
-	hash_seq_init(&status, PortalHashTable);
+	hash_seq_init(&status, htab);
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
@@ -840,8 +863,11 @@ AtCleanup_Portals(void)
 {
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab;
 
-	hash_seq_init(&status, PortalHashTable);
+	htab = CurrentPortalHashTable();
+
+	hash_seq_init(&status, htab);
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
@@ -899,8 +925,10 @@ PortalErrorCleanup(void)
 {
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab;
 
-	hash_seq_init(&status, PortalHashTable);
+	htab = CurrentPortalHashTable();
+	hash_seq_init(&status, htab);
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
@@ -927,8 +955,9 @@ AtSubCommit_Portals(SubTransactionId mySubid,
 {
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab = CurrentPortalHashTable();
 
-	hash_seq_init(&status, PortalHashTable);
+	hash_seq_init(&status, htab);
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
@@ -962,8 +991,11 @@ AtSubAbort_Portals(SubTransactionId mySubid,
 {
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab;
+
+	htab = CurrentPortalHashTable();
 
-	hash_seq_init(&status, PortalHashTable);
+	hash_seq_init(&status, htab);
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
@@ -1072,8 +1104,9 @@ AtSubCleanup_Portals(SubTransactionId mySubid)
 {
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab = CurrentPortalHashTable();
 
-	hash_seq_init(&status, PortalHashTable);
+	hash_seq_init(&status, htab);
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
@@ -1161,7 +1194,7 @@ pg_cursor(PG_FUNCTION_ARGS)
 	/* generate junk in short-term context */
 	MemoryContextSwitchTo(oldcontext);
 
-	hash_seq_init(&hash_seq, PortalHashTable);
+	hash_seq_init(&hash_seq, CurrentPortalHashTable());
 	while ((hentry = hash_seq_search(&hash_seq)) != NULL)
 	{
 		Portal		portal = hentry->portal;
@@ -1200,7 +1233,7 @@ ThereAreNoReadyPortals(void)
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
 
-	hash_seq_init(&status, PortalHashTable);
+	hash_seq_init(&status, CurrentPortalHashTable());
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
@@ -1229,8 +1262,11 @@ HoldPinnedPortals(void)
 {
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab;
+
+	htab = CurrentPortalHashTable();
 
-	hash_seq_init(&status, PortalHashTable);
+	hash_seq_init(&status, htab);
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
diff --git a/src/include/catalog/namespace.h b/src/include/catalog/namespace.h
index 0e20237..ddcc3c8 100644
--- a/src/include/catalog/namespace.h
+++ b/src/include/catalog/namespace.h
@@ -144,7 +144,9 @@ extern void GetTempNamespaceState(Oid *tempNamespaceId,
 					  Oid *tempToastNamespaceId);
 extern void SetTempNamespaceState(Oid tempNamespaceId,
 					  Oid tempToastNamespaceId);
-extern void ResetTempTableNamespace(void);
+
+struct SessionContext;
+extern void ResetTempTableNamespace(Oid npc);
 
 extern OverrideSearchPath *GetOverrideSearchPath(MemoryContext context);
 extern OverrideSearchPath *CopyOverrideSearchPath(OverrideSearchPath *path);
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index a146510..62fb7a4 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5202,6 +5202,9 @@
 { oid => '2026', descr => 'statistics: current backend PID',
   proname => 'pg_backend_pid', provolatile => 's', proparallel => 'r',
   prorettype => 'int4', proargtypes => '', prosrc => 'pg_backend_pid' },
+{ oid => '3436', descr => 'statistics: current session ID',
+  proname => 'pg_session_id', provolatile => 's', proparallel => 'r',
+  prorettype => 'int4', proargtypes => '', prosrc => 'pg_session_id' },
 { oid => '1937', descr => 'statistics: PID of backend',
   proname => 'pg_stat_get_backend_pid', provolatile => 's', proparallel => 'r',
   prorettype => 'int4', proargtypes => 'int4',
diff --git a/src/include/commands/prepare.h b/src/include/commands/prepare.h
index ffec029..fdf1854 100644
--- a/src/include/commands/prepare.h
+++ b/src/include/commands/prepare.h
@@ -56,5 +56,6 @@ extern TupleDesc FetchPreparedStatementResultDesc(PreparedStatement *stmt);
 extern List *FetchPreparedStatementTargetList(PreparedStatement *stmt);
 
 extern void DropAllPreparedStatements(void);
+extern void DropSessionPreparedStatements(uint32 sessionId);
 
 #endif							/* PREPARE_H */
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index ef5528c..bb6d359 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -66,6 +66,7 @@ typedef struct
 #include "datatype/timestamp.h"
 #include "libpq/hba.h"
 #include "libpq/pqcomm.h"
+#include "storage/latch.h"
 
 
 typedef enum CAC_state
@@ -139,6 +140,12 @@ typedef struct Port
 	List	   *guc_options;
 
 	/*
+	 * libpq communication state
+	 */
+	void			*pqcomm_state;
+	WaitEventSet	*pqcomm_waitset;
+
+	/*
 	 * Information that needs to be held during the authentication cycle.
 	 */
 	HbaLine    *hba;
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 36baf6b..10ba28b 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -60,7 +60,12 @@ extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
 extern void RemoveSocketFiles(void);
-extern void pq_init(void);
+extern void *pq_init(MemoryContext mcxt);
+extern void pq_reset(void);
+extern void pq_set_current_state(void *state, Port *port, WaitEventSet *set);
+extern WaitEventSet *pq_get_current_waitset(void);
+extern WaitEventSet *pq_create_backend_event_set(MemoryContext mcxt,
+												 Port *port, bool onlySock);
 extern int	pq_getbytes(char *s, size_t len);
 extern int	pq_getstring(StringInfo s);
 extern void pq_startmsgread(void);
@@ -71,6 +76,7 @@ extern int	pq_getbyte(void);
 extern int	pq_peekbyte(void);
 extern int	pq_getbyte_if_available(unsigned char *c);
 extern int	pq_putbytes(const char *s, size_t len);
+extern int  pq_available_bytes(void);
 
 /*
  * prototypes for functions in be-secure.c
@@ -96,8 +102,6 @@ extern ssize_t secure_raw_write(Port *port, const void *ptr, size_t len);
 
 extern bool ssl_loaded_verify_locations;
 
-extern WaitEventSet *FeBeWaitSet;
-
 /* GUCs */
 extern char *SSLCipherSuites;
 extern char *SSLECDHCurve;
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index e167ee8..9652340 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -26,6 +26,7 @@
 #include <signal.h>
 
 #include "pgtime.h"				/* for pg_time_t */
+#include "utils/palloc.h"
 
 
 #define InvalidPid				(-1)
@@ -150,6 +151,9 @@ extern PGDLLIMPORT bool IsUnderPostmaster;
 extern PGDLLIMPORT bool IsBackgroundWorker;
 extern PGDLLIMPORT bool IsBinaryUpgrade;
 
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+extern PGDLLIMPORT char* DedicatedDatabases;
+
 extern PGDLLIMPORT bool ExitOnAnyError;
 
 extern PGDLLIMPORT char *DataDir;
@@ -158,10 +162,14 @@ extern PGDLLIMPORT int data_directory_mode;
 extern PGDLLIMPORT int NBuffers;
 extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int SessionPoolPorts;
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
 extern PGDLLIMPORT int MyProcPid;
+extern PGDLLIMPORT uint32 MySessionId;
 extern PGDLLIMPORT pg_time_t MyStartTime;
 extern PGDLLIMPORT struct Port *MyProcPort;
 extern PGDLLIMPORT struct Latch *MyLatch;
@@ -335,6 +343,9 @@ extern void SwitchBackToLocalLatch(void);
 extern bool superuser(void);	/* current user is superuser */
 extern bool superuser_arg(Oid roleid);	/* given user is superuser */
 
+/* in utils/init/postinit.c */
+extern void process_settings(Oid databaseid, Oid roleid);
+
 
 /*****************************************************************************
  *	  pmod.h --																 *
@@ -425,6 +436,7 @@ extern void InitializeMaxBackends(void);
 extern void InitPostgres(const char *in_dbname, Oid dboid, const char *username,
 			 Oid useroid, char *out_dbname, bool override_allow_connections);
 extern void BaseInit(void);
+extern void PerformAuthentication(struct Port *port);
 
 /* in utils/init/miscinit.c */
 extern bool IgnoreSystemIndexes;
@@ -445,6 +457,9 @@ extern void process_session_preload_libraries(void);
 extern void pg_bindtextdomain(const char *domain);
 extern bool has_rolreplication(Oid roleid);
 
+extern void *GetLocalUserIdStateCopy(MemoryContext mcxt);
+extern void SetCurrentUserIdState(void *userId);
+
 /* in access/transam/xlog.c */
 extern bool BackupInProgress(void);
 extern void CancelBackup(void);
diff --git a/src/include/port.h b/src/include/port.h
index 74a9dc4..ac53f3c 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index b398cd3..01971bc 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -447,6 +447,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -456,6 +457,7 @@ int			pgwin32_connect(SOCKET s, const struct sockaddr *name, int namelen);
 int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptfds, const struct timeval *timeout);
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 const char *pgwin32_socket_strerror(int err);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
diff --git a/src/include/postmaster/connpool.h b/src/include/postmaster/connpool.h
new file mode 100644
index 0000000..45aa37c
--- /dev/null
+++ b/src/include/postmaster/connpool.h
@@ -0,0 +1,54 @@
+#ifndef CONN_POOL_H
+#define CONN_POOL_H
+
+#include "port.h"
+#include "libpq/libpq-be.h"
+
+#define MAX_CONNPOOL_WORKERS	100
+
+typedef enum
+{
+	CPW_FREE,
+	CPW_NEW_SOCKET,
+	CPW_PROCESSED
+} ConnPoolWorkerState;
+
+enum CAC_state;
+
+typedef struct ConnPoolWorker
+{
+	Port	   *port;		/* port in the pool */
+	int			pipes[2];	/* 0 for sending, 1 for receiving */
+
+	/* The communication procedure (postmaster side):
+	 * ) find a worker with state == CPW_FREE
+	 * ) assign the client socket to it
+	 * ) add its pipe to the wait set (if it's not there yet)
+	 * ) wake up the worker
+	 * ) process data from the worker until it sets state to CPW_PROCESSED
+	 * ) set state to CPW_FREE
+	 * ) fork, or send the socket and the data to an existing backend.
+	 *
+	 * bgworker side:
+	 * ) wakes up
+	 * ) checks the state
+	 * ) if state is CPW_NEW_SOCKET, reads data from the client socket
+	 * and sends it through the pipe to the postmaster
+	 * ) sets state to CPW_PROCESSED.
+	 */
+	volatile ConnPoolWorkerState	state;
+	volatile CAC_state				cac_state;
+	pid_t							pid;
+	Latch						   *latch;
+} ConnPoolWorker;
+
+extern Size ConnPoolShmemSize(void);
+extern void ConnectionPoolWorkersInit(void);
+extern void RegisterConnPoolWorkers(void);
+extern void StartupPacketReaderMain(Datum arg);
+
+/* global variables */
+extern int NumConnPoolWorkers;
+extern ConnPoolWorker *ConnPoolWorkers;
+
+#endif
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index 1877eef..1f16836 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -62,6 +62,10 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+struct Port;
+extern int	ProcessStartupPacket(struct Port *port, bool SSLdone,
+						MemoryContext memctx, int errlevel);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/storage/ipc.h b/src/include/storage/ipc.h
index 6a05a89..9cddaf9 100644
--- a/src/include/storage/ipc.h
+++ b/src/include/storage/ipc.h
@@ -72,6 +72,7 @@ extern void on_shmem_exit(pg_on_exit_callback function, Datum arg);
 extern void before_shmem_exit(pg_on_exit_callback function, Datum arg);
 extern void cancel_before_shmem_exit(pg_on_exit_callback function, Datum arg);
 extern void on_exit_reset(void);
+extern void on_shmem_exit_reset(void);
 
 /* ipci.c */
 extern PGDLLIMPORT shmem_startup_hook_type shmem_startup_hook;
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index fd8735b..c7dd708 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -176,6 +176,8 @@ extern int WaitLatch(volatile Latch *latch, int wakeEvents, long timeout,
 extern int WaitLatchOrSocket(volatile Latch *latch, int wakeEvents,
 				  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, pgsocket fd);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index cb613c8..f3c1079 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -21,6 +21,7 @@
 #include "storage/lock.h"
 #include "storage/pg_sema.h"
 #include "storage/proclist_types.h"
+#include "utils/guc_tables.h"
 
 /*
  * Each backend advertises up to PGPROC_MAX_CACHED_SUBXIDS TransactionIds
@@ -276,6 +277,57 @@ extern PGDLLIMPORT PROC_HDR *ProcGlobal;
 
 extern PGPROC *PreparedXactProcs;
 
+typedef struct SessionGUC
+{
+	struct SessionGUC	   *next;
+	config_var_value		saved;
+	struct config_generic  *var;
+} SessionGUC;
+
+/*
+ * Information associated with client session.
+ */
+typedef struct SessionContext
+{
+	uint32          magic;              /* Magic to validate content of session object */
+	uint32			id;					/* session identifier, unique across many backends */
+	/* Memory context used for global session data (instead of TopMemoryContext) */
+	MemoryContext	memory;
+	struct Port*	port;				/* connection port */
+	Oid				tempNamespace;		/* temporary namespace */
+	Oid				tempToastNamespace;	/* temporary toast namespace */
+	SessionGUC	   *gucs;				/* session local GUCs */
+	WaitEventSet   *eventSet;			/* Wait set for the session */
+	HTAB		   *prepared_queries;	/* Session prepared queries */
+	HTAB		   *portals;			/* Session portals */
+	void		   *userId;				/* Current role state */
+	#define SessionVariable(type,name,init)  type name;
+	#include "storage/sessionvars.h"
+} SessionContext;
+
+#define SessionVariable(type,name,init)  extern type name;
+#include "storage/sessionvars.h"
+
+typedef struct Port Port;
+typedef struct BackendSessionPool
+{
+	MemoryContext	mcxt;
+
+	WaitEventSet   *waitEvents;		/* Set of all sessions sockets */
+	uint32			sessionCount;   /* Number of sessions */
+
+	/*
+	 * Reference to the original port created when this backend was launched.
+	 * The session using this port may already have been terminated, but
+	 * since the port is allocated in TopMemoryContext, its content remains
+	 * valid and is used as a template for the ports of new sessions.
+	 */
+	Port		   *backendPort;
+} BackendSessionPool;
+
+extern PGDLLIMPORT SessionContext		*ActiveSession;
+extern PGDLLIMPORT BackendSessionPool	*SessionPool;
+
 /* Accessor for PGPROC given a pgprocno. */
 #define GetPGProcByNumber(n) (&ProcGlobal->allProcs[(n)])
 
@@ -295,7 +347,7 @@ extern int	StatementTimeout;
 extern int	LockTimeout;
 extern int	IdleInTransactionSessionTimeout;
 extern bool log_lock_waits;
-
+extern bool IsDedicatedBackend;
 
 /*
  * Function Prototypes
@@ -321,6 +373,7 @@ extern void ProcLockWakeup(LockMethod lockMethodTable, LOCK *lock);
 extern void CheckDeadLockAlert(void);
 extern bool IsWaitingForLock(void);
 extern void LockErrorCleanup(void);
+extern uint32 CreateSessionId(void);
 
 extern void ProcWaitForSignal(uint32 wait_event_info);
 extern void ProcSendSignal(int pid);
diff --git a/src/include/storage/sessionvars.h b/src/include/storage/sessionvars.h
new file mode 100644
index 0000000..690c56f
--- /dev/null
+++ b/src/include/storage/sessionvars.h
@@ -0,0 +1,13 @@
+/* SessionVariable(type,name,init) */
+SessionVariable(Oid, AuthenticatedUserId, InvalidOid)
+SessionVariable(Oid, SessionUserId, InvalidOid)
+SessionVariable(Oid, OuterUserId, InvalidOid)
+SessionVariable(Oid, CurrentUserId, InvalidOid)
+SessionVariable(bool, AuthenticatedUserIsSuperuser, false)
+SessionVariable(bool, SessionUserIsSuperuser, false)
+SessionVariable(int, SecurityRestrictionContext, 0)
+SessionVariable(bool, SetRoleIsActive, false)
+SessionVariable(Oid, last_roleid, InvalidOid)
+SessionVariable(bool, last_roleid_is_super, false)
+SessionVariable(struct SeqTableData*, last_used_seq, NULL)
+#undef SessionVariable
diff --git a/src/include/tcop/tcopprot.h b/src/include/tcop/tcopprot.h
index 63b4e48..51d130c 100644
--- a/src/include/tcop/tcopprot.h
+++ b/src/include/tcop/tcopprot.h
@@ -31,9 +31,11 @@
 #define STACK_DEPTH_SLOP (512 * 1024L)
 
 extern CommandDest whereToSendOutput;
+
 extern PGDLLIMPORT const char *debug_query_string;
 extern int	max_stack_depth;
 extern int	PostAuthDelay;
+extern pgsocket SessionPoolSock;
 
 /* GUC-configurable parameters */
 
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index f462eab..338f0ec 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -395,6 +395,12 @@ extern Size EstimateGUCStateSpace(void);
 extern void SerializeGUCState(Size maxsize, char *start_address);
 extern void RestoreGUCState(void *gucstate);
 
+/* Session pooling support functions */
+struct SessionContext;
+extern void RestoreSessionGUCs(struct SessionContext* session);
+extern void ReleaseSessionGUCs(struct SessionContext* session);
+
+
 /* Support for messages reported from GUC check hooks */
 
 extern PGDLLIMPORT char *GUC_check_errmsg_string;
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index 668d9ef..e3f2e5a 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/include/utils/portal.h b/src/include/utils/portal.h
index e4929b9..69ac10d 100644
--- a/src/include/utils/portal.h
+++ b/src/include/utils/portal.h
@@ -202,6 +202,7 @@ typedef struct PortalData
 
 
 /* Prototypes for functions in utils/mmgr/portalmem.c */
+extern HTAB *CreatePortalsHashTable(MemoryContext mcxt);
 extern void EnablePortalManager(void);
 extern bool PreCommit_Portals(bool isPrepare);
 extern void AtAbort_Portals(void);
#130Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Konstantin Knizhnik (#129)
1 attachment(s)
Re: Built-in connection pooling

I am continuing work on the built-in connection pooler.
I have implemented three strategies for distributing sessions among
session pool workers (a sketch of the selection logic is given below):
- random
- round-robin
- load balancing (choose the backend with the minimal wait queue size)
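
To illustrate the three policies, here is a minimal sketch of the
selection step. The SessionPoolBackend struct, the SessionSchedulePolicy
enum and the queue_size counter are hypothetical names used only for
illustration, not the identifiers from the attached patch:

#include <stdlib.h>

typedef enum
{
	SESSION_SCHED_RANDOM,
	SESSION_SCHED_ROUND_ROBIN,
	SESSION_SCHED_LOAD_BALANCING
} SessionSchedulePolicy;

typedef struct
{
	int		queue_size;	/* sessions currently waiting on this backend */
	/* ... communication channel to the backend, etc. */
} SessionPoolBackend;

/* Return the index of the backend the next session should be attached to */
static int
ChooseBackend(SessionPoolBackend *backends, int n_backends,
			  SessionSchedulePolicy policy)
{
	static int	rr_index = 0;	/* round-robin cursor */
	int			i;
	int			best = 0;

	switch (policy)
	{
		case SESSION_SCHED_RANDOM:
			return random() % n_backends;
		case SESSION_SCHED_ROUND_ROBIN:
			return rr_index++ % n_backends;
		case SESSION_SCHED_LOAD_BALANCING:
			/* pick the backend with the shortest wait queue */
			for (i = 1; i < n_backends; i++)
				if (backends[i].queue_size < backends[best].queue_size)
					best = i;
			return best;
	}
	return 0;
}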

This still does not fix the main drawback of the current implementation
of the built-in pooler: a long transaction or query can block all other
sessions scheduled to the same backend. To prevent such situations we
would have to somehow migrate sessions to other (idle) backends.
Unfortunately, a session has to take a lot of "luggage" with it: serialized
GUCs, prepared statements and, worst of all, temporary tables.
While the first two can in principle be handled (a sketch for GUCs is
given below), what to do with temporary tables is unclear.
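
For GUCs, the serialization machinery that already exists for parallel
workers (EstimateGUCStateSpace/SerializeGUCState/RestoreGUCState) could
in principle be reused. A rough sketch, assuming some transport between
the two backends; send_to_target_backend() is a hypothetical placeholder,
not an existing function:

	/* in the backend the session is leaving */
	Size	size = EstimateGUCStateSpace();
	char   *gucstate = palloc(size);

	SerializeGUCState(size, gucstate);
	send_to_target_backend(gucstate, size);	/* hypothetical transport */

	/* in the backend the session migrates to */
	RestoreGUCState(gucstate);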

Frankly speaking, I think the implementation of temporary tables in
Postgres has to be rewritten in any case. They cause catalog bloat,
cannot be used in parallel queries, ...
Maybe in the course of such a rewrite the temporary table implementation
can be better married with the built-in connection pooler.
But right now sessions cannot be migrated.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

session_pool-11.patchtext/x-patch; name=session_pool-11.patchDownload
diff --git a/contrib/test_decoding/sql/messages.sql b/contrib/test_decoding/sql/messages.sql
index cf3f773..14c4163 100644
--- a/contrib/test_decoding/sql/messages.sql
+++ b/contrib/test_decoding/sql/messages.sql
@@ -23,6 +23,8 @@ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'for
 
 -- test db filtering
 \set prevdb :DBNAME
+show session_pool_size;
+show session_pool_ports;
 \c template1
 
 SELECT 'otherdb1' FROM pg_logical_emit_message(false, 'test', 'otherdb1');
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index bee4afb..061b67a 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -703,6 +703,125 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one backend when session pooling is switched on.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached,
+          the backend stops accepting connections. Until one of the
+          connections is terminated, attempts to connect to this
+          backend result in an error.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched backends are never terminated even if there are no active sessions.
+        </para>
+        <para>
+          The default value is zero, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-pool-workers" xreflabel="connection_pool_workers">
+      <term><varname>connection_pool_workers</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_pool_workers</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Number of connection listeners used to read client startup packets.
+          If session pooling is enabled, the <productname>&productname;</productname>
+          server redirects all client startup packets to a connection listener.
+          The listener determines the database and user that the client needs
+          to access and redirects the connection to an appropriate backend,
+          which is selected from the pool using the round-robin algorithm.
+          This approach prevents server slowdown when a client connects
+          via a slow or unreliable network.
+        </para>
+        <para>
+          The default value is 2.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-dedicated-databases" xreflabel="dedicated_databases">
+      <term><varname>dedicated_databases</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>dedicated_databases</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the list of databases for which session pooling is disabled.
+          For such databases, a separate backend is forked for each connection.
+          By default, session pooling is disabled for <literal>template0</literal>,
+          <literal>template1</literal>, and <literal>postgres</literal> databases.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restart session pool workers whenever <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to backends when
+          connection pooling is used. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy, the postmaster cyclically distributes sessions among session pool backends.
+        </para>
+        <para>
+          With the <literal>random</literal> policy, the postmaster chooses a random backend from the session pool.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy, the postmaster chooses the backend with the lowest load average.
+          The load average of a backend is estimated by the number of ready events at each reschedule iteration.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/src/backend/catalog/namespace.c b/src/backend/catalog/namespace.c
index 5d13e6a..5a93c7e 100644
--- a/src/backend/catalog/namespace.c
+++ b/src/backend/catalog/namespace.c
@@ -178,7 +178,6 @@ static List *overrideStack = NIL;
  * committed its creation, depending on whether myTempNamespace is valid.
  */
 static Oid	myTempNamespace = InvalidOid;
-
 static Oid	myTempToastNamespace = InvalidOid;
 
 static SubTransactionId myTempNamespaceSubID = InvalidSubTransactionId;
@@ -193,6 +192,7 @@ char	   *namespace_search_path = NULL;
 /* Local functions */
 static void recomputeNamespacePath(void);
 static void InitTempTableNamespace(void);
+static Oid  GetTempTableNamespace(void);
 static void RemoveTempRelations(Oid tempNamespaceId);
 static void RemoveTempRelationsCallback(int code, Datum arg);
 static void NamespaceCallback(Datum arg, int cacheid, uint32 hashvalue);
@@ -460,9 +460,7 @@ RangeVarGetCreationNamespace(const RangeVar *newRelation)
 		if (strcmp(newRelation->schemaname, "pg_temp") == 0)
 		{
 			/* Initialize temp namespace if first time through */
-			if (!OidIsValid(myTempNamespace))
-				InitTempTableNamespace();
-			return myTempNamespace;
+			return GetTempTableNamespace();
 		}
 		/* use exact schema given */
 		namespaceId = get_namespace_oid(newRelation->schemaname, false);
@@ -471,9 +469,7 @@ RangeVarGetCreationNamespace(const RangeVar *newRelation)
 	else if (newRelation->relpersistence == RELPERSISTENCE_TEMP)
 	{
 		/* Initialize temp namespace if first time through */
-		if (!OidIsValid(myTempNamespace))
-			InitTempTableNamespace();
-		return myTempNamespace;
+		return GetTempTableNamespace();
 	}
 	else
 	{
@@ -482,8 +478,7 @@ RangeVarGetCreationNamespace(const RangeVar *newRelation)
 		if (activeTempCreationPending)
 		{
 			/* Need to initialize temp namespace */
-			InitTempTableNamespace();
-			return myTempNamespace;
+			return GetTempTableNamespace();
 		}
 		namespaceId = activeCreationNamespace;
 		if (!OidIsValid(namespaceId))
@@ -2921,9 +2916,7 @@ LookupCreationNamespace(const char *nspname)
 	if (strcmp(nspname, "pg_temp") == 0)
 	{
 		/* Initialize temp namespace if first time through */
-		if (!OidIsValid(myTempNamespace))
-			InitTempTableNamespace();
-		return myTempNamespace;
+		return GetTempTableNamespace();
 	}
 
 	namespaceId = get_namespace_oid(nspname, false);
@@ -2986,9 +2979,7 @@ QualifiedNameGetCreationNamespace(List *names, char **objname_p)
 		if (strcmp(schemaname, "pg_temp") == 0)
 		{
 			/* Initialize temp namespace if first time through */
-			if (!OidIsValid(myTempNamespace))
-				InitTempTableNamespace();
-			return myTempNamespace;
+			return GetTempTableNamespace();
 		}
 		/* use exact schema given */
 		namespaceId = get_namespace_oid(schemaname, false);
@@ -3001,8 +2992,7 @@ QualifiedNameGetCreationNamespace(List *names, char **objname_p)
 		if (activeTempCreationPending)
 		{
 			/* Need to initialize temp namespace */
-			InitTempTableNamespace();
-			return myTempNamespace;
+			return GetTempTableNamespace();
 		}
 		namespaceId = activeCreationNamespace;
 		if (!OidIsValid(namespaceId))
@@ -3254,16 +3244,28 @@ int
 GetTempNamespaceBackendId(Oid namespaceId)
 {
 	int			result;
-	char	   *nspname;
+	char	   *nspname,
+			   *addlevel;
 
 	/* See if the namespace name starts with "pg_temp_" or "pg_toast_temp_" */
 	nspname = get_namespace_name(namespaceId);
 	if (!nspname)
 		return InvalidBackendId;	/* no such namespace? */
 	if (strncmp(nspname, "pg_temp_", 8) == 0)
-		result = atoi(nspname + 8);
+	{
+		/* check for session id */
+		if ((addlevel = strstr(nspname + 8, "_")) != NULL)
+			result = atoi(addlevel + 1);
+		else
+			result = atoi(nspname + 8);
+	}
 	else if (strncmp(nspname, "pg_toast_temp_", 14) == 0)
-		result = atoi(nspname + 14);
+	{
+		if ((addlevel = strstr(nspname + 14, "_")) != NULL)
+			result = atoi(addlevel + 1);
+		else
+			result = atoi(nspname + 14);
+	}
 	else
 		result = InvalidBackendId;
 	pfree(nspname);
@@ -3309,8 +3311,11 @@ void
 SetTempNamespaceState(Oid tempNamespaceId, Oid tempToastNamespaceId)
 {
 	/* Worker should not have created its own namespaces ... */
-	Assert(myTempNamespace == InvalidOid);
-	Assert(myTempToastNamespace == InvalidOid);
+	if (!ActiveSession)
+	{
+		Assert(myTempNamespace == InvalidOid);
+		Assert(myTempToastNamespace == InvalidOid);
+	}
 	Assert(myTempNamespaceSubID == InvalidSubTransactionId);
 
 	/* Assign same namespace OIDs that leader has */
@@ -3830,6 +3835,24 @@ recomputeNamespacePath(void)
 	list_free(oidlist);
 }
 
+static Oid
+GetTempTableNamespace(void)
+{
+	if (ActiveSession)
+	{
+		if (!OidIsValid(ActiveSession->tempNamespace))
+			InitTempTableNamespace();
+		else
+			myTempNamespace = ActiveSession->tempNamespace;
+	}
+	else
+	{
+		if (!OidIsValid(myTempNamespace))
+			InitTempTableNamespace();
+	}
+	return myTempNamespace;
+}
+
 /*
  * InitTempTableNamespace
  *		Initialize temp table namespace on first use in a particular backend
@@ -3841,8 +3864,6 @@ InitTempTableNamespace(void)
 	Oid			namespaceId;
 	Oid			toastspaceId;
 
-	Assert(!OidIsValid(myTempNamespace));
-
 	/*
 	 * First, do permission check to see if we are authorized to make temp
 	 * tables.  We use a nonstandard error message here since "databasename:
@@ -3881,7 +3902,12 @@ InitTempTableNamespace(void)
 				(errcode(ERRCODE_READ_ONLY_SQL_TRANSACTION),
 				 errmsg("cannot create temporary tables during a parallel operation")));
 
-	snprintf(namespaceName, sizeof(namespaceName), "pg_temp_%d", MyBackendId);
+	if (ActiveSession)
+		snprintf(namespaceName, sizeof(namespaceName), "pg_temp_%u_%d",
+					ActiveSession->id, MyBackendId);
+	else
+		snprintf(namespaceName, sizeof(namespaceName), "pg_temp_%d",
+					MyBackendId);
 
 	namespaceId = get_namespace_oid(namespaceName, true);
 	if (!OidIsValid(namespaceId))
@@ -3913,8 +3939,12 @@ InitTempTableNamespace(void)
 	 * it. (We assume there is no need to clean it out if it does exist, since
 	 * dropping a parent table should make its toast table go away.)
 	 */
-	snprintf(namespaceName, sizeof(namespaceName), "pg_toast_temp_%d",
-			 MyBackendId);
+	if (ActiveSession)
+		snprintf(namespaceName, sizeof(namespaceName), "pg_toast_temp_%u_%d",
+					ActiveSession->id, MyBackendId);
+	else
+		snprintf(namespaceName, sizeof(namespaceName), "pg_toast_temp_%d",
+					MyBackendId);
 
 	toastspaceId = get_namespace_oid(namespaceName, true);
 	if (!OidIsValid(toastspaceId))
@@ -3945,6 +3975,11 @@ InitTempTableNamespace(void)
 	 */
 	MyProc->tempNamespaceId = namespaceId;
 
+	if (ActiveSession)
+	{
+		ActiveSession->tempNamespace = namespaceId;
+		ActiveSession->tempToastNamespace = toastspaceId;
+	}
 	/* It should not be done already. */
 	AssertState(myTempNamespaceSubID == InvalidSubTransactionId);
 	myTempNamespaceSubID = GetCurrentSubTransactionId();
@@ -3974,6 +4009,11 @@ AtEOXact_Namespace(bool isCommit, bool parallel)
 		{
 			myTempNamespace = InvalidOid;
 			myTempToastNamespace = InvalidOid;
+			if (ActiveSession)
+			{
+				ActiveSession->tempNamespace = InvalidOid;
+				ActiveSession->tempToastNamespace = InvalidOid;
+			}
 			baseSearchPathValid = false;	/* need to rebuild list */
 
 			/*
@@ -4121,13 +4161,16 @@ RemoveTempRelations(Oid tempNamespaceId)
 static void
 RemoveTempRelationsCallback(int code, Datum arg)
 {
-	if (OidIsValid(myTempNamespace))	/* should always be true */
+	Oid		tempNamespace = ActiveSession ?
+		ActiveSession->tempNamespace : myTempNamespace;
+
+	if (OidIsValid(tempNamespace))	/* should always be true */
 	{
 		/* Need to ensure we have a usable transaction. */
 		AbortOutOfAnyTransaction();
 		StartTransactionCommand();
 
-		RemoveTempRelations(myTempNamespace);
+		RemoveTempRelations(tempNamespace);
 
 		CommitTransactionCommand();
 	}
@@ -4137,10 +4180,19 @@ RemoveTempRelationsCallback(int code, Datum arg)
  * Remove all temp tables from the temporary namespace.
  */
 void
-ResetTempTableNamespace(void)
+ResetTempTableNamespace(Oid npc)
 {
-	if (OidIsValid(myTempNamespace))
-		RemoveTempRelations(myTempNamespace);
+	if (OidIsValid(npc))
+	{
+		AbortOutOfAnyTransaction();
+		StartTransactionCommand();
+		RemoveTempRelations(npc);
+		CommitTransactionCommand();
+	}
+	else
+		/* global */
+		if (OidIsValid(myTempNamespace))
+			RemoveTempRelations(myTempNamespace);
 }
 
 
diff --git a/src/backend/catalog/pg_db_role_setting.c b/src/backend/catalog/pg_db_role_setting.c
index e123691..23ff527 100644
--- a/src/backend/catalog/pg_db_role_setting.c
+++ b/src/backend/catalog/pg_db_role_setting.c
@@ -16,6 +16,7 @@
 #include "catalog/indexing.h"
 #include "catalog/objectaccess.h"
 #include "catalog/pg_db_role_setting.h"
+#include "storage/proc.h"
 #include "utils/fmgroids.h"
 #include "utils/rel.h"
 #include "utils/tqual.h"
diff --git a/src/backend/catalog/storage.c b/src/backend/catalog/storage.c
index 5df4382..f57a950 100644
--- a/src/backend/catalog/storage.c
+++ b/src/backend/catalog/storage.c
@@ -24,6 +24,7 @@
 #include "access/xlog.h"
 #include "access/xloginsert.h"
 #include "access/xlogutils.h"
+#include "catalog/namespace.h"
 #include "catalog/storage.h"
 #include "catalog/storage_xlog.h"
 #include "storage/freespace.h"
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index 9bc67ce..3c90f8d 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -2447,7 +2447,7 @@ CopyFrom(CopyState cstate)
 		 * registers the snapshot it uses.
 		 */
 		InvalidateCatalogSnapshot();
-		if (!ThereAreNoPriorRegisteredSnapshots() || !ThereAreNoReadyPortals())
+		if (!ThereAreNoPriorRegisteredSnapshots() || (SessionPoolSize == 0 && !ThereAreNoReadyPortals()))
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 					 errmsg("cannot perform FREEZE because of prior transaction activity")));
diff --git a/src/backend/commands/discard.c b/src/backend/commands/discard.c
index 01a999c..363a52a 100644
--- a/src/backend/commands/discard.c
+++ b/src/backend/commands/discard.c
@@ -45,7 +45,7 @@ DiscardCommand(DiscardStmt *stmt, bool isTopLevel)
 			break;
 
 		case DISCARD_TEMP:
-			ResetTempTableNamespace();
+			ResetTempTableNamespace(InvalidOid);
 			break;
 
 		default:
@@ -73,6 +73,6 @@ DiscardAll(bool isTopLevel)
 	Async_UnlistenAll();
 	LockReleaseAll(USER_LOCKMETHOD, true);
 	ResetPlanCache();
-	ResetTempTableNamespace();
+	ResetTempTableNamespace(InvalidOid);
 	ResetSequenceCaches();
 }
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index b945b15..1696500 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,9 +30,11 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
+#include "utils/memutils.h"
 #include "utils/snapmgr.h"
 #include "utils/timestamp.h"
 
@@ -43,9 +45,7 @@
  * The keys for this hash table are the arguments to PREPARE and EXECUTE
  * (statement names); the entries are PreparedStatement structs.
  */
-static HTAB *prepared_queries = NULL;
-
-static void InitQueryHashTable(void);
+static HTAB *InitQueryHashTable(MemoryContext mcxt);
 static ParamListInfo EvaluateParams(PreparedStatement *pstmt, List *params,
 			   const char *queryString, EState *estate);
 static Datum build_regtype_array(Oid *param_types, int num_params);
@@ -427,20 +427,43 @@ EvaluateParams(PreparedStatement *pstmt, List *params,
 /*
  * Initialize query hash table upon first use.
  */
-static void
-InitQueryHashTable(void)
+static HTAB *
+InitQueryHashTable(MemoryContext mcxt)
 {
-	HASHCTL		hash_ctl;
+	HTAB		   *res;
+	MemoryContext	old_mcxt;
+	HASHCTL			hash_ctl;
 
 	MemSet(&hash_ctl, 0, sizeof(hash_ctl));
 
 	hash_ctl.keysize = NAMEDATALEN;
 	hash_ctl.entrysize = sizeof(PreparedStatement);
+	hash_ctl.hcxt = mcxt;
+
+	old_mcxt = MemoryContextSwitchTo(mcxt);
+	res = hash_create("Prepared Queries", 32, &hash_ctl, HASH_ELEM | HASH_CONTEXT);
+	MemoryContextSwitchTo(old_mcxt);
 
-	prepared_queries = hash_create("Prepared Queries",
-								   32,
-								   &hash_ctl,
-								   HASH_ELEM);
+	return res;
+}
+
+static HTAB *
+get_prepared_queries_htab(bool init)
+{
+	static HTAB *prepared_queries = NULL;
+
+	if (ActiveSession)
+	{
+		if (init && !ActiveSession->prepared_queries)
+			ActiveSession->prepared_queries = InitQueryHashTable(ActiveSession->memory);
+		return ActiveSession->prepared_queries;
+	}
+
+	/* Initialize the global hash table, if necessary */
+	if (init && !prepared_queries)
+		prepared_queries = InitQueryHashTable(TopMemoryContext);
+
+	return prepared_queries;
 }
 
 /*
@@ -458,12 +481,9 @@ StorePreparedStatement(const char *stmt_name,
 	TimestampTz cur_ts = GetCurrentStatementStartTimestamp();
 	bool		found;
 
-	/* Initialize the hash table, if necessary */
-	if (!prepared_queries)
-		InitQueryHashTable();
 
 	/* Add entry to hash table */
-	entry = (PreparedStatement *) hash_search(prepared_queries,
+	entry = (PreparedStatement *) hash_search(get_prepared_queries_htab(true),
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
@@ -495,13 +515,14 @@ PreparedStatement *
 FetchPreparedStatement(const char *stmt_name, bool throwError)
 {
 	PreparedStatement *entry;
+	HTAB			  *queries = get_prepared_queries_htab(false);
 
 	/*
 	 * If the hash table hasn't been initialized, it can't be storing
 	 * anything, therefore it couldn't possibly store our plan.
 	 */
-	if (prepared_queries)
-		entry = (PreparedStatement *) hash_search(prepared_queries,
+	if (queries)
+		entry = (PreparedStatement *) hash_search(queries,
 												  stmt_name,
 												  HASH_FIND,
 												  NULL);
@@ -579,7 +600,11 @@ DeallocateQuery(DeallocateStmt *stmt)
 void
 DropPreparedStatement(const char *stmt_name, bool showError)
 {
-	PreparedStatement *entry;
+	PreparedStatement	*entry;
+	HTAB				*queries = get_prepared_queries_htab(false);
+
+	if (!queries)
+		return;
 
 	/* Find the query's hash table entry; raise error if wanted */
 	entry = FetchPreparedStatement(stmt_name, showError);
@@ -590,7 +615,7 @@ DropPreparedStatement(const char *stmt_name, bool showError)
 		DropCachedPlan(entry->plansource);
 
 		/* Now we can remove the hash table entry */
-		hash_search(prepared_queries, entry->stmt_name, HASH_REMOVE, NULL);
+		hash_search(queries, entry->stmt_name, HASH_REMOVE, NULL);
 	}
 }
 
@@ -602,20 +627,21 @@ DropAllPreparedStatements(void)
 {
 	HASH_SEQ_STATUS seq;
 	PreparedStatement *entry;
+	HTAB			  *queries = get_prepared_queries_htab(false);
 
 	/* nothing cached */
-	if (!prepared_queries)
+	if (!queries)
 		return;
 
 	/* walk over cache */
-	hash_seq_init(&seq, prepared_queries);
+	hash_seq_init(&seq, queries);
 	while ((entry = hash_seq_search(&seq)) != NULL)
 	{
 		/* Release the plancache entry */
 		DropCachedPlan(entry->plansource);
 
 		/* Now we can remove the hash table entry */
-		hash_search(prepared_queries, entry->stmt_name, HASH_REMOVE, NULL);
+		hash_search(queries, entry->stmt_name, HASH_REMOVE, NULL);
 	}
 }
 
@@ -710,10 +736,11 @@ Datum
 pg_prepared_statement(PG_FUNCTION_ARGS)
 {
 	ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
-	TupleDesc	tupdesc;
+	TupleDesc		tupdesc;
 	Tuplestorestate *tupstore;
-	MemoryContext per_query_ctx;
-	MemoryContext oldcontext;
+	MemoryContext	per_query_ctx;
+	MemoryContext	oldcontext;
+	HTAB		   *queries;
 
 	/* check to see if caller supports us returning a tuplestore */
 	if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
@@ -757,13 +784,13 @@ pg_prepared_statement(PG_FUNCTION_ARGS)
 	/* generate junk in short-term context */
 	MemoryContextSwitchTo(oldcontext);
 
-	/* hash table might be uninitialized */
-	if (prepared_queries)
+	queries = get_prepared_queries_htab(false);
+	if (queries)
 	{
 		HASH_SEQ_STATUS hash_seq;
 		PreparedStatement *prep_stmt;
 
-		hash_seq_init(&hash_seq, prepared_queries);
+		hash_seq_init(&hash_seq, queries);
 		while ((prep_stmt = hash_seq_search(&hash_seq)) != NULL)
 		{
 			Datum		values[5];
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 89122d4..7843d9d 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -90,8 +90,6 @@ static HTAB *seqhashtab = NULL; /* hash table for SeqTable items */
  * last_used_seq is updated by nextval() to point to the last used
  * sequence.
  */
-static SeqTableData *last_used_seq = NULL;
-
 static void fill_seq_with_data(Relation rel, HeapTuple tuple);
 static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
diff --git a/src/backend/libpq/be-secure.c b/src/backend/libpq/be-secure.c
index d349d7c..3afacee 100644
--- a/src/backend/libpq/be-secure.c
+++ b/src/backend/libpq/be-secure.c
@@ -144,6 +144,7 @@ secure_read(Port *port, void *ptr, size_t len)
 {
 	ssize_t		n;
 	int			waitfor;
+	WaitEventSet	*waitset = pq_get_current_waitset();
 
 retry:
 #ifdef USE_SSL
@@ -166,9 +167,9 @@ retry:
 
 		Assert(waitfor);
 
-		ModifyWaitEvent(FeBeWaitSet, 0, waitfor, NULL);
+		ModifyWaitEvent(waitset, 0, waitfor, NULL);
 
-		WaitEventSetWait(FeBeWaitSet, -1 /* no timeout */ , &event, 1,
+		WaitEventSetWait(waitset, -1 /* no timeout */ , &event, 1,
 						 WAIT_EVENT_CLIENT_READ);
 
 		/*
@@ -247,6 +248,7 @@ secure_write(Port *port, void *ptr, size_t len)
 {
 	ssize_t		n;
 	int			waitfor;
+	WaitEventSet	*waitset = pq_get_current_waitset();
 
 retry:
 	waitfor = 0;
@@ -268,9 +270,9 @@ retry:
 
 		Assert(waitfor);
 
-		ModifyWaitEvent(FeBeWaitSet, 0, waitfor, NULL);
+		ModifyWaitEvent(waitset, 0, waitfor, NULL);
 
-		WaitEventSetWait(FeBeWaitSet, -1 /* no timeout */ , &event, 1,
+		WaitEventSetWait(waitset, -1 /* no timeout */ , &event, 1,
 						 WAIT_EVENT_CLIENT_WRITE);
 
 		/* See comments in secure_read. */
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index a4f6d4d..5e33c32 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -13,7 +13,7 @@
  * copy is aborted by an ereport(ERROR), we need to close out the copy so that
  * the frontend gets back into sync.  Therefore, these routines have to be
  * aware of COPY OUT state.  (New COPY-OUT is message-based and does *not*
- * set the DoingCopyOut flag.)
+ * set the is_doing_copyout flag.)
  *
  * NOTE: generally, it's a bad idea to emit outgoing messages directly with
  * pq_putbytes(), especially if the message would require multiple calls
@@ -87,12 +87,14 @@
 #ifdef _MSC_VER					/* mstcpip.h is missing on mingw */
 #include <mstcpip.h>
 #endif
+#include <execinfo.h>
 
 #include "common/ip.h"
 #include "libpq/libpq.h"
 #include "miscadmin.h"
 #include "port/pg_bswap.h"
 #include "storage/ipc.h"
+#include "storage/proc.h"
 #include "utils/guc.h"
 #include "utils/memutils.h"
 
@@ -134,23 +136,6 @@ static List *sock_paths = NIL;
 #define PQ_SEND_BUFFER_SIZE 8192
 #define PQ_RECV_BUFFER_SIZE 8192
 
-static char *PqSendBuffer;
-static int	PqSendBufferSize;	/* Size send buffer */
-static int	PqSendPointer;		/* Next index to store a byte in PqSendBuffer */
-static int	PqSendStart;		/* Next index to send a byte in PqSendBuffer */
-
-static char PqRecvBuffer[PQ_RECV_BUFFER_SIZE];
-static int	PqRecvPointer;		/* Next index to read a byte from PqRecvBuffer */
-static int	PqRecvLength;		/* End of data available in PqRecvBuffer */
-
-/*
- * Message status
- */
-static bool PqCommBusy;			/* busy sending data to the client */
-static bool PqCommReadingMsg;	/* in the middle of reading a message */
-static bool DoingCopyOut;		/* in old-protocol COPY OUT processing */
-
-
 /* Internal functions */
 static void socket_comm_reset(void);
 static void socket_close(int code, Datum arg);
@@ -181,28 +166,55 @@ static PQcommMethods PqCommSocketMethods = {
 	socket_endcopyout
 };
 
-PQcommMethods *PqCommMethods = &PqCommSocketMethods;
+/* These variables used to be global */
+struct PQcommState {
+	Port		   *port;
+	MemoryContext	mcxt;
 
-WaitEventSet *FeBeWaitSet;
+	/* Message status */
+	bool	is_busy;			/* busy sending data to the client */
+	bool	is_reading;			/* in the middle of reading a message */
+	bool	is_doing_copyout;	/* in old-protocol COPY OUT processing */
+	char   *send_buf;
 
+	int		send_bufsize;	/* Size send buffer */
+	int		send_offset;	/* Next index to store a byte in send_buf */
+	int		send_start;		/* Next index to send a byte in send_buf */
 
-/* --------------------------------
- *		pq_init - initialize libpq at backend startup
- * --------------------------------
+	char	recv_buf[PQ_RECV_BUFFER_SIZE];
+	int		recv_offset;	/* Next index to read a byte from pqstate->recv_buf */
+	int		recv_len;		/* End of data available in pqstate->recv_buf */
+
+	/* Wait events set */
+	WaitEventSet *wait_events;
+};
+
+static struct PQcommState *pqstate = NULL;
+PQcommMethods *PqCommMethods = &PqCommSocketMethods;
+
+/*
+ * Create common wait event for a backend
  */
-void
-pq_init(void)
+WaitEventSet *
+pq_create_backend_event_set(MemoryContext mcxt, Port *port,
+							bool onlySock)
 {
-	/* initialize state variables */
-	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
-	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
-	PqCommBusy = false;
-	PqCommReadingMsg = false;
-	DoingCopyOut = false;
+	WaitEventSet *result;
+	int				nevents = onlySock ? 1 : 3;
+
+	result = CreateWaitEventSet(mcxt, nevents);
+
+	AddWaitEventToSet(result, WL_SOCKET_WRITEABLE, port->sock,
+					  NULL, NULL);
+
+	if (!onlySock)
+	{
+		AddWaitEventToSet(result, WL_LATCH_SET, -1, MyLatch, NULL);
+		AddWaitEventToSet(result, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
+		/* set up process-exit hook to close the socket */
+		on_proc_exit(socket_close, 0);
+	}
 
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
@@ -215,16 +227,65 @@ pq_init(void)
 	 * infinite recursion.
 	 */
 #ifndef WIN32
-	if (!pg_set_noblock(MyProcPort->sock))
+	if (!pg_set_noblock(port->sock))
 		ereport(COMMERROR,
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
-	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
-	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
-					  NULL, NULL);
-	AddWaitEventToSet(FeBeWaitSet, WL_LATCH_SET, -1, MyLatch, NULL);
-	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
+	return result;
+}
+
+/* --------------------------------
+ *		pq_init - initialize libpq at backend startup
+ * --------------------------------
+ */
+void *
+pq_init(MemoryContext mcxt)
+{
+	struct PQcommState *state =
+		MemoryContextAllocZero(mcxt, sizeof(struct PQcommState));
+
+	/* initialize state variables */
+	state->mcxt = mcxt;
+
+	state->send_bufsize = PQ_SEND_BUFFER_SIZE;
+	state->send_buf = MemoryContextAlloc(mcxt, state->send_bufsize);
+	state->send_offset = state->send_start = state->recv_offset = state->recv_len = 0;
+	state->is_busy = false;
+	state->is_reading = false;
+	state->is_doing_copyout = false;
+
+	state->wait_events = NULL;
+	return (void *) state;
+}
+
+void
+pq_set_current_state(void *state, Port *port, WaitEventSet *set)
+{
+	pqstate = (struct PQcommState *) state;
+
+	if (pqstate)
+	{
+		pq_reset();
+		pqstate->port = port;
+		pqstate->wait_events = set;
+	}
+}
+
+WaitEventSet *
+pq_get_current_waitset(void)
+{
+	return pqstate ? pqstate->wait_events : NULL;
+}
+
+void
+pq_reset(void)
+{
+	pqstate->send_offset = pqstate->send_start = 0;
+	pqstate->recv_offset = pqstate->recv_len = 0;
+	pqstate->is_busy = false;
+	pqstate->is_reading = false;
+	pqstate->is_doing_copyout = false;
 }
 
 /* --------------------------------
@@ -239,7 +300,7 @@ static void
 socket_comm_reset(void)
 {
 	/* Do not throw away pending data, but do reset the busy flag */
-	PqCommBusy = false;
+	pqstate->is_busy = false;
 	/* We can abort any old-style COPY OUT, too */
 	pq_endcopyout(true);
 }
@@ -255,8 +316,8 @@ socket_comm_reset(void)
 static void
 socket_close(int code, Datum arg)
 {
-	/* Nothing to do in a standalone backend, where MyProcPort is NULL. */
-	if (MyProcPort != NULL)
+	/* Nothing to do in a standalone backend, where pqstate->port is NULL. */
+	if (pqstate->port != NULL)
 	{
 #if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
 #ifdef ENABLE_GSS
@@ -267,11 +328,11 @@ socket_close(int code, Datum arg)
 		 * BackendInitialize(), because pg_GSS_recvauth() makes first use of
 		 * "ctx" and "cred".
 		 */
-		if (MyProcPort->gss->ctx != GSS_C_NO_CONTEXT)
-			gss_delete_sec_context(&min_s, &MyProcPort->gss->ctx, NULL);
+		if (pqstate->port->gss->ctx != GSS_C_NO_CONTEXT)
+			gss_delete_sec_context(&min_s, &pqstate->port->gss->ctx, NULL);
 
-		if (MyProcPort->gss->cred != GSS_C_NO_CREDENTIAL)
-			gss_release_cred(&min_s, &MyProcPort->gss->cred);
+		if (pqstate->port->gss->cred != GSS_C_NO_CREDENTIAL)
+			gss_release_cred(&min_s, &pqstate->port->gss->cred);
 #endif							/* ENABLE_GSS */
 
 		/*
@@ -279,14 +340,14 @@ socket_close(int code, Datum arg)
 		 * postmaster child free this, doing so is safe when interrupting
 		 * BackendInitialize().
 		 */
-		free(MyProcPort->gss);
+		free(pqstate->port->gss);
 #endif							/* ENABLE_GSS || ENABLE_SSPI */
 
 		/*
 		 * Cleanly shut down SSL layer.  Nowhere else does a postmaster child
 		 * call this, so this is safe when interrupting BackendInitialize().
 		 */
-		secure_close(MyProcPort);
+		secure_close(pqstate->port);
 
 		/*
 		 * Formerly we did an explicit close() here, but it seems better to
@@ -298,7 +359,7 @@ socket_close(int code, Datum arg)
 		 * We do set sock to PGINVALID_SOCKET to prevent any further I/O,
 		 * though.
 		 */
-		MyProcPort->sock = PGINVALID_SOCKET;
+		pqstate->port->sock = PGINVALID_SOCKET;
 	}
 }
 
@@ -921,12 +982,12 @@ RemoveSocketFiles(void)
 static void
 socket_set_nonblocking(bool nonblocking)
 {
-	if (MyProcPort == NULL)
+	if (pqstate->port == NULL)
 		ereport(ERROR,
 				(errcode(ERRCODE_CONNECTION_DOES_NOT_EXIST),
 				 errmsg("there is no client connection")));
 
-	MyProcPort->noblock = nonblocking;
+	pqstate->port->noblock = nonblocking;
 }
 
 /* --------------------------------
@@ -938,30 +999,30 @@ socket_set_nonblocking(bool nonblocking)
 static int
 pq_recvbuf(void)
 {
-	if (PqRecvPointer > 0)
+	if (pqstate->recv_offset > 0)
 	{
-		if (PqRecvLength > PqRecvPointer)
+		if (pqstate->recv_len > pqstate->recv_offset)
 		{
 			/* still some unread data, left-justify it in the buffer */
-			memmove(PqRecvBuffer, PqRecvBuffer + PqRecvPointer,
-					PqRecvLength - PqRecvPointer);
-			PqRecvLength -= PqRecvPointer;
-			PqRecvPointer = 0;
+			memmove(pqstate->recv_buf, pqstate->recv_buf + pqstate->recv_offset,
+					pqstate->recv_len - pqstate->recv_offset);
+			pqstate->recv_len -= pqstate->recv_offset;
+			pqstate->recv_offset = 0;
 		}
 		else
-			PqRecvLength = PqRecvPointer = 0;
+			pqstate->recv_len = pqstate->recv_offset = 0;
 	}
 
 	/* Ensure that we're in blocking mode */
 	socket_set_nonblocking(false);
 
-	/* Can fill buffer from PqRecvLength and upwards */
+	/* Can fill buffer from pqstate->recv_len and upwards */
 	for (;;)
 	{
 		int			r;
 
-		r = secure_read(MyProcPort, PqRecvBuffer + PqRecvLength,
-						PQ_RECV_BUFFER_SIZE - PqRecvLength);
+		r = secure_read(pqstate->port, pqstate->recv_buf + pqstate->recv_len,
+						PQ_RECV_BUFFER_SIZE - pqstate->recv_len);
 
 		if (r < 0)
 		{
@@ -987,7 +1048,7 @@ pq_recvbuf(void)
 			return EOF;
 		}
 		/* r contains number of bytes read, so just incr length */
-		PqRecvLength += r;
+		pqstate->recv_len += r;
 		return 0;
 	}
 }
@@ -999,14 +1060,14 @@ pq_recvbuf(void)
 int
 pq_getbyte(void)
 {
-	Assert(PqCommReadingMsg);
+	Assert(pqstate->is_reading);
 
-	while (PqRecvPointer >= PqRecvLength)
+	while (pqstate->recv_offset >= pqstate->recv_len)
 	{
 		if (pq_recvbuf())		/* If nothing in buffer, then recv some */
 			return EOF;			/* Failed to recv data */
 	}
-	return (unsigned char) PqRecvBuffer[PqRecvPointer++];
+	return (unsigned char) pqstate->recv_buf[pqstate->recv_offset++];
 }
 
 /* --------------------------------
@@ -1018,14 +1079,25 @@ pq_getbyte(void)
 int
 pq_peekbyte(void)
 {
-	Assert(PqCommReadingMsg);
+	Assert(pqstate->is_reading);
 
-	while (PqRecvPointer >= PqRecvLength)
+	while (pqstate->recv_offset >= pqstate->recv_len)
 	{
 		if (pq_recvbuf())		/* If nothing in buffer, then recv some */
 			return EOF;			/* Failed to recv data */
 	}
-	return (unsigned char) PqRecvBuffer[PqRecvPointer];
+	return (unsigned char) pqstate->recv_buf[pqstate->recv_offset];
+}
+
+/* --------------------------------
+ *		pq_available_bytes	- get number of buffered bytes available for reading.
+ *
+ * --------------------------------
+ */
+int
+pq_available_bytes(void)
+{
+	return pqstate->recv_len - pqstate->recv_offset;
 }
 
 /* --------------------------------
@@ -1041,18 +1113,18 @@ pq_getbyte_if_available(unsigned char *c)
 {
 	int			r;
 
-	Assert(PqCommReadingMsg);
+	Assert(pqstate->is_reading);
 
-	if (PqRecvPointer < PqRecvLength)
+	if (pqstate->recv_offset < pqstate->recv_len)
 	{
-		*c = PqRecvBuffer[PqRecvPointer++];
+		*c = pqstate->recv_buf[pqstate->recv_offset++];
 		return 1;
 	}
 
 	/* Put the socket into non-blocking mode */
 	socket_set_nonblocking(true);
 
-	r = secure_read(MyProcPort, c, 1);
+	r = secure_read(pqstate->port, c, 1);
 	if (r < 0)
 	{
 		/*
@@ -1095,20 +1167,20 @@ pq_getbytes(char *s, size_t len)
 {
 	size_t		amount;
 
-	Assert(PqCommReadingMsg);
+	Assert(pqstate->is_reading);
 
 	while (len > 0)
 	{
-		while (PqRecvPointer >= PqRecvLength)
+		while (pqstate->recv_offset >= pqstate->recv_len)
 		{
 			if (pq_recvbuf())	/* If nothing in buffer, then recv some */
 				return EOF;		/* Failed to recv data */
 		}
-		amount = PqRecvLength - PqRecvPointer;
+		amount = pqstate->recv_len - pqstate->recv_offset;
 		if (amount > len)
 			amount = len;
-		memcpy(s, PqRecvBuffer + PqRecvPointer, amount);
-		PqRecvPointer += amount;
+		memcpy(s, pqstate->recv_buf + pqstate->recv_offset, amount);
+		pqstate->recv_offset += amount;
 		s += amount;
 		len -= amount;
 	}
@@ -1129,19 +1201,19 @@ pq_discardbytes(size_t len)
 {
 	size_t		amount;
 
-	Assert(PqCommReadingMsg);
+	Assert(pqstate->is_reading);
 
 	while (len > 0)
 	{
-		while (PqRecvPointer >= PqRecvLength)
+		while (pqstate->recv_offset >= pqstate->recv_len)
 		{
 			if (pq_recvbuf())	/* If nothing in buffer, then recv some */
 				return EOF;		/* Failed to recv data */
 		}
-		amount = PqRecvLength - PqRecvPointer;
+		amount = pqstate->recv_len - pqstate->recv_offset;
 		if (amount > len)
 			amount = len;
-		PqRecvPointer += amount;
+		pqstate->recv_offset += amount;
 		len -= amount;
 	}
 	return 0;
@@ -1167,35 +1239,35 @@ pq_getstring(StringInfo s)
 {
 	int			i;
 
-	Assert(PqCommReadingMsg);
+	Assert(pqstate->is_reading);
 
 	resetStringInfo(s);
 
 	/* Read until we get the terminating '\0' */
 	for (;;)
 	{
-		while (PqRecvPointer >= PqRecvLength)
+		while (pqstate->recv_offset >= pqstate->recv_len)
 		{
 			if (pq_recvbuf())	/* If nothing in buffer, then recv some */
 				return EOF;		/* Failed to recv data */
 		}
 
-		for (i = PqRecvPointer; i < PqRecvLength; i++)
+		for (i = pqstate->recv_offset; i < pqstate->recv_len; i++)
 		{
-			if (PqRecvBuffer[i] == '\0')
+			if (pqstate->recv_buf[i] == '\0')
 			{
 				/* include the '\0' in the copy */
-				appendBinaryStringInfo(s, PqRecvBuffer + PqRecvPointer,
-									   i - PqRecvPointer + 1);
-				PqRecvPointer = i + 1;	/* advance past \0 */
+				appendBinaryStringInfo(s, pqstate->recv_buf + pqstate->recv_offset,
+									   i - pqstate->recv_offset + 1);
+				pqstate->recv_offset = i + 1;	/* advance past \0 */
 				return 0;
 			}
 		}
 
 		/* If we're here we haven't got the \0 in the buffer yet. */
-		appendBinaryStringInfo(s, PqRecvBuffer + PqRecvPointer,
-							   PqRecvLength - PqRecvPointer);
-		PqRecvPointer = PqRecvLength;
+		appendBinaryStringInfo(s, pqstate->recv_buf + pqstate->recv_offset,
+							   pqstate->recv_len - pqstate->recv_offset);
+		pqstate->recv_offset = pqstate->recv_len;
 	}
 }
 
@@ -1213,12 +1285,12 @@ pq_startmsgread(void)
 	 * There shouldn't be a read active already, but let's check just to be
 	 * sure.
 	 */
-	if (PqCommReadingMsg)
+	if (pqstate->is_reading)
 		ereport(FATAL,
 				(errcode(ERRCODE_PROTOCOL_VIOLATION),
 				 errmsg("terminating connection because protocol synchronization was lost")));
 
-	PqCommReadingMsg = true;
+	pqstate->is_reading = true;
 }
 
 
@@ -1233,9 +1305,9 @@ pq_startmsgread(void)
 void
 pq_endmsgread(void)
 {
-	Assert(PqCommReadingMsg);
+	Assert(pqstate->is_reading);
 
-	PqCommReadingMsg = false;
+	pqstate->is_reading = false;
 }
 
 /* --------------------------------
@@ -1249,7 +1321,7 @@ pq_endmsgread(void)
 bool
 pq_is_reading_msg(void)
 {
-	return PqCommReadingMsg;
+	return pqstate && pqstate->is_reading;
 }
 
 /* --------------------------------
@@ -1273,7 +1345,7 @@ pq_getmessage(StringInfo s, int maxlen)
 {
 	int32		len;
 
-	Assert(PqCommReadingMsg);
+	Assert(pqstate->is_reading);
 
 	resetStringInfo(s);
 
@@ -1318,7 +1390,7 @@ pq_getmessage(StringInfo s, int maxlen)
 						 errmsg("incomplete message from client")));
 
 			/* we discarded the rest of the message so we're back in sync. */
-			PqCommReadingMsg = false;
+			pqstate->is_reading = false;
 			PG_RE_THROW();
 		}
 		PG_END_TRY();
@@ -1337,7 +1409,7 @@ pq_getmessage(StringInfo s, int maxlen)
 	}
 
 	/* finished reading the message. */
-	PqCommReadingMsg = false;
+	pqstate->is_reading = false;
 
 	return 0;
 }
@@ -1355,13 +1427,13 @@ pq_putbytes(const char *s, size_t len)
 	int			res;
 
 	/* Should only be called by old-style COPY OUT */
-	Assert(DoingCopyOut);
+	Assert(pqstate->is_doing_copyout);
 	/* No-op if reentrant call */
-	if (PqCommBusy)
+	if (pqstate->is_busy)
 		return 0;
-	PqCommBusy = true;
+	pqstate->is_busy = true;
 	res = internal_putbytes(s, len);
-	PqCommBusy = false;
+	pqstate->is_busy = false;
 	return res;
 }
 
@@ -1373,23 +1445,24 @@ internal_putbytes(const char *s, size_t len)
 	while (len > 0)
 	{
 		/* If buffer is full, then flush it out */
-		if (PqSendPointer >= PqSendBufferSize)
+		if (pqstate->send_offset >= pqstate->send_bufsize)
 		{
 			socket_set_nonblocking(false);
 			if (internal_flush())
 				return EOF;
 		}
-		amount = PqSendBufferSize - PqSendPointer;
+		amount = pqstate->send_bufsize - pqstate->send_offset;
 		if (amount > len)
 			amount = len;
-		memcpy(PqSendBuffer + PqSendPointer, s, amount);
-		PqSendPointer += amount;
+		memcpy(pqstate->send_buf + pqstate->send_offset, s, amount);
+		pqstate->send_offset += amount;
 		s += amount;
 		len -= amount;
 	}
 	return 0;
 }
 
 /* --------------------------------
  *		socket_flush		- flush pending output
  *
@@ -1401,13 +1474,17 @@ socket_flush(void)
 {
 	int			res;
 
+	if (pqstate->port->sock == PGINVALID_SOCKET)
+		return 0;
+
 	/* No-op if reentrant call */
-	if (PqCommBusy)
+	if (pqstate->is_busy)
 		return 0;
-	PqCommBusy = true;
+
+	pqstate->is_busy = true;
 	socket_set_nonblocking(false);
 	res = internal_flush();
-	PqCommBusy = false;
+	pqstate->is_busy = false;
 	return res;
 }
 
@@ -1423,14 +1500,14 @@ internal_flush(void)
 {
 	static int	last_reported_send_errno = 0;
 
-	char	   *bufptr = PqSendBuffer + PqSendStart;
-	char	   *bufend = PqSendBuffer + PqSendPointer;
+	char	   *bufptr = pqstate->send_buf + pqstate->send_start;
+	char	   *bufend = pqstate->send_buf + pqstate->send_offset;
 
 	while (bufptr < bufend)
 	{
 		int			r;
 
-		r = secure_write(MyProcPort, bufptr, bufend - bufptr);
+		r = secure_write(pqstate->port, bufptr, bufend - bufptr);
 
 		if (r <= 0)
 		{
@@ -1470,7 +1547,7 @@ internal_flush(void)
 			 * flag that'll cause the next CHECK_FOR_INTERRUPTS to terminate
 			 * the connection.
 			 */
-			PqSendStart = PqSendPointer = 0;
+			pqstate->send_start = pqstate->send_offset = 0;
 			ClientConnectionLost = 1;
 			InterruptPending = 1;
 			return EOF;
@@ -1478,10 +1555,10 @@ internal_flush(void)
 
 		last_reported_send_errno = 0;	/* reset after any successful send */
 		bufptr += r;
-		PqSendStart += r;
+		pqstate->send_start += r;
 	}
 
-	PqSendStart = PqSendPointer = 0;
+	pqstate->send_start = pqstate->send_offset = 0;
 	return 0;
 }
 
@@ -1496,20 +1573,23 @@ socket_flush_if_writable(void)
 {
 	int			res;
 
+	if (pqstate->port->sock == PGINVALID_SOCKET)
+		return 0;
+
 	/* Quick exit if nothing to do */
-	if (PqSendPointer == PqSendStart)
+	if (pqstate->send_offset == pqstate->send_start)
 		return 0;
 
 	/* No-op if reentrant call */
-	if (PqCommBusy)
+	if (pqstate->is_busy)
 		return 0;
 
 	/* Temporarily put the socket into non-blocking mode */
 	socket_set_nonblocking(true);
 
-	PqCommBusy = true;
+	pqstate->is_busy = true;
 	res = internal_flush();
-	PqCommBusy = false;
+	pqstate->is_busy = false;
 	return res;
 }
 
@@ -1520,7 +1600,7 @@ socket_flush_if_writable(void)
 static bool
 socket_is_send_pending(void)
 {
-	return (PqSendStart < PqSendPointer);
+	return (pqstate->send_start < pqstate->send_offset);
 }
 
 /* --------------------------------
@@ -1559,9 +1639,9 @@ socket_is_send_pending(void)
 static int
 socket_putmessage(char msgtype, const char *s, size_t len)
 {
-	if (DoingCopyOut || PqCommBusy)
+	if (pqstate->is_doing_copyout || pqstate->is_busy)
 		return 0;
-	PqCommBusy = true;
+	pqstate->is_busy = true;
 	if (msgtype)
 		if (internal_putbytes(&msgtype, 1))
 			goto fail;
@@ -1575,11 +1655,11 @@ socket_putmessage(char msgtype, const char *s, size_t len)
 	}
 	if (internal_putbytes(s, len))
 		goto fail;
-	PqCommBusy = false;
+	pqstate->is_busy = false;
 	return 0;
 
 fail:
-	PqCommBusy = false;
+	pqstate->is_busy = false;
 	return EOF;
 }
 
@@ -1599,11 +1679,11 @@ socket_putmessage_noblock(char msgtype, const char *s, size_t len)
 	 * Ensure we have enough space in the output buffer for the message header
 	 * as well as the message itself.
 	 */
-	required = PqSendPointer + 1 + 4 + len;
-	if (required > PqSendBufferSize)
+	required = pqstate->send_offset + 1 + 4 + len;
+	if (required > pqstate->send_bufsize)
 	{
-		PqSendBuffer = repalloc(PqSendBuffer, required);
-		PqSendBufferSize = required;
+		pqstate->send_buf = repalloc(pqstate->send_buf, required);
+		pqstate->send_bufsize = required;
 	}
 	res = pq_putmessage(msgtype, s, len);
 	Assert(res == 0);			/* should not fail when the message fits in
@@ -1619,7 +1699,7 @@ socket_putmessage_noblock(char msgtype, const char *s, size_t len)
 static void
 socket_startcopyout(void)
 {
-	DoingCopyOut = true;
+	pqstate->is_doing_copyout = true;
 }
 
 /* --------------------------------
@@ -1635,12 +1715,12 @@ socket_startcopyout(void)
 static void
 socket_endcopyout(bool errorAbort)
 {
-	if (!DoingCopyOut)
+	if (!pqstate->is_doing_copyout)
 		return;
 	if (errorAbort)
 		pq_putbytes("\n\n\\.\n", 5);
 	/* in non-error case, copy.c will have emitted the terminator line */
-	DoingCopyOut = false;
+	pqstate->is_doing_copyout = false;
 }
 
 /*
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index aba1e92..56ec998 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o dynloader.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o dynloader.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..b69cc78
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,158 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
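+ *
+ * On Unix the descriptor is passed as SCM_RIGHTS ancillary data over a
+ * Unix-domain socket, so the kernel installs a duplicate of the descriptor
+ * in the receiving process.  On Windows we use WSADuplicateSocket() instead,
+ * which fills a WSAPROTOCOL_INFO structure from which the target process
+ * can reconstruct the socket.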
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, &dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("failed to send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr *cmsg;
+	char		buf[CMSG_SPACE(sizeof(sock))];
+
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	if (sendmsg(chan, &msg, 0) < 0)
+		return PGINVALID_SOCKET;
+
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
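+ *
+ * The received descriptor is switched to non-blocking mode before being
+ * returned, since a pooled backend multiplexes many client sockets in one
+ * WaitEventSet and must not block on any single session.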
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, &src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("failed to receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one.  (This would happen when inheritance actually
+	 * works.)
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	struct msghdr msg = {0};
+	char		c_buffer[256];
+	char		m_buffer[256];
+	struct iovec io;
+	struct cmsghdr *cmsg;
+	pgsocket	sock;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	if (recvmsg(chan, &msg, 0) < 0)
+		return PGINVALID_SOCKET;
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+
+	pg_set_noblock(sock);
+
+	return sock;
+#endif
+}
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index f4356fe..7fd901f 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -726,3 +726,65 @@ pgwin32_socket_strerror(int err)
 	}
 	return wserrbuf;
 }
+
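+/*
+ * socketpair() emulation for Windows.  Windows has no native socketpair(),
+ * so we create a listening socket on the loopback interface, connect to it
+ * and accept the connection, which yields two connected stream sockets,
+ * just like socketpair(AF_INET, SOCK_STREAM, 0, socks) would.
+ */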
+int
+pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+	union
+	{
+		struct sockaddr_in inaddr;
+		struct sockaddr addr;
+	}			a;
+	SOCKET		listener;
+	int			e;
+	socklen_t	addrlen = sizeof(a.inaddr);
+	DWORD		flags = 0;
+	int			reuse = 1;
+
+	socks[0] = socks[1] = -1;
+
+	listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+	if (listener == -1)
+		return SOCKET_ERROR;
+
+	memset(&a, 0, sizeof(a));
+	a.inaddr.sin_family = AF_INET;
+	a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+	a.inaddr.sin_port = 0;
+
+	for (;;)
+	{
+		if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+					   (char *) &reuse, (socklen_t) sizeof(reuse)) == -1)
+			break;
+		if (bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		memset(&a, 0, sizeof(a));
+		if (getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+			break;
+		a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+		a.inaddr.sin_family = AF_INET;
+
+		if (listen(listener, 1) == SOCKET_ERROR)
+			break;
+
+		socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+		if (socks[0] == -1)
+			break;
+		if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+			break;
+
+		socks[1] = accept(listener, NULL, NULL);
+		if (socks[1] == -1)
+			break;
+
+		closesocket(listener);
+		return 0;
+	}
+
+	e = WSAGetLastError();
+	closesocket(listener);
+	closesocket(socks[0]);
+	closesocket(socks[1]);
+	WSASetLastError(e);
+	socks[0] = socks[1] = -1;
+	return SOCKET_ERROR;
+}
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 71c2321..b0bd173 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -13,6 +13,7 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 OBJS = autovacuum.o bgworker.o bgwriter.o checkpointer.o fork_process.o \
-	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o
+	connpool.o pgarch.o pgstat.o postmaster.o startup.o syslogger.o \
+	walwriter.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index d2b695e..15b9eb5 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -21,6 +21,7 @@
 #include "port/atomics.h"
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/connpool.h"
 #include "replication/logicallauncher.h"
 #include "replication/logicalworker.h"
 #include "storage/dsm.h"
@@ -129,7 +130,10 @@ static const struct
 	},
 	{
 		"ApplyWorkerMain", ApplyWorkerMain
-	}
+	},
+	{
+		"StartupPacketReaderMain", StartupPacketReaderMain
+	}
 };
 
 /* Private functions. */
diff --git a/src/backend/postmaster/connpool.c b/src/backend/postmaster/connpool.c
new file mode 100644
index 0000000..1a25055
--- /dev/null
+++ b/src/backend/postmaster/connpool.c
@@ -0,0 +1,276 @@
+/*-------------------------------------------------------------------------
+ * connpool.c
+ *	   PostgreSQL connection pool workers.
+ *
+ * Copyright (c) 2018, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *    src/backend/postmaster/connpool.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <signal.h>
+#include <unistd.h>
+
+#include "lib/stringinfo.h"
+#include "libpq/libpq.h"
+#include "libpq/pqformat.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/bgworker.h"
+#include "postmaster/connpool.h"
+#include "postmaster/postmaster.h"
+#include "storage/proc.h"
+#include "utils/memutils.h"
+#include "utils/resowner.h"
+#include "tcop/tcopprot.h"
+
+/*
+ * GUC parameters
+ */
+int			NumConnPoolWorkers = 2;
+
+/*
+ * Global variables
+ */
+ConnPoolWorker	*ConnPoolWorkers;
+
+/*
+ * Signals management
+ */
+static volatile sig_atomic_t shutdown_requested = false;
+static void handle_sigterm(SIGNAL_ARGS);
+
+static void *pqstate;
+
+static void
+handle_sigterm(SIGNAL_ARGS)
+{
+	int save_errno = errno;
+	shutdown_requested = true;
+	SetLatch(&MyProc->procLatch);
+	errno = save_errno;
+}
+
+Size
+ConnPoolShmemSize(void)
+{
+	return MAXALIGN(sizeof(ConnPoolWorker) * NumConnPoolWorkers);
+}
+
+void
+ConnectionPoolWorkersInit(void)
+{
+	int		i;
+	bool	found;
+	Size	size = ConnPoolShmemSize();
+
+	ConnPoolWorkers = ShmemInitStruct("connection pool workers",
+			size, &found);
+
+	if (!found)
+	{
+		MemSet(ConnPoolWorkers, 0, size);
+		for (i = 0; i < NumConnPoolWorkers; i++)
+		{
+			ConnPoolWorker	*worker = &ConnPoolWorkers[i];
+			if (socketpair(AF_UNIX, SOCK_STREAM, 0, worker->pipes) < 0)
+				elog(FATAL, "could not create socket pair for connection pool");
+		}
+	}
+}
+
+/*
+ * Register background workers for startup packet reading.
+ */
+void
+RegisterConnPoolWorkers(void)
+{
+	int					i;
+	BackgroundWorker	bgw;
+
+	if (SessionPoolSize == 0)
+		/* no need to start workers */
+		return;
+
+	for (i = 0; i < NumConnPoolWorkers; i++)
+	{
+		memset(&bgw, 0, sizeof(bgw));
+		bgw.bgw_flags = BGWORKER_SHMEM_ACCESS;
+		bgw.bgw_start_time = BgWorkerStart_PostmasterStart;
+		snprintf(bgw.bgw_library_name, BGW_MAXLEN, "postgres");
+		snprintf(bgw.bgw_function_name, BGW_MAXLEN, "StartupPacketReaderMain");
+		snprintf(bgw.bgw_name, BGW_MAXLEN,
+				 "connection pool worker %d", i + 1);
+		bgw.bgw_restart_time = 3;
+		bgw.bgw_notify_pid = 0;
+		bgw.bgw_main_arg = Int32GetDatum(i);
+
+		RegisterBackgroundWorker(&bgw);
+	}
+
+	elog(LOG, "connection pool workers have been registered");
+}
+
+static void
+resetWorkerState(ConnPoolWorker *worker, Port *port)
+{
+	/* Cleanup */
+	whereToSendOutput = DestNone;
+	if (port != NULL)
+	{
+		if (port->sock != PGINVALID_SOCKET)
+			closesocket(port->sock);
+		if (port->pqcomm_waitset != NULL)
+			FreeWaitEventSet(port->pqcomm_waitset);
+		port = NULL;
+	}
+	pq_set_current_state(pqstate, NULL, NULL);
+}
+
+void
+StartupPacketReaderMain(Datum arg)
+{
+	sigjmp_buf	local_sigjmp_buf;
+	ConnPoolWorker *worker = &ConnPoolWorkers[DatumGetInt32(arg)];
+	MemoryContext	mcxt;
+	int				status;
+	Port		   *port = NULL;
+
+	pqsignal(SIGTERM, handle_sigterm);
+	BackgroundWorkerUnblockSignals();
+
+	mcxt = AllocSetContextCreate(TopMemoryContext,
+								 "temporary context",
+							     ALLOCSET_DEFAULT_SIZES);
+	pqstate = pq_init(TopMemoryContext);
+	worker->pid = MyProcPid;
+	worker->latch = MyLatch;
+	Assert(MyLatch == &MyProc->procLatch);
+
+	MemoryContextSwitchTo(mcxt);
+
+	/* If an exception is encountered, processing resumes here */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Since not using PG_TRY, must reset error stack by hand */
+		error_context_stack = NULL;
+
+		/* Prevent interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log and to the client */
+		EmitErrorReport();
+
+		/*
+		 * Now return to normal top-level context and clear ErrorContext for
+		 * next time.
+		 */
+		MemoryContextSwitchTo(mcxt);
+		FlushErrorState();
+
+		/*
+		 * We only reset the worker state here; the memory will be cleaned
+		 * up on the next cycle.  That's enough for now.
+		 */
+		resetWorkerState(worker, port);
+
+		/* Ready for new sockets */
+		worker->state = CPW_FREE;
+
+		/* Now we can allow interrupts again */
+		RESUME_INTERRUPTS();
+	}
+
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	while (!shutdown_requested)
+	{
+		ListCell	   *lc;
+		int				rc;
+		StringInfoData	buf;
+
+		rc = WaitLatch(&MyProc->procLatch,
+				WL_LATCH_SET | WL_POSTMASTER_DEATH,
+				0, PG_WAIT_EXTENSION);
+
+		if (rc & WL_POSTMASTER_DEATH)
+			break;
+
+		ResetLatch(&MyProc->procLatch);
+
+		if (shutdown_requested)
+			break;
+
+		if (worker->state != CPW_NEW_SOCKET)
+			/* we woke up for other reason */
+			continue;
+
+		/* Set up temporary pq state for startup packet */
+		port = palloc0(sizeof(Port));
+		port->sock = PGINVALID_SOCKET;
+
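+		/* Retry until the postmaster actually delivers the descriptor */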
+		while (port->sock == PGINVALID_SOCKET)
+			port->sock = pg_recv_sock(worker->pipes[1]);
+
+		/* init pqcomm */
+		port->pqcomm_waitset = pq_create_backend_event_set(mcxt, port, true);
+		port->canAcceptConnections = worker->cac_state;
+		pq_set_current_state(pqstate, port, port->pqcomm_waitset);
+		whereToSendOutput = DestRemote;
+
+		/* TODO: deal with timeouts */
+		status = ProcessStartupPacket(port, false, mcxt, ERROR);
+		if (status != STATUS_OK)
+		{
+			worker->state = CPW_FREE;
+			goto cleanup;
+		}
+
+		/* Serialize a port into stringinfo */
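+		/*
+		 * Message layout (private to this patch): protocol version (int4),
+		 * database name and user name (cstrings), GUC option count (int4)
+		 * followed by that many cstrings, and finally an int4 flag telling
+		 * whether a cmdline_options string follows.
+		 */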
+		pq_beginmessage(&buf, 'P');
+		pq_sendint(&buf, port->proto, 4);
+		pq_sendstring(&buf, port->database_name);
+		pq_sendstring(&buf, port->user_name);
+		pq_sendint(&buf, list_length(port->guc_options), 4);
+
+		foreach(lc, port->guc_options)
+		{
+			char *str = (char *) lfirst(lc);
+			pq_sendstring(&buf, str);
+		}
+
+		if (port->cmdline_options)
+		{
+			pq_sendint(&buf, 1, 4);
+			pq_sendstring(&buf, port->cmdline_options);
+		}
+		else
+			pq_sendint(&buf, 0, 4);
+
+		worker->state = CPW_PROCESSED;
+
+		/* send size of data */
+		while ((rc = send(worker->pipes[1], &buf.len, sizeof(buf.len), 0)) < 0 && errno == EINTR);
+
+		if (rc != (int) sizeof(buf.len))
+			elog(ERROR, "could not send data to postmaster");
+
+		/* send the data */
+		while ((rc = send(worker->pipes[1], buf.data, buf.len, 0)) < 0 && errno == EINTR);
+
+		if (rc != buf.len)
+			elog(ERROR, "could not send data to postmaster");
+
+		pfree(buf.data);
+		buf.data = NULL;
+
+cleanup:
+		resetWorkerState(worker, port);
+		MemoryContextReset(mcxt);
+	}
+
+	resetWorkerState(worker, NULL);
+}
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index 8a5b2b3..8bdc988 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -868,7 +868,8 @@ pgstat_report_stat(bool force)
 			PgStat_TableEntry *this_ent;
 
 			/* Shouldn't have any pending transaction-dependent counts */
-			Assert(entry->trans == NULL);
+			if (entry->trans != NULL)
+				continue;
 
 			/*
 			 * Ignore entries that didn't accumulate any actual counts, such
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index a4b53b3..85d6a18 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -76,6 +76,7 @@
 #include <sys/param.h>
 #include <netdb.h>
 #include <limits.h>
+#include <pthread.h>
 
 #ifdef HAVE_SYS_SELECT_H
 #include <sys/select.h>
@@ -114,6 +115,7 @@
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
 #include "postmaster/syslogger.h"
+#include "postmaster/connpool.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
 #include "storage/fd.h"
@@ -121,6 +123,7 @@
 #include "storage/pg_shmem.h"
 #include "storage/pmsignal.h"
 #include "storage/proc.h"
+#include "storage/procarray.h"
 #include "tcop/tcopprot.h"
 #include "utils/builtins.h"
 #include "utils/datetime.h"
@@ -150,6 +153,11 @@
 #define BACKEND_TYPE_WORKER		(BACKEND_TYPE_AUTOVAC | BACKEND_TYPE_BGWORKER)
 
 /*
+ * Initial load average assigned to a backend that has not yet been started
+ * (used in session scheduling for connection pooling)
+ */
+#define INIT_BACKEND_LOAD_AVERAGE 10
+
+/*
  * List of active backends (or child processes anyway; we don't actually
  * know whether a given child has become a backend or is still in the
  * authorization phase).  This is used mainly to keep track of how many
@@ -170,6 +178,7 @@ typedef struct bkend
 	pid_t		pid;			/* process id of backend */
 	int32		cancel_key;		/* cancel key for cancels for this backend */
 	int			child_slot;		/* PMChildSlot for this backend, if any */
+	pgsocket    session_send_sock;  /* write end of the socket pipe used to
+									 * send session socket descriptors to
+									 * this backend process */
 
 	/*
 	 * Flavor of backend or auxiliary process.  Note that BACKEND_TYPE_WALSND
@@ -178,8 +187,13 @@ typedef struct bkend
 	 */
 	int			bkend_type;
 	bool		dead_end;		/* is it going to send an error and quit? */
-	bool		bgworker_notify;	/* gets bgworker start/stop notifications */
+	bool		bgworker_notify;/* gets bgworker start/stop notifications */
 	dlist_node	elem;			/* list link in BackendList */
+	int         session_pool_id;/* identifier of backends session pool */
+	int         worker_id;      /* identifier of worker within session pool */
+	void	   *pool;			/* pool of backends */
+	PGPROC     *proc;           /* PGPROC entry for this backend */
+	uint64      n_sessions;     /* number of sessions scheduled to this backend */
 } Backend;
 
 static dlist_head BackendList = DLIST_STATIC_INIT(BackendList);
@@ -190,7 +204,27 @@ static Backend *ShmemBackendArray;
 
 BackgroundWorker *MyBgworkerEntry = NULL;
 
+struct DatabasePoolKey {
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+};
 
+typedef struct DatabasePool
+{
+	struct DatabasePoolKey key;
+
+	Backend	  **workers;	/* pool backends */
+	int			n_workers;	/* number of launched worker backends
+							   in this pool so far */
+	int			rr_index;	/* index of the current backend, used to
+							 * implement round-robin distribution of
+							 * sessions across backends */
+} DatabasePool;
+
+static struct
+{
+	HTAB			   *pools;
+} PostmasterSessionPool;
 
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
@@ -214,7 +248,7 @@ int			ReservedBackends;
 
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
-static pgsocket ListenSocket[MAXLISTEN];
+static pgsocket ListenSocket[MAXLISTEN + MAX_CONNPOOL_WORKERS];
 
 /*
  * Set by the -o option
@@ -393,15 +427,19 @@ static void unlink_external_pid_file(int status, Datum arg);
 static void getInstallationPaths(const char *argv0);
 static void checkControlFile(void);
 static Port *ConnCreate(int serverFd);
+static Port *PoolConnCreate(pgsocket poolFd, int workerId);
 static void ConnFree(Port *port);
+static void ConnDispatch(Port *port);
 static void reset_shared(int port);
 static void SIGHUP_handler(SIGNAL_ARGS);
+static CAC_state canAcceptConnections(void);
 static void pmdie(SIGNAL_ARGS);
 static void reaper(SIGNAL_ARGS);
 static void sigusr1_handler(SIGNAL_ARGS);
 static void startup_die(SIGNAL_ARGS);
 static void dummy_handler(SIGNAL_ARGS);
 static void StartupPacketTimeoutHandler(void);
+static int BackendStartup(DatabasePool *pool, Port *port);
 static void CleanupBackend(int pid, int exitstatus);
 static bool CleanupBackgroundWorker(int pid, int exitstatus);
 static void HandleChildCrash(int pid, int exitstatus, const char *procname);
@@ -412,13 +450,11 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
-static int	ProcessStartupPacket(Port *port, bool SSLdone);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
 static int	initMasks(fd_set *rmask);
+static void report_postmaster_failure_to_client(Port *port, char const* errmsg);
 static void report_fork_failure_to_client(Port *port, int errnum);
-static CAC_state canAcceptConnections(void);
 static bool RandomCancelKey(int32 *cancel_key);
 static void signal_child(pid_t pid, int signal);
 static bool SignalSomeChildren(int signal, int targets);
@@ -486,6 +522,7 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket sessionsocket;
 	char		DataDir[MAXPGPATH];
 	pgsocket	ListenSocket[MAXLISTEN];
 	int32		MyCancelKey;
@@ -988,6 +1025,11 @@ PostmasterMain(int argc, char *argv[])
 	ApplyLauncherRegister();
 
 	/*
+	 * Register connection pool workers
+	 */
+	RegisterConnPoolWorkers();
+
+	/*
 	 * process any libraries that should be preloaded at postmaster start
 	 */
 	process_shared_preload_libraries();
@@ -1613,6 +1655,177 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+static bool
+IsDedicatedDatabase(char const* dbname)
+{
+	List       *namelist;
+	ListCell   *l;
+	char       *databases;
+	bool       found = false;
+
+	/* Need a modifiable copy of the DedicatedDatabases string */
+	databases = pstrdup(DedicatedDatabases);
+
+	if (!SplitIdentifierString(databases, ',', &namelist))
+		elog(ERROR, "invalid list syntax");
+	foreach(l, namelist)
+	{
+		char *curname = (char *) lfirst(l);
+		if (strcmp(curname, dbname) == 0)
+		{
+			found = true;
+			break;
+		}
+	}
+	list_free(namelist);
+	pfree(databases);
+
+	return found;
+}
+
+/*
+ * Find free worker and send socket
+ */
+static void
+SendPortToConnectionPool(Port *port)
+{
+	int		i;
+	bool	sent;
+
+	/* By default the backend is not dedicated */
+	IsDedicatedBackend = false;
+
+	sent = false;
+
+again:
+	for (i = 0; i < NumConnPoolWorkers; i++)
+	{
+		ConnPoolWorker	*worker = &ConnPoolWorkers[i];
+		if (worker->pid == 0)
+			continue;
+
+		if (worker->state == CPW_PROCESSED)
+		{
+			Port *conn = PoolConnCreate(worker->pipes[0], i);
+			if (conn)
+				ConnDispatch(conn);
+		}
+		if (worker->state == CPW_FREE)
+		{
+			worker->port = port;
+			worker->state = CPW_NEW_SOCKET;
+			worker->cac_state = canAcceptConnections();
+
+			if (pg_send_sock(worker->pipes[0], port->sock, worker->pid) < 0)
+			{
+				elog(LOG, "could not send socket to connection pool: %m");
+				ExitPostmaster(1);
+			}
+			SetLatch(worker->latch);
+			sent = true;
+			break;
+		}
+	}
+
+	if (!sent)
+	{
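+		/*
+		 * All workers are busy: sleep for a millisecond and retry.  A
+		 * latch-based wakeup would be cleaner, but polling keeps this
+		 * prototype simple.
+		 */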
+		pg_usleep(1000L);
+		goto again;
+	}
+}
+
+static void
+ConnDispatch(Port *port)
+{
+	bool			found;
+	DatabasePool   *pool;
+	struct DatabasePoolKey	key;
+
+	Assert(port->sock != PGINVALID_SOCKET);
+	if (IsDedicatedDatabase(port->database_name))
+	{
+		IsDedicatedBackend = true;
+		BackendStartup(NULL, port);
+		goto cleanup;
+	}
+
+#ifdef USE_SSL
+	if (port->ssl_in_use)
+	{
+		/*
+		 * We don't (yet) support SSL connections with connection pool,
+		 * since we need to move whole SSL context to already working
+		 * backend. This task needs more investigation.
+		 */
+		elog(ERROR, "connection pool does not support SSL connections");
+		goto cleanup;
+	}
+#endif
+	MemSet(key.database, 0, NAMEDATALEN);
+	MemSet(key.username, 0, NAMEDATALEN);
+
+	strlcpy(key.database, port->database_name, NAMEDATALEN);
+	strlcpy(key.username, port->user_name, NAMEDATALEN);
+
+	pool = hash_search(PostmasterSessionPool.pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		pool->key = key;
+		pool->workers = NULL;
+		pool->n_workers = 0;
+		pool->rr_index = 0;
+	}
+
+	BackendStartup(pool, port);
+
+cleanup:
+	/*
+	 * We no longer need the open socket or port structure
+	 * in this process
+	 */
+	StreamClose(port->sock);
+	ConnFree(port);
+}
+
+/*
+ * Init wait event set for connection pool workers,
+ * and hash table for backends in pool.
+ */
+static int
+InitConnPoolState(fd_set *rmask, int numSockets)
+{
+	int			i;
+	HASHCTL		ctl;
+
+	/*
+	 * create hash table that maps database/user pairs to backend pools
+	 */
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(struct DatabasePoolKey);
+	ctl.entrysize = sizeof(DatabasePool);
+	ctl.hcxt = PostmasterContext;
+	PostmasterSessionPool.pools = hash_create("Pool by database and user", 100,
+								  &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+
+	for (i = 0; i < NumConnPoolWorkers; i++)
+	{
+		ConnPoolWorker	*worker = &ConnPoolWorkers[i];
+		worker->port = NULL;
+
+		/*
+		 * we use same pselect(3) call for connection pool workers and
+		 * clients
+		 */
+		ListenSocket[MAXLISTEN + i] = worker->pipes[0];
+		FD_SET(worker->pipes[0], rmask);
+		if (worker->pipes[0] > numSockets)
+			numSockets = worker->pipes[0];
+	}
+
+	return numSockets + 1;
+}
+
 /*
  * Main idle loop of postmaster
  *
@@ -1630,6 +1843,9 @@ ServerLoop(void)
 
 	nSockets = initMasks(&readmask);
 
+	if (SessionPoolSize > 0)
+		nSockets = InitConnPoolState(&readmask, nSockets);
+
 	for (;;)
 	{
 		fd_set		rmask;
@@ -1690,27 +1906,43 @@ ServerLoop(void)
 		 */
 		if (selres > 0)
 		{
+			Port	   *port;
 			int			i;
 
+			/* Check for client connections */
 			for (i = 0; i < MAXLISTEN; i++)
 			{
 				if (ListenSocket[i] == PGINVALID_SOCKET)
 					break;
 				if (FD_ISSET(ListenSocket[i], &rmask))
 				{
-					Port	   *port;
-
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
-						/*
-						 * We no longer need the open socket or port structure
-						 * in this process
-						 */
-						StreamClose(port->sock);
-						ConnFree(port);
+						if (SessionPoolSize == 0)
+						{
+							IsDedicatedBackend = true;
+							BackendStartup(NULL, port);
+							StreamClose(port->sock);
+							ConnFree(port);
+						}
+						else
+							SendPortToConnectionPool(port);
+					}
+				}
+			}
+
+			/* Check for data from connection pool workers */
+			if (SessionPoolSize > 0)
+			{
+				for (i = 0; i < NumConnPoolWorkers; i++)
+				{
+					if (FD_ISSET(ListenSocket[MAXLISTEN + i], &rmask))
+					{
+						port = PoolConnCreate(ListenSocket[MAXLISTEN + i], i);
+						if (port)
+							ConnDispatch(port);
 					}
 				}
 			}
@@ -1893,13 +2125,15 @@ initMasks(fd_set *rmask)
  * send anything to the client, which would typically be appropriate
  * if we detect a communications failure.)
  */
-static int
-ProcessStartupPacket(Port *port, bool SSLdone)
+int
+ProcessStartupPacket(Port *port, bool SSLdone, MemoryContext memctx,
+						int errlevel)
 {
 	int32		len;
 	void	   *buf;
 	ProtocolVersion proto;
-	MemoryContext oldcontext;
+	MemoryContext oldcontext = MemoryContextSwitchTo(memctx);
+	int			result;
 
 	pq_startmsgread();
 	if (pq_getbytes((char *) &len, 4) == EOF)
@@ -1992,7 +2226,7 @@ retry1:
 #endif
 		/* regular startup packet, cancel, etc packet should follow... */
 		/* but not another SSL negotiation request */
-		return ProcessStartupPacket(port, true);
+		return ProcessStartupPacket(port, true, memctx, errlevel);
 	}
 
 	/* Could add additional special packet types here */
@@ -2006,13 +2240,16 @@ retry1:
 	/* Check that the major protocol version is in range. */
 	if (PG_PROTOCOL_MAJOR(proto) < PG_PROTOCOL_MAJOR(PG_PROTOCOL_EARLIEST) ||
 		PG_PROTOCOL_MAJOR(proto) > PG_PROTOCOL_MAJOR(PG_PROTOCOL_LATEST))
-		ereport(FATAL,
+	{
+		ereport(errlevel,
 				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 				 errmsg("unsupported frontend protocol %u.%u: server supports %u.0 to %u.%u",
 						PG_PROTOCOL_MAJOR(proto), PG_PROTOCOL_MINOR(proto),
 						PG_PROTOCOL_MAJOR(PG_PROTOCOL_EARLIEST),
 						PG_PROTOCOL_MAJOR(PG_PROTOCOL_LATEST),
 						PG_PROTOCOL_MINOR(PG_PROTOCOL_LATEST))));
+		return STATUS_ERROR;
+	}
 
 	/*
 	 * Now fetch parameters out of startup packet and save them into the Port
@@ -2022,7 +2259,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2070,12 +2307,15 @@ retry1:
 					am_db_walsender = true;
 				}
 				else if (!parse_bool(valptr, &am_walsender))
-					ereport(FATAL,
+				{
+					ereport(errlevel,
 							(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
 							 errmsg("invalid value for parameter \"%s\": \"%s\"",
 									"replication",
 									valptr),
 							 errhint("Valid values are: \"false\", 0, \"true\", 1, \"database\".")));
+					return STATUS_ERROR;
+				}
 			}
 			else if (strncmp(nameptr, "_pq_.", 5) == 0)
 			{
@@ -2103,9 +2343,12 @@ retry1:
 		 * given packet length, complain.
 		 */
 		if (offset != len - 1)
-			ereport(FATAL,
+		{
+			ereport(errlevel,
 					(errcode(ERRCODE_PROTOCOL_VIOLATION),
 					 errmsg("invalid startup packet layout: expected terminator as last byte")));
+			return STATUS_ERROR;
+		}
 
 		/*
 		 * If the client requested a newer protocol version or if the client
@@ -2141,9 +2384,12 @@ retry1:
 
 	/* Check a user name was given. */
 	if (port->user_name == NULL || port->user_name[0] == '\0')
-		ereport(FATAL,
+	{
+		ereport(errlevel,
 				(errcode(ERRCODE_INVALID_AUTHORIZATION_SPECIFICATION),
 				 errmsg("no PostgreSQL user name specified in startup packet")));
+		return STATUS_ERROR;
+	}
 
 	/* The database defaults to the user name. */
 	if (port->database_name == NULL || port->database_name[0] == '\0')
@@ -2197,27 +2443,32 @@ retry1:
 	 * now instead of wasting cycles on an authentication exchange. (This also
 	 * allows a pg_ping utility to be written.)
 	 */
+	result = STATUS_OK;
 	switch (port->canAcceptConnections)
 	{
 		case CAC_STARTUP:
-			ereport(FATAL,
+			ereport(errlevel,
 					(errcode(ERRCODE_CANNOT_CONNECT_NOW),
 					 errmsg("the database system is starting up")));
+			result = STATUS_ERROR;
 			break;
 		case CAC_SHUTDOWN:
-			ereport(FATAL,
+			ereport(errlevel,
 					(errcode(ERRCODE_CANNOT_CONNECT_NOW),
 					 errmsg("the database system is shutting down")));
+			result = STATUS_ERROR;
 			break;
 		case CAC_RECOVERY:
-			ereport(FATAL,
+			ereport(errlevel,
 					(errcode(ERRCODE_CANNOT_CONNECT_NOW),
 					 errmsg("the database system is in recovery mode")));
+			result = STATUS_ERROR;
 			break;
 		case CAC_TOOMANY:
-			ereport(FATAL,
+			ereport(errlevel,
 					(errcode(ERRCODE_TOO_MANY_CONNECTIONS),
 					 errmsg("sorry, too many clients already")));
+			result = STATUS_ERROR;
 			break;
 		case CAC_WAITBACKUP:
 			/* OK for now, will check in InitPostgres */
@@ -2226,7 +2477,7 @@ retry1:
 			break;
 	}
 
-	return STATUS_OK;
+	return result;
 }
 
 /*
@@ -2322,7 +2573,7 @@ processCancelRequest(Port *port, void *pkt)
 /*
  * canAcceptConnections --- check to see if database state allows connections.
  */
-static CAC_state
+CAC_state
 canAcceptConnections(void)
 {
 	CAC_state	result = CAC_OK;
@@ -2398,7 +2649,7 @@ ConnCreate(int serverFd)
 		ConnFree(port);
 		return NULL;
 	}
-
+	SessionPoolSock = PGINVALID_SOCKET;
+
 	/*
 	 * Allocate GSSAPI specific state struct
 	 */
@@ -2418,6 +2669,69 @@ ConnCreate(int serverFd)
 	return port;
 }
 
+#define CONN_BUF_SIZE 8192
+
+static Port *
+PoolConnCreate(pgsocket poolFd, int workerId)
+{
+	char				recv_buf[CONN_BUF_SIZE];
+	int					recv_len = 0,
+						i,
+						rc,
+						offs,
+						len;
+	StringInfoData		buf;
+	ConnPoolWorker	   *worker = &ConnPoolWorkers[workerId];
+	Port			   *port = worker->port;
+
+	if (worker->state != CPW_PROCESSED)
+		return NULL;
+
+	/* In any case we should free the worker */
+	worker->port = NULL;
+	worker->state = CPW_FREE;
+
+	/* get size of data */
+	while ((rc = read(poolFd, &recv_len, sizeof(recv_len))) < 0 && errno == EINTR);
+
+	if (rc != (int) sizeof(recv_len))
+		goto io_error;
+
+	/* Sanity check: the serialized port must fit into our buffer */
+	if (recv_len <= 0 || recv_len > CONN_BUF_SIZE)
+		goto io_error;
+
+	/* get the data */
+	for (offs = 0; offs < recv_len; offs += rc)
+	{
+		while ((rc = read(poolFd, recv_buf + offs, recv_len - offs)) < 0 && errno == EINTR);
+		if (rc <= 0)
+			goto io_error;
+	}
+
+	buf.cursor = 0;
+	buf.data = recv_buf;
+	buf.len = recv_len;
+
+	port->proto = pq_getmsgint(&buf, 4);
+	port->database_name = MemoryContextStrdup(TopMemoryContext, pq_getmsgstring(&buf));
+	port->user_name = MemoryContextStrdup(TopMemoryContext, pq_getmsgstring(&buf));
+	port->guc_options = NIL;
+
+	/* GUC */
+	len = pq_getmsgint(&buf, 4);
+	for (i = 0; i < len; i++)
+	{
+		char	*val = MemoryContextStrdup(TopMemoryContext, pq_getmsgstring(&buf));
+		port->guc_options = lappend(port->guc_options, val);
+	}
+
+	if (pq_getmsgint(&buf, 4) > 0)
+		port->cmdline_options = MemoryContextStrdup(TopMemoryContext, pq_getmsgstring(&buf));
+
+	return port;
+
+io_error:
+	StreamClose(port->sock);
+	ConnFree(port);
+	return NULL;
+}
 
 /*
  * ConnFree -- free a local connection data structure
@@ -2430,6 +2744,12 @@ ConnFree(Port *conn)
 #endif
 	if (conn->gss)
 		free(conn->gss);
+	if (conn->database_name)
+		pfree(conn->database_name);
+	if (conn->user_name)
+		pfree(conn->user_name);
+	if (conn->cmdline_options)
+		pfree(conn->cmdline_options);
 	free(conn);
 }
 
@@ -3185,6 +3505,44 @@ CleanupBackgroundWorker(int pid,
 }
 
 /*
+ * Unlink backend from backend's list and free memory.
+ */
+static void
+UnlinkPooledBackend(Backend *bp)
+{
+	DatabasePool	*pool = bp->pool;
+
+	if (!pool ||
+		bp->bkend_type != BACKEND_TYPE_NORMAL ||
+		bp->session_send_sock == PGINVALID_SOCKET)
+		return;
+
+	Assert(pool->n_workers > bp->worker_id &&
+		   pool->workers[bp->worker_id] == bp);
+
+	if (--pool->n_workers != 0)
+	{
+		pool->workers[bp->worker_id] = pool->workers[pool->n_workers];
+		pool->workers[bp->worker_id]->worker_id = bp->worker_id;
+		pool->rr_index %= pool->n_workers;
+	}
+
+	closesocket(bp->session_send_sock);
+	bp->session_send_sock = PGINVALID_SOCKET;
+
+	elog(DEBUG2, "cleaned up pooled backend %d", bp->pid);
+}
+
+static void
+DeleteBackend(Backend *bp)
+{
+	UnlinkPooledBackend(bp);
+
+	dlist_delete(&bp->elem);
+	free(bp);
+}
+
+/*
  * CleanupBackend -- cleanup after terminated backend.
  *
  * Remove all local state associated with backend.
@@ -3261,8 +3619,7 @@ CleanupBackend(int pid,
 				 */
 				BackgroundWorkerStopNotifications(bp->pid);
 			}
-			dlist_delete(iter.cur);
-			free(bp);
+			DeleteBackend(bp);
 			break;
 		}
 	}
@@ -3364,8 +3721,7 @@ HandleChildCrash(int pid, int exitstatus, const char *procname)
 				ShmemBackendArrayRemove(bp);
 #endif
 			}
-			dlist_delete(iter.cur);
-			free(bp);
+			DeleteBackend(bp);
 			/* Keep looping so we can signal remaining backends */
 		}
 		else
@@ -3955,6 +4311,118 @@ TerminateChildren(int signal)
 }
 
 /*
+ * Try to report error to client.
+ * Since we must not risk blocking the postmaster on
+ * this connection, we set the connection to non-blocking and try only once.
+ *
+ * This is grungy special-purpose code; we cannot use backend libpq since
+ * it's not up and running.
+ */
+static void
+report_postmaster_failure_to_client(Port *port, char const* errmsg)
+{
+	int rc;
+
+	/* Set port to non-blocking.  Don't do send() if this fails */
+	if (!pg_set_noblock(port->sock))
+		return;
+
+	/* We'll retry after EINTR, but ignore all other failures */
+	do
+	{
+		rc = send(port->sock, errmsg, strlen(errmsg) + 1, 0);
+	} while (rc < 0 && errno == EINTR);
+
+	elog(DEBUG1, "Sent postmaster failure report to client: %d", rc);
+}
+
+typedef struct
+{
+	Backend* worker;
+	double   load_average;
+} WorkerState;
+
+static int
+compareWorkerLoadAverage(void const* p, void const* q)
+{
+	WorkerState* ws1 = (WorkerState*)p;
+	WorkerState* ws2 = (WorkerState*)q;
+	return ws1->load_average < ws2->load_average ? -1 : ws1->load_average == ws2->load_average ? 0 : 1;
+}
+
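+/*
+ * Pick a backend from the pool according to SessionSchedule (random,
+ * round-robin or load-balancing) and hand the client socket over to it.
+ * The load-balancing policy sorts backends by the ratio of ready sessions
+ * to schedule attempts and tries the least loaded backend first.  Returns
+ * STATUS_OK on success, or STATUS_ERROR if every candidate is full.
+ */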
+static int
+ScheduleSession(DatabasePool *pool, Port *port)
+{
+	int i, j;
+	int n_workers = pool->n_workers;
+	WorkerState ws[MAX_CONNPOOL_WORKERS];
+
+	for (i = 0; i < n_workers; i++)
+	{
+		Backend *worker;
+		switch (SessionSchedule)
+		{
+		  case SESSION_SCHED_RANDOM:
+			worker = pool->workers[random() % n_workers];
+			break;
+		  case SESSION_SCHED_ROUND_ROBIN:
+			worker = pool->workers[pool->rr_index];
+			pool->rr_index = (pool->rr_index + 1) % n_workers; /* round-robin */
+			break;
+		  case SESSION_SCHED_LOAD_BALANCING:
+			if (i == 0)
+			{
+				for (j = 0; j < n_workers; j++)
+				{
+					worker = pool->workers[j];
+					if (!worker->proc)
+						worker->proc = BackendPidGetProc(worker->pid);
+					ws[j].worker = worker;
+					ws[j].load_average = (worker->proc && worker->proc->nSessionSchedules > 0)
+						? (double)worker->proc->nReadySessions / worker->proc->nSessionSchedules
+						: INIT_BACKEND_LOAD_AVERAGE;
+				}
+				qsort(ws, n_workers, sizeof(WorkerState), compareWorkerLoadAverage);
+			}
+			worker = ws[i].worker;
+			break;
+		  default:
+			Assert(false);
+			return STATUS_ERROR;
+		}
+		if (!worker->proc)
+			worker->proc = BackendPidGetProc(worker->pid);
+
+		if (worker->proc && worker->n_sessions - worker->proc->nFinishedSessions >= MaxSessions)
+		{
+			elog(LOG, "worker %d reached max session limit %d", worker->pid, MaxSessions);
+			continue;
+		}
+		/* Send connection socket to the worker backend */
+		if (pg_send_sock(worker->session_send_sock, port->sock, worker->pid) < 0)
+		{
+			elog(LOG, "failed to send session socket %d: %m",
+				 worker->session_send_sock);
+			UnlinkPooledBackend(worker);
+			n_workers -= 1;
+			i = -1; /* restart the loop from the very beginning */
+			continue;
+		}
+		worker->n_sessions += 1;
+		elog(DEBUG2, "Start new session for socket %d at backend %d",
+			 port->sock, worker->pid);
+
+		/* TODO: serialize the port and send it through socket */
+		return STATUS_OK;
+	}
+	ereport(LOG,
+			(errcode(ERRCODE_TOO_MANY_CONNECTIONS),
+			 errmsg("sorry, too many open sessions for connection pool %s/%s",
+					pool->key.database, pool->key.username)));
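+	/* The leading 'E' marks a V2-protocol error message packet */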
+	report_postmaster_failure_to_client(port, "ESorry, too many open sessions\n");
+	return STATUS_ERROR;
+}
+
+/*
  * BackendStartup -- start backend process
  *
  * returns: STATUS_ERROR if the fork failed, STATUS_OK otherwise.
@@ -3962,16 +4430,24 @@ TerminateChildren(int signal)
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
 static int
-BackendStartup(Port *port)
+BackendStartup(DatabasePool *pool, Port *port)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
+	pgsocket    session_pipe[2];
+
+	/*
+	 * In case of session pooling, instead of spawning a new backend, open
+	 * a new session in one of the existing backends.
+	 */
+	if (pool && pool->n_workers >= SessionPoolSize)
+		return ScheduleSession(pool, port);
 
 	/*
 	 * Create backend data structure.  Better before the fork() so we can
 	 * handle failure cleanly.
 	 */
-	bn = (Backend *) malloc(sizeof(Backend));
+	bn = (Backend *) calloc(1, sizeof(Backend));
 	if (!bn)
 	{
 		ereport(LOG,
@@ -3979,6 +4455,7 @@ BackendStartup(Port *port)
 				 errmsg("out of memory")));
 		return STATUS_ERROR;
 	}
+	bn->n_sessions = 1;
 
 	/*
 	 * Compute the cancel key that will be assigned to this backend. The
@@ -4012,12 +4489,30 @@ BackendStartup(Port *port)
 	/* Hasn't asked to be notified about any bgworkers yet */
 	bn->bgworker_notify = false;
 
+	/* Create socket pair for sending session sockets to the backend */
+	if (!IsDedicatedBackend)
+	{
+		if (socketpair(AF_UNIX, SOCK_STREAM, 0, session_pipe) < 0)
+			ereport(FATAL,
+					(errcode_for_file_access(),
+					 errmsg_internal("could not create socket pair for launching sessions: %m")));
+#ifdef WIN32
+		SessionPoolSock = session_pipe[0];
+#endif
+	}
 #ifdef EXEC_BACKEND
 	pid = backend_forkexec(port);
 #else							/* !EXEC_BACKEND */
 	pid = fork_process();
 	if (pid == 0)				/* child */
 	{
+		whereToSendOutput = DestNone;
+
+		if (!IsDedicatedBackend)
+		{
+			SessionPoolSock = session_pipe[0]; /* Use this socket for receiving client session socket descriptor */
+			close(session_pipe[1]); /* Close unused end of the pipe */
+		}
 		free(bn);
 
 		/* Detangle from postmaster */
@@ -4026,11 +4521,14 @@ BackendStartup(Port *port)
 		/* Close the postmaster's sockets */
 		ClosePostmasterPorts(false);
 
-		/* Perform additional initialization and collect startup packet */
+		/* Perform additional initialization */
 		BackendInitialize(port);
 
 		/* And run the backend */
 		BackendRun(port);
+
+		/* Unreachable */
+		Assert(false);
 	}
 #endif							/* EXEC_BACKEND */
 
@@ -4041,6 +4539,7 @@ BackendStartup(Port *port)
 
 		if (!bn->dead_end)
 			(void) ReleasePostmasterChildSlot(bn->child_slot);
+
 		free(bn);
 		errno = save_errno;
 		ereport(LOG,
@@ -4059,9 +4558,27 @@ BackendStartup(Port *port)
 	 * of backends.
 	 */
 	bn->pid = pid;
+	bn->session_send_sock = PGINVALID_SOCKET;
 	bn->bkend_type = BACKEND_TYPE_NORMAL;	/* Can change later to WALSND */
+	bn->pool = pool;
 	dlist_push_head(&BackendList, &bn->elem);
 
+	if (!IsDedicatedBackend)
+	{
+		/* Use this socket for sending client session socket descriptor */
+		bn->session_send_sock = session_pipe[1];
+
+		/* Close unused end of the pipe */
+		closesocket(session_pipe[0]);
+
+		if (pool->workers == NULL)
+			pool->workers = (Backend **) calloc(SessionPoolSize, sizeof(Backend *));
+
+		bn->worker_id = pool->n_workers++;
+		pool->workers[bn->worker_id] = bn;
+
+		elog(DEBUG1, "started pool worker %d with pid %d", pool->n_workers, pid);
+	}
 #ifdef EXEC_BACKEND
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
@@ -4082,22 +4599,13 @@ static void
 report_fork_failure_to_client(Port *port, int errnum)
 {
 	char		buffer[1000];
-	int			rc;
 
 	/* Format the error message packet (always V2 protocol) */
 	snprintf(buffer, sizeof(buffer), "E%s%s\n",
 			 _("could not fork new process for connection: "),
 			 strerror(errnum));
 
-	/* Set port to non-blocking.  Don't do send() if this fails */
-	if (!pg_set_noblock(port->sock))
-		return;
-
-	/* We'll retry after EINTR, but ignore all other failures */
-	do
-	{
-		rc = send(port->sock, buffer, strlen(buffer) + 1, 0);
-	} while (rc < 0 && errno == EINTR);
+	report_postmaster_failure_to_client(port, buffer);
 }
 
 
@@ -4122,6 +4630,7 @@ BackendInitialize(Port *port)
 
 	/* Save port etc. for ps status */
 	MyProcPort = port;
+	FrontendProtocol = port->proto;
 
 	/*
 	 * PreAuthDelay is a debugging aid for investigating problems in the
@@ -4148,7 +4657,10 @@ BackendInitialize(Port *port)
 	 * Initialize libpq and enable reporting of ereport errors to the client.
 	 * Must do this now because authentication uses libpq to send messages.
 	 */
-	pq_init();					/* initialize libpq to talk to client */
+	port->pqcomm_state = pq_init(TopMemoryContext);   /* initialize libpq to talk to client */
+	port->pqcomm_waitset = pq_create_backend_event_set(TopMemoryContext, port, false);
+	pq_set_current_state(port->pqcomm_state, port, port->pqcomm_waitset);
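+
+	/*
+	 * With connection pooling the libpq communication state lives in the
+	 * Port instead of in process-global variables, so that one backend can
+	 * later switch between several client sessions.
+	 */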
+
 	whereToSendOutput = DestRemote; /* now safe to ereport to client */
 
 	/*
@@ -4227,35 +4739,46 @@ BackendInitialize(Port *port)
 		port->remote_hostname = strdup(remote_host);
 
 	/*
-	 * Ready to begin client interaction.  We will give up and exit(1) after a
-	 * time delay, so that a broken client can't hog a connection
-	 * indefinitely.  PreAuthDelay and any DNS interactions above don't count
-	 * against the time limit.
-	 *
-	 * Note: AuthenticationTimeout is applied here while waiting for the
-	 * startup packet, and then again in InitPostgres for the duration of any
-	 * authentication operations.  So a hostile client could tie up the
-	 * process for nearly twice AuthenticationTimeout before we kick him off.
-	 *
-	 * Note: because PostgresMain will call InitializeTimeouts again, the
-	 * registration of STARTUP_PACKET_TIMEOUT will be lost.  This is okay
-	 * since we never use it again after this function.
+	 * Read the startup packet only if we are not using the session pool
 	 */
-	RegisterTimeout(STARTUP_PACKET_TIMEOUT, StartupPacketTimeoutHandler);
-	enable_timeout_after(STARTUP_PACKET_TIMEOUT, AuthenticationTimeout * 1000);
+	if (IsDedicatedBackend && !port->proto)
+	{
+		/*
+		 * Ready to begin client interaction.  We will give up and exit(1) after a
+		 * time delay, so that a broken client can't hog a connection
+		 * indefinitely.  PreAuthDelay and any DNS interactions above don't count
+		 * against the time limit.
+		 *
+		 * Note: AuthenticationTimeout is applied here while waiting for the
+		 * startup packet, and then again in InitPostgres for the duration of any
+		 * authentication operations.  So a hostile client could tie up the
+		 * process for nearly twice AuthenticationTimeout before we kick him off.
+		 *
+		 * Note: because PostgresMain will call InitializeTimeouts again, the
+		 * registration of STARTUP_PACKET_TIMEOUT will be lost.  This is okay
+		 * since we never use it again after this function.
+		 */
+		RegisterTimeout(STARTUP_PACKET_TIMEOUT, StartupPacketTimeoutHandler);
+		enable_timeout_after(STARTUP_PACKET_TIMEOUT, AuthenticationTimeout * 1000);
 
-	/*
-	 * Receive the startup packet (which might turn out to be a cancel request
-	 * packet).
-	 */
-	status = ProcessStartupPacket(port, false);
+		/*
+		 * Receive the startup packet (which might turn out to be a cancel request
+		 * packet).
+		 */
+		status = ProcessStartupPacket(port, false, TopMemoryContext, FATAL);
 
-	/*
-	 * Stop here if it was bad or a cancel packet.  ProcessStartupPacket
-	 * already did any appropriate error reporting.
-	 */
-	if (status != STATUS_OK)
-		proc_exit(0);
+		/*
+		 * Stop here if it was bad or a cancel packet.  ProcessStartupPacket
+		 * already did any appropriate error reporting.
+		 */
+		if (status != STATUS_OK)
+			proc_exit(0);
+
+		/*
+		 * Disable the timeout
+		 */
+		disable_timeout(STARTUP_PACKET_TIMEOUT, false);
+	}
 
 	/*
 	 * Now that we have the user and database name, we can set the process
@@ -4277,9 +4800,8 @@ BackendInitialize(Port *port)
 						update_process_title ? "authentication" : "");
 
 	/*
-	 * Disable the timeout, and prevent SIGTERM/SIGQUIT again.
+	 * Prevent SIGTERM/SIGQUIT again.
 	 */
-	disable_timeout(STARTUP_PACKET_TIMEOUT, false);
 	PG_SETMASK(&BlockSig);
 }
 
@@ -5990,6 +6512,9 @@ save_backend_variables(BackendParameters *param, Port *port,
 	if (!write_inheritable_socket(&param->portsocket, port->sock, childPid))
 		return false;
 
+	if (!write_inheritable_socket(&param->sessionsocket, SessionPoolSock, childPid))
+		return false;
+
 	strlcpy(param->DataDir, DataDir, MAXPGPATH);
 
 	memcpy(&param->ListenSocket, &ListenSocket, sizeof(ListenSocket));
@@ -6222,6 +6747,7 @@ restore_backend_variables(BackendParameters *param, Port *port)
 {
 	memcpy(port, &param->port, sizeof(Port));
 	read_inheritable_socket(&port->sock, &param->portsocket);
+	read_inheritable_socket(&SessionPoolSock, &param->sessionsocket);
 
 	SetDataDir(param->DataDir);
 
diff --git a/src/backend/storage/ipc/ipc.c b/src/backend/storage/ipc/ipc.c
index a85a1c6..9802ca0 100644
--- a/src/backend/storage/ipc/ipc.c
+++ b/src/backend/storage/ipc/ipc.c
@@ -413,3 +413,12 @@ on_exit_reset(void)
 	on_proc_exit_index = 0;
 	reset_on_dsm_detach();
 }
+
+void
+on_shmem_exit_reset(void)
+{
+	before_shmem_exit_index = 0;
+	on_shmem_exit_index = 0;
+	on_proc_exit_index = 0;
+	reset_on_dsm_detach();
+}
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index 0c86a58..10e4613 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/connpool.h"
 #include "replication/logicallauncher.h"
 #include "replication/slot.h"
 #include "replication/walreceiver.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(bool makePrivate, int port)
 		size = add_size(size, SyncScanShmemSize());
 		size = add_size(size, AsyncShmemSize());
 		size = add_size(size, BackendRandomShmemSize());
+		size = add_size(size, ConnPoolShmemSize());
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
@@ -271,6 +273,11 @@ CreateSharedMemoryAndSemaphores(bool makePrivate, int port)
 	AsyncShmemInit();
 	BackendRandomShmemInit();
 
+	/*
+	 * Set up connection pool workers
+	 */
+	ConnectionPoolWorkersInit();
+
 #ifdef EXEC_BACKEND
 
 	/*
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index f6dda9c..3c2a126 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -76,6 +76,7 @@ struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int			free_events;	/* singly-linked list of free events, linked
+								 * through "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -129,9 +130,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -562,6 +563,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -667,9 +669,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (latch)
 	{
@@ -690,8 +694,19 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
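+	/*
+	 * Reuse a previously deleted slot if one is available; the free list is
+	 * threaded through the "pos" field of unused events.
+	 */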
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->nevents += 1;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -718,15 +733,30 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
 /*
+ * Remove the event at the specified position from the wait event set
+ */
+void
+DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+}
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -737,7 +767,7 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 {
 	WaitEvent  *event;
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 
@@ -774,9 +804,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -822,19 +852,37 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
 				 errmsg("epoll_ctl() failed: %m")));
+
+	if (action == EPOLL_CTL_DEL)
+	{
+		int pos = event->pos;
+		event->fd = PGINVALID_SOCKET;
+		set->nevents -= 1;
+		event->pos = set->free_events;
+		set->free_events = pos;
+	}
 }
 #endif
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	int pos = event->pos;
+	struct pollfd *pollfd = &set->pollfds[pos];
+
+	if (remove)
+	{
+		set->nevents -= 1;
+		*pollfd = set->pollfds[set->nevents];
+		set->events[pos] = set->events[set->nevents];
+		event->pos = pos;
+		return;
+	}
 
 	pollfd->revents = 0;
 	pollfd->fd = event->fd;
@@ -865,9 +913,25 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	int pos = event->pos;
+	HANDLE	   *handle = &set->handles[pos + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		set->nevents -= 1;
+		set->events[pos] = set->events[set->nevents];
+		*handle = set->handles[set->nevents + 1];
+		set->handles[set->nevents + 1] = WSA_INVALID_EVENT;
+		event->pos = pos;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -880,7 +944,7 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 	}
 	else
 	{
-		int			flags = FD_CLOSE;	/* always check for errors/EOF */
+		int			flags = FD_CLOSE;	/* always check for errors/EOF */
 
 		if (event->events & WL_SOCKET_READABLE)
 			flags |= FD_READ;
@@ -897,8 +961,8 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 					 WSAGetLastError());
 		}
 		if (WSAEventSelect(event->fd, *handle, flags) != 0)
-			elog(ERROR, "failed to set up event for socket: error code %u",
-				 WSAGetLastError());
+			elog(ERROR, "failed to set up event for socket %d: error code %u",
+				 (int) event->fd, WSAGetLastError());
 
 		Assert(event->fd != PGINVALID_SOCKET);
 	}
@@ -1296,7 +1360,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	{
 		if (cur_event->reset)
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
 
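To illustrate the revised contract (a sketch for this discussion, not part of the patch): AddWaitEventToSet() now reports a full set by returning -1 instead of asserting, and DeleteWaitEventFromSet() pushes the vacated slot onto the free-event list under epoll, while the poll and win32 variants compact their arrays instead:

	int pos = AddWaitEventToSet(set, WL_SOCKET_READABLE, sock, NULL, session);

	if (pos < 0)
		elog(WARNING, "wait event set is full");	/* caller must now handle overflow */
	else
	{
		ModifyWaitEvent(set, pos, WL_SOCKET_WRITEABLE, NULL);	/* position stays valid */
		DeleteWaitEventFromSet(set, pos);	/* slot becomes reusable */
	}
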
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 6f9aaa5..f80861a 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -367,6 +367,9 @@ InitProcess(void)
 	MyPgXact->xid = InvalidTransactionId;
 	MyPgXact->xmin = InvalidTransactionId;
 	MyProc->pid = MyProcPid;
+	MyProc->nReadySessions = 0;
+	MyProc->nSessionSchedules = 0;
+	MyProc->nFinishedSessions = 0;
 	/* backendId, databaseId and roleId will be filled in later */
 	MyProc->backendId = InvalidBackendId;
 	MyProc->databaseId = InvalidOid;
@@ -597,6 +600,15 @@ InitAuxiliaryProcess(void)
 }
 
 /*
+ * Generate unique session ID.
+ */
+uint32
+CreateSessionId(void)
+{
+	return ++SessionPool->sessionCount;
+}
+
+/*
  * Record the PID and PGPROC structures for the Startup process, for use in
  * ProcSendSignal().  See comments there for further explanation.
  */
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 7a9ada2..8ebf902 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -40,6 +40,7 @@
 #include "access/printtup.h"
 #include "access/xact.h"
 #include "catalog/pg_type.h"
+#include "catalog/namespace.h"
 #include "commands/async.h"
 #include "commands/prepare.h"
 #include "executor/spi.h"
@@ -65,6 +66,7 @@
 #include "storage/bufmgr.h"
 #include "storage/ipc.h"
 #include "storage/proc.h"
+#include "storage/procarray.h"
 #include "storage/procsignal.h"
 #include "storage/sinval.h"
 #include "tcop/fastpath.h"
@@ -77,9 +79,12 @@
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
 #include "utils/timestamp.h"
+#include "utils/builtins.h"
+#include "utils/varlena.h"
+#include "utils/inval.h"
+#include "utils/catcache.h"
 #include "mb/pg_wchar.h"
 
-
 /* ----------------
  *		global variables
  * ----------------
@@ -100,6 +105,41 @@ int			max_stack_depth = 100;
 /* wait N seconds to allow attach from a debugger */
 int			PostAuthDelay = 0;
 
+/* Local socket for redirecting sessions to the backends */
+pgsocket SessionPoolSock = PGINVALID_SOCKET;
+
+/* Pointer to pool of sessions */
+BackendSessionPool	   *SessionPool = NULL;
+
+/* Pointer to the active session */
+SessionContext		   *ActiveSession;
+
+/* Default values of session variables, used to initialize new sessions */
+SessionContext		    DefaultContext;
+
+bool					IsDedicatedBackend = false;
+
+#define SessionVariable(type,name,init)  type name = init;
+#include "storage/sessionvars.h"
+
+/* Copy current values of all session variables into the given session */
+static void SaveSessionVariables(SessionContext* session)
+{
+	if (session != NULL)
+	{
+#define SessionVariable(type,name,init) session->name = name;
+#include "storage/sessionvars.h"
+	}
+}
+
+/* Make the given session's variable values the current ones */
+static void LoadSessionVariables(SessionContext* session)
+{
+#define SessionVariable(type,name,init) name = session->name;
+#include "storage/sessionvars.h"
+}
+
+/* Initialize a new session's variables from the default context */
+static void InitializeSessionVariables(SessionContext* session)
+{
+#define SessionVariable(type,name,init) session->name = DefaultContext.name;
+#include "storage/sessionvars.h"
+}
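The SessionVariable definitions above are an X-macro: storage/sessionvars.h is included several times, each time with a different expansion of the macro. The header itself is not shown in this excerpt; a hypothetical fragment (an illustration only, though it plausibly lists the per-session user-id state removed from miscinit.c later in this patch) could look like:

	/* storage/sessionvars.h -- hypothetical excerpt, for illustration */
	SessionVariable(Oid, AuthenticatedUserId, InvalidOid)
	SessionVariable(Oid, SessionUserId, InvalidOid)
	SessionVariable(int, SecurityRestrictionContext, 0)
	SessionVariable(bool, SetRoleIsActive, false)

	#undef SessionVariable		/* let the next inclusion redefine it */
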
+
 
 
 /* ----------------
@@ -171,6 +211,8 @@ static ProcSignalReason RecoveryConflictReason;
 static MemoryContext row_description_context = NULL;
 static StringInfoData row_description_buf;
 
+static bool IdleInTransactionSessionError;
+
 /* ----------------------------------------------------------------
  *		decls for routines only used in this file
  * ----------------------------------------------------------------
@@ -196,6 +238,8 @@ static void log_disconnections(int code, Datum arg);
 static void enable_statement_timeout(void);
 static void disable_statement_timeout(void);
 
+static void DeleteSession(SessionContext *session);
+static void ResetCurrentSession(void);
 
 /* ----------------------------------------------------------------
  *		routines to obtain user input
@@ -1234,10 +1278,6 @@ exec_parse_message(const char *query_string,	/* string to execute */
 	bool		save_log_statement_stats = log_statement_stats;
 	char		msec_str[32];
 
-	/*
-	 * Report query to various monitoring facilities.
-	 */
-	debug_query_string = query_string;
 
 	pgstat_report_activity(STATE_RUNNING, query_string);
 
@@ -2930,9 +2970,28 @@ ProcessInterrupts(void)
 		LockErrorCleanup();
 		/* don't send to client, we already know the connection to be dead. */
 		whereToSendOutput = DestNone;
-		ereport(FATAL,
-				(errcode(ERRCODE_CONNECTION_FAILURE),
-				 errmsg("connection to client lost")));
+
+		if (ActiveSession)
+		{
+			Port *port = ActiveSession->port;
+			pgsocket sock = port->sock;
+			elog(LOG, "Lost connection on session socket %d in backend %d", (int) sock, MyProcPid);
+
+			port->sock = PGINVALID_SOCKET;
+
+			MyProcPort = NULL;
+
+			StartTransactionCommand();
+			UserAbortTransactionBlock();
+			CommitTransactionCommand();
+
+			ResetCurrentSession();
+			closesocket(sock);
+		}
+		else
+			ereport(FATAL,
+					(errcode(ERRCODE_CONNECTION_FAILURE),
+					 errmsg("connection to client lost")));
 	}
 
 	/*
@@ -3043,9 +3102,20 @@ ProcessInterrupts(void)
 	{
 		/* Has the timeout setting changed since last we looked? */
 		if (IdleInTransactionSessionTimeout > 0)
-			ereport(FATAL,
-					(errcode(ERRCODE_IDLE_IN_TRANSACTION_SESSION_TIMEOUT),
-					 errmsg("terminating connection due to idle-in-transaction timeout")));
+		{
+			if (ActiveSession)
+			{
+				IdleInTransactionSessionTimeoutPending = false;
+				IdleInTransactionSessionError = true;
+				ereport(ERROR,
+						(errcode(ERRCODE_IDLE_IN_TRANSACTION_SESSION_TIMEOUT),
+						 errmsg("canceling current transaction due to idle-in-transaction timeout")));
+			}
+			else
+				ereport(FATAL,
+						(errcode(ERRCODE_IDLE_IN_TRANSACTION_SESSION_TIMEOUT),
+						 errmsg("terminating connection due to idle-in-transaction timeout")));
+		}
 		else
 			IdleInTransactionSessionTimeoutPending = false;
 
@@ -3605,6 +3675,106 @@ process_postgres_switches(int argc, char *argv[], GucContext ctx,
 #endif
 }
 
+#define ACTIVE_SESSION_MAGIC  0xDEFA1234U
+#define REMOVED_SESSION_MAGIC 0xDEADDEEDU
+
+static int nActiveSessions = 0;
+
+static SessionContext *
+CreateSession(void)
+{
+	SessionContext *session = (SessionContext *)
+		MemoryContextAllocZero(SessionPool->mcxt, sizeof(SessionContext));
+
+	session->memory = AllocSetContextCreate(SessionPool->mcxt,
+		"SessionMemoryContext", ALLOCSET_DEFAULT_SIZES);
+	session->prepared_queries = NULL;
+	session->id = CreateSessionId();
+	session->portals = CreatePortalsHashTable(session->memory);
+	session->magic = ACTIVE_SESSION_MAGIC;
+	session->eventPos = -1;
+	nActiveSessions += 1;
+	return session;
+}
+
+static void
+SwitchToSession(SessionContext *session)
+{
+	/*
+	 * epoll may return an event for an already closed session if the socket
+	 * is still open somewhere.  From the epoll documentation:
+	 *
+	 * Q6  Will closing a file descriptor cause it to be removed from all
+	 *     epoll sets automatically?
+	 *
+	 * A6  Yes, but be aware of the following point.  A file descriptor is a
+	 *     reference to an open file description (see open(2)).  Whenever a
+	 *     descriptor is duplicated via dup(2), dup2(2), fcntl(2) F_DUPFD, or
+	 *     fork(2), a new file descriptor referring to the same open file
+	 *     description is created.  An open file description continues to
+	 *     exist until all file descriptors referring to it have been closed.
+	 *     A file descriptor is removed from an epoll set only after all the
+	 *     file descriptors referring to the underlying open file description
+	 *     have been closed (or before if the descriptor is explicitly
+	 *     removed using epoll_ctl(2) EPOLL_CTL_DEL).  This means that even
+	 *     after a file descriptor that is part of an epoll set has been
+	 *     closed, events may be reported for that file descriptor if other
+	 *     file descriptors referring to the same underlying file description
+	 *     remain open.
+	 *
+	 * By checking that the magic field is still valid we ignore such stale
+	 * events.
+	 */
+	if (ActiveSession == session || session->magic != ACTIVE_SESSION_MAGIC)
+		return;
+
+	/* Swap out variables and GUC values of the previously active session */
+	SaveSessionVariables(ActiveSession);
+	RestoreSessionGUCs(ActiveSession);
+	ActiveSession = session;
+
+	MyProcPort = ActiveSession->port;
+	SetTempNamespaceState(ActiveSession->tempNamespace,
+						  ActiveSession->tempToastNamespace);
+	pq_set_current_state(session->port->pqcomm_state, session->port,
+						 session->eventSet);
+	whereToSendOutput = DestRemote;
+
+	/*
+	 * Swap in variables and GUC values of the new session
+	 * (RestoreSessionGUCs exchanges saved and current values, so the
+	 * second call activates them)
+	 */
+	RestoreSessionGUCs(ActiveSession);
+	LoadSessionVariables(ActiveSession);
+}
+
+static void
+ResetCurrentSession(void)
+{
+	if (!ActiveSession)
+		return;
+
+	whereToSendOutput = DestNone;
+	DeleteSession(ActiveSession);
+	pq_set_current_state(NULL, NULL, NULL);
+	SetTempNamespaceState(InvalidOid, InvalidOid);
+	ActiveSession = NULL;
+}
+
+/*
+ * Free all memory associated with session and delete session object itself.
+ */
+static void
+DeleteSession(SessionContext *session)
+{
+	elog(DEBUG1, "Delete session %p, id=%u, memory context=%p",
+			session, session->id, session->memory);
+
+	if (OidIsValid(session->tempNamespace))
+		ResetTempTableNamespace(session->tempNamespace);
+
+	MyProc->nFinishedSessions += 1;
+	nActiveSessions -= 1;
+
+	DropAllPreparedStatements();
+	if (session->eventPos >= 0)
+		DeleteWaitEventFromSet(SessionPool->waitEvents, session->eventPos);
+	FreeWaitEventSet(session->eventSet);
+	RestoreSessionGUCs(session);
+	ReleaseSessionGUCs(session);
+	MemoryContextDelete(session->memory);
+	session->magic = REMOVED_SESSION_MAGIC;
+	pfree(session);
+
+	on_shmem_exit_reset();
+	pgstat_report_stat(true);
+}
 
 /* ----------------------------------------------------------------
  * PostgresMain
@@ -3627,6 +3797,10 @@ PostgresMain(int argc, char *argv[],
 	sigjmp_buf	local_sigjmp_buf;
 	volatile bool send_ready_for_query = true;
 	bool		disable_idle_in_transaction_timeout = false;
+	WaitEvent*  ready_clients = NULL;
+	int         n_ready_clients = 0;
+	int         ready_client_index = 0;
+	int         max_events = 0;
 
 	/* Initialize startup process environment if necessary. */
 	if (!IsUnderPostmaster)
@@ -3656,6 +3830,35 @@ PostgresMain(int argc, char *argv[],
 							progname)));
 	}
 
+	/* Serve all connections to dedicated databases (such as "postgres") with dedicated backends */
+	if (IsDedicatedBackend)
+	{
+		SessionPoolSize = 0;
+		closesocket(SessionPoolSock);
+		SessionPoolSock = PGINVALID_SOCKET;
+	}
+
+	if (IsUnderPostmaster && !IsDedicatedBackend)
+	{
+		elog(DEBUG1, "Session pooling is active on %s database", dbname);
+
+		/* Initialize sessions pool for this backend */
+		Assert(SessionPool == NULL);
+		SessionPool = (BackendSessionPool *) MemoryContextAllocZero(
+				TopMemoryContext, sizeof(BackendSessionPool));
+		SessionPool->mcxt = AllocSetContextCreate(TopMemoryContext,
+			"SessionPoolContext", ALLOCSET_DEFAULT_SIZES);
+
+		/* Save the original backend port here */
+		SessionPool->backendPort = MyProcPort;
+
+		ActiveSession = CreateSession();
+		ActiveSession->port = MyProcPort;
+		ActiveSession->eventSet = pq_get_current_waitset();
+		max_events = MaxSessions + 3;	/* 3 extra slots are used for the
+										 * session pool listening socket,
+										 * MyLatch and the postmaster death
+										 * watchdog */
+		ready_clients = (WaitEvent *) MemoryContextAlloc(TopMemoryContext,
+														 sizeof(WaitEvent) * max_events);
+	}
+
 	/* Acquire configuration parameters, unless inherited from postmaster */
 	if (!IsUnderPostmaster)
 	{
@@ -3784,7 +3987,7 @@ PostgresMain(int argc, char *argv[],
 	 * ... else we'd need to copy the Port data first.  Also, subsidiary data
 	 * such as the username isn't lost either; see ProcessStartupPacket().
 	 */
-	if (PostmasterContext)
+	if (PostmasterContext && SessionPoolSize == 0)
 	{
 		MemoryContextDelete(PostmasterContext);
 		PostmasterContext = NULL;
@@ -3922,7 +4125,8 @@ PostgresMain(int argc, char *argv[],
 		pq_comm_reset();
 
 		/* Report the error to the client and/or server log */
-		EmitErrorReport();
+		if (MyProcPort)
+			EmitErrorReport();
 
 		/*
 		 * Make sure debug_query_string gets reset before we possibly clobber
@@ -3982,13 +4186,27 @@ PostgresMain(int argc, char *argv[],
 		 * messages from the client, so there isn't much we can do with the
 		 * connection anymore.
 		 */
-		if (pq_is_reading_msg())
+		if (pq_is_reading_msg() && !ActiveSession)
 			ereport(FATAL,
 					(errcode(ERRCODE_PROTOCOL_VIOLATION),
 					 errmsg("terminating connection because protocol synchronization was lost")));
 
 		/* Now we can allow interrupts again */
 		RESUME_INTERRUPTS();
+
+		if (ActiveSession)
+		{
+			whereToSendOutput = DestRemote;
+			if (IdleInTransactionSessionError || (IsAbortedTransactionBlockState() && pq_is_reading_msg()))
+			{
+				StartTransactionCommand();
+				UserAbortTransactionBlock();
+				CommitTransactionCommand();
+				IdleInTransactionSessionError = false;
+			}
+			if (pq_is_reading_msg())
+				goto CloseSession;
+		}
 	}
 
 	/* We can now handle ereport(ERROR) */
@@ -3997,10 +4215,30 @@ PostgresMain(int argc, char *argv[],
 	if (!ignore_till_sync)
 		send_ready_for_query = true;	/* initially, or after error */
 
+
+	/* Initialize the wait event set if we're using a session pool */
+	if (SessionPool && SessionPool->waitEvents == NULL)
+	{
+		/* Construct wait event set if not constructed yet */
+		SessionPool->waitEvents = CreateWaitEventSet(SessionPool->mcxt, max_events);
+		/* Add event to detect postmaster death */
+		AddWaitEventToSet(SessionPool->waitEvents, WL_POSTMASTER_DEATH,
+				PGINVALID_SOCKET, NULL, ActiveSession);
+		/* Add event for backends latch */
+		AddWaitEventToSet(SessionPool->waitEvents, WL_LATCH_SET,
+				PGINVALID_SOCKET, MyLatch, ActiveSession);
+		/* Add event for accepting new sessions */
+		AddWaitEventToSet(SessionPool->waitEvents, WL_SOCKET_READABLE,
+				SessionPoolSock, NULL, ActiveSession);
+		/* Add event for current session */
+		ActiveSession->eventPos = AddWaitEventToSet(SessionPool->waitEvents, WL_SOCKET_READABLE,
+				ActiveSession->port->sock, NULL, ActiveSession);
+		SaveSessionVariables(&DefaultContext);
+	}
+
 	/*
 	 * Non-error queries loop here.
 	 */
-
 	for (;;)
 	{
 		/*
@@ -4076,6 +4314,140 @@ PostgresMain(int argc, char *argv[],
 
 			ReadyForQuery(whereToSendOutput);
 			send_ready_for_query = false;
+
+			/*
+			 * Here we multiplex client sessions if session pooling is
+			 * enabled.  Since we perform transaction-level pooling,
+			 * rescheduling is done only when we are not inside a
+			 * transaction.
+			 */
+			if (SessionPoolSock != PGINVALID_SOCKET
+					&& !IsTransactionState()
+					&& !IsAbortedTransactionBlockState()
+					&& pq_available_bytes() == 0)
+			{
+				WaitEvent*  ready_client;
+
+			  ChooseSession:
+				DoingCommandRead = true;
+				/* Select which client session is ready to send new query */
+				if (ready_client_index == n_ready_clients)
+				{
+					n_ready_clients = WaitEventSetWait(SessionPool->waitEvents, -1,
+													   ready_clients, max_events, PG_WAIT_CLIENT);
+					if (n_ready_clients < 1)
+					{
+						/* TODO: do some error recovery here */
+						elog(FATAL, "Failed to poll client sessions");
+					}
+					ready_client_index = 0;
+					MyProc->nSessionSchedules += 1;
+					MyProc->nReadySessions += n_ready_clients;
+				}
+				ready_client = &ready_clients[ready_client_index++];
+
+				CHECK_FOR_INTERRUPTS();
+				DoingCommandRead = false;
+
+				if (ready_client->events & WL_POSTMASTER_DEATH)
+					ereport(FATAL,
+							(errcode(ERRCODE_ADMIN_SHUTDOWN),
+							 errmsg("terminating connection due to unexpected postmaster exit")));
+
+				if (ready_client->events & WL_LATCH_SET)
+				{
+					ResetLatch(MyLatch);
+					ProcessClientReadInterrupt(true);
+					goto ChooseSession;
+				}
+
+				if (ready_client->fd == SessionPoolSock)
+				{
+					/* Here we handle case of attaching new session */
+					SessionContext* session;
+					StringInfoData buf;
+					Port*    port;
+					pgsocket sock;
+					MemoryContext oldcontext;
+
+					sock = pg_recv_sock(SessionPoolSock);
+					if (sock == PGINVALID_SOCKET)
+						elog(ERROR, "Failed to receive session socket: %m");
+
+					session = CreateSession();
+
+					/* Initialize port and wait event set for this session */
+					oldcontext = MemoryContextSwitchTo(session->memory);
+					MyProcPort = port = palloc(sizeof(Port));
+					memcpy(port, SessionPool->backendPort, sizeof(Port));
+
+					/*
+					 * Receive the startup packet (which might turn out to be
+					 * a cancel request packet).
+					 */
+					port->sock = sock;
+					port->pqcomm_state = pq_init(session->memory);
+
+					session->port = port;
+					session->eventSet =
+						pq_create_backend_event_set(session->memory, port, false);
+					pq_set_current_state(session->port->pqcomm_state,
+										 port,
+										 session->eventSet);
+					whereToSendOutput = DestRemote;
+
+					MemoryContextSwitchTo(oldcontext);
+
+					session->eventPos = AddWaitEventToSet(SessionPool->waitEvents, WL_SOCKET_READABLE,
+														  sock, NULL, session);
+					if (session->eventPos < 0)
+					{
+						elog(WARNING, "Too many pooled sessions: %d", MaxSessions);
+						DeleteSession(session);
+						ActiveSession = NULL;
+						closesocket(sock);
+						goto ChooseSession;
+					}
+
+					elog(DEBUG1, "Start new session %d in backend %d "
+						"for database %s user %s", (int)sock, MyProcPid,
+						port->database_name, port->user_name);
+
+					SaveSessionVariables(ActiveSession);
+					RestoreSessionGUCs(ActiveSession);
+					ActiveSession = session;
+					InitializeSessionVariables(session);
+					LoadSessionVariables(session);
+					SetCurrentStatementStartTimestamp();
+					StartTransactionCommand();
+					PerformAuthentication(MyProcPort);
+					process_settings(MyDatabaseId, GetSessionUserId());
+					CommitTransactionCommand();
+					SetTempNamespaceState(InvalidOid, InvalidOid);
+
+					/*
+					 * Send GUC options to the client
+					 */
+					BeginReportingGUCOptions();
+
+					/*
+					 * Send this backend's cancellation info to the frontend.
+					 */
+					pq_beginmessage(&buf, 'K');
+					pq_sendint(&buf, (int32) MyProcPid, 4);
+					pq_sendint(&buf, (int32) MyCancelKey, 4);
+					pq_endmessage(&buf);
+
+					/* Need not flush since ReadyForQuery will do it. */
+					send_ready_for_query = true;
+
+					continue;
+				}
+				else
+				{
+					SessionContext* session = (SessionContext *) ready_client->user_data;
+					SwitchToSession(session);
+				}
+			}
 		}
 
 		/*
@@ -4118,6 +4490,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (ActiveSession && RestartPoolerOnReload)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
@@ -4355,6 +4729,50 @@ PostgresMain(int argc, char *argv[],
 				 * it will fail to be called during other backend-shutdown
 				 * scenarios.
 				 */
+
+				if (SessionPool)
+				{
+					pgsocket sock;
+
+				  CloseSession:
+					sock = PGINVALID_SOCKET;
+
+					/*
+					 * With session pooling, close the session but do not
+					 * terminate the backend, even if it has no more sessions.
+					 * Keeping the backend alive prevents redundant process
+					 * launches when a client repeatedly opens and closes
+					 * connections to the database.  The maximum number of
+					 * backends launched under connection pooling is intended
+					 * to be optimal for this system and workload, so there is
+					 * no reason to reduce it while there are no active
+					 * sessions.
+					 */
+					if (MyProcPort)
+					{
+						elog(DEBUG1, "Closing session %d in backend %d", MyProcPort->sock, MyProcPid);
+
+						pq_getmsgend(&input_message);
+						if (pq_is_reading_msg())
+							pq_endmsgread();
+
+						sock = MyProcPort->sock;
+						MyProcPort->sock = PGINVALID_SOCKET;
+						MyProcPort = NULL;
+					}
+					if (ActiveSession)
+					{
+						StartTransactionCommand();
+						UserAbortTransactionBlock();
+						CommitTransactionCommand();
+
+						ResetCurrentSession();
+					}
+					if (sock != PGINVALID_SOCKET)
+						closesocket(sock);
+
+					/* Need to perform rescheduling to some other session or accept new session */
+					send_ready_for_query = true;
+					goto ChooseSession;
+				}
+				elog(DEBUG1, "Terminate backend %d", MyProcPid);
 				proc_exit(0);
 
 			case 'd':			/* copy data */
@@ -4618,3 +5036,13 @@ disable_statement_timeout(void)
 		stmt_timeout_active = false;
 	}
 }
+
+Datum
+pg_backend_load_average(PG_FUNCTION_ARGS)
+{
+	int			pid = PG_GETARG_INT32(0);
+	PGPROC	   *proc = BackendPidGetProc(pid);
+
+	if (proc == NULL)
+		PG_RETURN_NULL();
+	else
+		PG_RETURN_FLOAT8(proc->nSessionSchedules == 0 ? 0.0 :
+						 (double) proc->nReadySessions / proc->nSessionSchedules);
+}
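To illustrate (not part of the patch): with these counters a DBA could gauge backend saturation using something like SELECT pid, pg_backend_load_average(pid) FROM pg_stat_activity; the result is the average number of sessions found ready per scheduling round, so values well above 1.0 suggest session_pool_size is too small for the workload.
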
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index e95e347..6726195 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -875,6 +875,17 @@ pg_backend_pid(PG_FUNCTION_ARGS)
 	PG_RETURN_INT32(MyProcPid);
 }
 
+Datum
+pg_session_id(PG_FUNCTION_ARGS)
+{
+	char	   *s;
+
+	if (ActiveSession)
+		s = psprintf("%d.%u", MyProcPid, ActiveSession->id);
+	else
+		s = psprintf("%d", MyProcPid);
+
+	PG_RETURN_TEXT_P(CStringGetTextDatum(s));
+}
 
 Datum
 pg_stat_get_backend_pid(PG_FUNCTION_ARGS)
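With pooling active, pg_session_id() returns a value of the form "<backend pid>.<session id>" (for example "12345.7"), and just the PID in a dedicated backend; since the result is built with psprintf() and returned as text, the pg_proc.dat entry for it must declare prorettype => 'text'.
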
diff --git a/src/backend/utils/cache/plancache.c b/src/backend/utils/cache/plancache.c
index 7271b58..6b0cb54 100644
--- a/src/backend/utils/cache/plancache.c
+++ b/src/backend/utils/cache/plancache.c
@@ -61,6 +61,7 @@
 #include "parser/analyze.h"
 #include "parser/parsetree.h"
 #include "storage/lmgr.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/inval.h"
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 6125421..7ce5671 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -78,6 +78,7 @@
 #include "rewrite/rewriteDefine.h"
 #include "rewrite/rowsecurity.h"
 #include "storage/lmgr.h"
+#include "storage/proc.h"
 #include "storage/smgr.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
@@ -1943,6 +1944,13 @@ RelationIdGetRelation(Oid relationId)
 			Assert(rd->rd_isvalid ||
 				   (rd->rd_isnailed && !criticalRelcachesBuilt));
 		}
+		/*
+		 * In case of session pooling, relation descriptor can be constructed by some other session,
+		 * so we need to recheck rd_islocaltemp value
+		 */
+		if (ActiveSession && RELATION_IS_OTHER_TEMP(rd) && isTempOrTempToastNamespace(rd->rd_rel->relnamespace))
+			rd->rd_islocaltemp = true;
+
 		return rd;
 	}
 
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index f7d6617..cf83123 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -128,7 +128,10 @@ int			max_parallel_maintenance_workers = 2;
  * register background workers.
  */
 int			NBuffers = 1000;
+int			SessionPoolSize = 0;
 int			MaxConnections = 90;
+int			MaxSessions = 1000;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
@@ -147,3 +150,6 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+
+bool        RestartPoolerOnReload = false;
+char       *DedicatedDatabases;
diff --git a/src/backend/utils/init/miscinit.c b/src/backend/utils/init/miscinit.c
index 865119d..715429a 100644
--- a/src/backend/utils/init/miscinit.c
+++ b/src/backend/utils/init/miscinit.c
@@ -250,19 +250,6 @@ ChangeToDataDir(void)
  * convenient way to do it.
  * ----------------------------------------------------------------
  */
-static Oid	AuthenticatedUserId = InvalidOid;
-static Oid	SessionUserId = InvalidOid;
-static Oid	OuterUserId = InvalidOid;
-static Oid	CurrentUserId = InvalidOid;
-
-/* We also have to remember the superuser state of some of these levels */
-static bool AuthenticatedUserIsSuperuser = false;
-static bool SessionUserIsSuperuser = false;
-
-static int	SecurityRestrictionContext = 0;
-
-/* We also remember if a SET ROLE is currently active */
-static bool SetRoleIsActive = false;
 
 /*
  * Initialize the basic environment for a postmaster child
@@ -345,13 +332,15 @@ InitStandaloneProcess(const char *argv0)
 void
 SwitchToSharedLatch(void)
 {
+	WaitEventSet *waitset;
 	Assert(MyLatch == &LocalLatchData);
 	Assert(MyProc != NULL);
 
 	MyLatch = &MyProc->procLatch;
 
-	if (FeBeWaitSet)
-		ModifyWaitEvent(FeBeWaitSet, 1, WL_LATCH_SET, MyLatch);
+	waitset = pq_get_current_waitset();
+	if (waitset)
+		ModifyWaitEvent(waitset, 1, WL_LATCH_SET, MyLatch);
 
 	/*
 	 * Set the shared latch as the local one might have been set. This
@@ -364,13 +353,15 @@ SwitchToSharedLatch(void)
 void
 SwitchBackToLocalLatch(void)
 {
+	WaitEventSet *waitset;
 	Assert(MyLatch != &LocalLatchData);
 	Assert(MyProc != NULL && MyLatch == &MyProc->procLatch);
 
 	MyLatch = &LocalLatchData;
 
-	if (FeBeWaitSet)
-		ModifyWaitEvent(FeBeWaitSet, 1, WL_LATCH_SET, MyLatch);
+	waitset = pq_get_current_waitset();
+	if (waitset)
+		ModifyWaitEvent(waitset, 1, WL_LATCH_SET, MyLatch);
 
 	SetLatch(MyLatch);
 }
@@ -434,6 +425,8 @@ SetSessionUserId(Oid userid, bool is_superuser)
 	/* We force the effective user IDs to match, too */
 	OuterUserId = userid;
 	CurrentUserId = userid;
+
+	SysCacheInvalidate(AUTHOID, 0);
 }
 
 /*
diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c
index 5ef6315..f1d6834 100644
--- a/src/backend/utils/init/postinit.c
+++ b/src/backend/utils/init/postinit.c
@@ -62,10 +62,8 @@
 #include "utils/timeout.h"
 #include "utils/tqual.h"
 
-
 static HeapTuple GetDatabaseTuple(const char *dbname);
 static HeapTuple GetDatabaseTupleByOid(Oid dboid);
-static void PerformAuthentication(Port *port);
 static void CheckMyDatabase(const char *name, bool am_superuser, bool override_allow_connections);
 static void InitCommunication(void);
 static void ShutdownPostgres(int code, Datum arg);
@@ -74,7 +72,6 @@ static void LockTimeoutHandler(void);
 static void IdleInTransactionSessionTimeoutHandler(void);
 static bool ThereIsAtLeastOneRole(void);
 static void process_startup_options(Port *port, bool am_superuser);
-static void process_settings(Oid databaseid, Oid roleid);
 
 
 /*** InitPostgres support ***/
@@ -180,7 +177,7 @@ GetDatabaseTupleByOid(Oid dboid)
  *
  * returns: nothing.  Will not return at all if there's any failure.
  */
-static void
+void
 PerformAuthentication(Port *port)
 {
 	/* This should be set already, but let's make sure */
@@ -1126,7 +1123,7 @@ process_startup_options(Port *port, bool am_superuser)
  * We try specific settings for the database/role combination, as well as
  * general for this database and for this user.
  */
-static void
+void
 process_settings(Oid databaseid, Oid roleid)
 {
 	Relation	relsetting;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 0625eff..835dabc 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -59,6 +59,7 @@
 #include "postmaster/autovacuum.h"
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
+#include "postmaster/connpool.h"
 #include "postmaster/postmaster.h"
 #include "postmaster/syslogger.h"
 #include "postmaster/walwriter.h"
@@ -428,6 +429,15 @@ static const struct config_enum_entry password_encryption_options[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
 /*
  * Options for enum values stored in other modules
  */
@@ -587,6 +597,8 @@ const char *const config_group_names[] =
 	gettext_noop("Connections and Authentication / Authentication"),
 	/* CONN_AUTH_SSL */
 	gettext_noop("Connections and Authentication / SSL"),
+	/* CONN_POOLING */
+	gettext_noop("Connections and Authentication / Connection Pooling"),
 	/* RESOURCES */
 	gettext_noop("Resource Usage"),
 	/* RESOURCES_MEM */
@@ -1192,6 +1204,16 @@ static struct config_bool ConfigureNamesBool[] =
 	},
 
 	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
 			NULL
@@ -1998,8 +2020,41 @@ static struct config_int ConfigureNamesInt[] =
 		check_maxconnections, NULL, NULL
 	},
 
+ 	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximum number of client sessions that can be handled by one backend if session pooling is enabled. "
+						 "So the maximum number of client connections is session_pool_size * max_sessions.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
-		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of backends serving client sessions."),
+			gettext_noop("If non-zero, session pooling is used: client connections are redirected to one of the existing backends, "
+						 "whose maximum number is determined by this parameter. "
+						 "Launched backends are never terminated, even when they have no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_pool_workers", PGC_POSTMASTER, CONN_POOLING,
+		 gettext_noop("Sets the number of connection pool workers."),
+		 NULL,
+		},
+		&NumConnPoolWorkers,
+		2, 0, MAX_CONNPOOL_WORKERS,
+		NULL, NULL, NULL
+	},
+
+	{
+		/* see max_connections and max_wal_senders */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
 			NULL
@@ -3340,9 +3395,9 @@ static struct config_string ConfigureNamesString[] =
 
 	{
 		{"temp_tablespaces", PGC_USERSET, CLIENT_CONN_STATEMENT,
-			gettext_noop("Sets the tablespace(s) to use for temporary tables and sort files."),
-			NULL,
-			GUC_LIST_INPUT | GUC_LIST_QUOTE
+			gettext_noop("Sets the tablespace(s) to use for temporary tables and sort files."),
+			NULL,
+			GUC_LIST_INPUT | GUC_LIST_QUOTE
 		},
 		&temp_tablespaces,
 		"",
@@ -3350,6 +3405,16 @@ static struct config_string ConfigureNamesString[] =
 	},
 
 	{
+		{"dedicated_databases", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Set of databases for which session pooling is disabled."),
+			NULL,
+			GUC_LIST_INPUT | GUC_LIST_QUOTE
+		},
+		&DedicatedDatabases,
+		"template0, template1, postgres",
+		NULL, NULL, NULL
+	},
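For illustration, a plausible configuration would be session_pool_size = 10, max_sessions = 1000 and connection_pool_workers = 2, allowing up to 10 * 1000 = 10000 pooled client connections, while connections to the databases listed in dedicated_databases (by default template0, template1 and postgres) keep the traditional one-backend-per-connection behaviour.
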
+
+	{
 		{"dynamic_library_path", PGC_SUSET, CLIENT_CONN_OTHER,
 			gettext_noop("Sets the path for dynamically loadable modules."),
 			gettext_noop("If a dynamically loadable module needs to be opened and "
@@ -4185,6 +4250,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -5346,6 +5421,164 @@ NewGUCNestLevel(void)
 }
 
 /*
+ * Save changed variables after SET command.
+ * It's important to restore variables as we add them to the list.
+ */
+static void
+SaveSessionGUCs(SessionContext *session,
+				struct config_generic *gconf,
+				config_var_value *prior_val)
+{
+	SessionGUC	*sg;
+
+	/* Find needed GUC in active session */
+	for (sg = session->gucs;
+			sg != NULL && sg->var != gconf; sg = sg->next);
+
+	if (sg != NULL)
+		/* already there */
+		return;
+
+	sg = MemoryContextAllocZero(session->memory, sizeof(SessionGUC));
+	sg->var = gconf;
+	sg->saved.extra = prior_val->extra;
+
+	switch (gconf->vartype)
+	{
+		case PGC_BOOL:
+			sg->saved.val.boolval = prior_val->val.boolval;
+			break;
+		case PGC_INT:
+			sg->saved.val.intval = prior_val->val.intval;
+			break;
+		case PGC_REAL:
+			sg->saved.val.realval = prior_val->val.realval;
+			break;
+		case PGC_STRING:
+			sg->saved.val.stringval = prior_val->val.stringval;
+			break;
+		case PGC_ENUM:
+			sg->saved.val.enumval = prior_val->val.enumval;
+			break;
+	}
+
+	if (session->gucs)
+	{
+		SessionGUC	*latest;
+
+		/* Move to end of the list */
+		for (latest = session->gucs;
+				latest->next != NULL; latest = latest->next);
+		latest->next = sg;
+	}
+	else
+		session->gucs = sg;
+}
+
+/*
+ * Set GUCs for this session
+ */
+void
+RestoreSessionGUCs(SessionContext* session)
+{
+	SessionGUC	*sg;
+	bool save_reporting_enabled;
+
+	if (session == NULL)
+		return;
+
+	save_reporting_enabled = reporting_enabled;
+	reporting_enabled = false;
+
+	for (sg = session->gucs; sg != NULL; sg = sg->next)
+	{
+		void	*saved_extra = sg->saved.extra;
+		void	*old_extra = sg->var->extra;
+
+		/* restore extra */
+		sg->var->extra = saved_extra;
+		sg->saved.extra = old_extra;
+
+		/* restore actual values */
+		switch (sg->var->vartype)
+		{
+			case PGC_BOOL:
+			{
+				struct config_bool *conf = (struct config_bool *)sg->var;
+				bool oldval = *conf->variable;
+				*conf->variable = sg->saved.val.boolval;
+				if (conf->assign_hook)
+					conf->assign_hook(sg->saved.val.boolval, saved_extra);
+
+				sg->saved.val.boolval = oldval;
+				break;
+			}
+			case PGC_INT:
+			{
+				struct config_int *conf = (struct config_int*) sg->var;
+				int oldval = *conf->variable;
+				*conf->variable = sg->saved.val.intval;
+				if (conf->assign_hook)
+					conf->assign_hook(sg->saved.val.intval, saved_extra);
+				sg->saved.val.intval = oldval;
+				break;
+			}
+			case PGC_REAL:
+			{
+				struct config_real *conf = (struct config_real*) sg->var;
+				double oldval = *conf->variable;
+				*conf->variable = sg->saved.val.realval;
+				if (conf->assign_hook)
+					conf->assign_hook(sg->saved.val.realval, saved_extra);
+				sg->saved.val.realval = oldval;
+				break;
+			}
+			case PGC_STRING:
+			{
+				struct config_string *conf = (struct config_string*) sg->var;
+				char* oldval = *conf->variable;
+				*conf->variable = sg->saved.val.stringval;
+				if (conf->assign_hook)
+					conf->assign_hook(sg->saved.val.stringval, saved_extra);
+				sg->saved.val.stringval = oldval;
+				break;
+			}
+			case PGC_ENUM:
+			{
+				struct config_enum *conf = (struct config_enum*) sg->var;
+				int oldval = *conf->variable;
+				*conf->variable = sg->saved.val.enumval;
+				if (conf->assign_hook)
+					conf->assign_hook(sg->saved.val.enumval, saved_extra);
+				sg->saved.val.enumval = oldval;
+				break;
+			}
+		}
+	}
+	reporting_enabled = save_reporting_enabled;
+}
+
+/*
+ * Deallocate memory for session GUCs
+ */
+void
+ReleaseSessionGUCs(SessionContext* session)
+{
+	SessionGUC* sg;
+	for (sg = session->gucs; sg != NULL; sg = sg->next)
+	{
+		if (sg->saved.extra)
+			set_extra_field(sg->var, &sg->saved.extra, NULL);
+
+		if (sg->var->vartype == PGC_STRING)
+		{
+			struct config_string* conf = (struct config_string*)sg->var;
+			set_string_field(conf, &sg->saved.val.stringval, NULL);
+		}
+	}
+}
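Because RestoreSessionGUCs() swaps each saved value with the currently active one rather than copying, the same call both deactivates and activates a session's settings; a sketch of the pattern used at session switch (illustration, not patch code):

	RestoreSessionGUCs(oldSession);		/* swap the old session's values out */
	ActiveSession = newSession;
	RestoreSessionGUCs(newSession);		/* swap the new session's values in */
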
+
+/*
  * Do GUC processing at transaction or subtransaction commit or abort, or
  * when exiting a function that has proconfig settings, or when undoing a
  * transient assignment to some GUC variables.  (The name is thus a bit of
@@ -5413,8 +5646,10 @@ AtEOXact_GUC(bool isCommit, int nestLevel)
 					restoreMasked = true;
 				else if (stack->state == GUC_SET)
 				{
-					/* we keep the current active value */
-					discard_stack_value(gconf, &stack->prior);
+					if (ActiveSession)
+						SaveSessionGUCs(ActiveSession, gconf, &stack->prior);
+					else
+						discard_stack_value(gconf, &stack->prior);
 				}
 				else			/* must be GUC_LOCAL */
 					restorePrior = true;
@@ -5440,8 +5675,8 @@ AtEOXact_GUC(bool isCommit, int nestLevel)
 
 					case GUC_SET:
 						/* next level always becomes SET */
-						discard_stack_value(gconf, &stack->prior);
-						if (prev->state == GUC_SET_LOCAL)
+						discard_stack_value(gconf, &stack->prior);
+						if (prev->state == GUC_SET_LOCAL)
 							discard_stack_value(gconf, &prev->masked);
 						prev->state = GUC_SET;
 						break;
diff --git a/src/backend/utils/misc/superuser.c b/src/backend/utils/misc/superuser.c
index fbe83c9..1ebc379 100644
--- a/src/backend/utils/misc/superuser.c
+++ b/src/backend/utils/misc/superuser.c
@@ -24,6 +24,7 @@
 #include "catalog/pg_authid.h"
 #include "utils/inval.h"
 #include "utils/syscache.h"
+#include "storage/proc.h"
 #include "miscadmin.h"
 
 
@@ -33,8 +34,6 @@
  * the status of the last requested roleid.  The cache can be flushed
  * at need by watching for cache update events on pg_authid.
  */
-static Oid	last_roleid = InvalidOid;	/* InvalidOid == cache not valid */
-static bool last_roleid_is_super = false;
 static bool roleid_callback_registered = false;
 
 static void RoleidCallback(Datum arg, int cacheid, uint32 hashvalue);
diff --git a/src/backend/utils/mmgr/portalmem.c b/src/backend/utils/mmgr/portalmem.c
index 04ea32f..a8c27a3 100644
--- a/src/backend/utils/mmgr/portalmem.c
+++ b/src/backend/utils/mmgr/portalmem.c
@@ -23,6 +23,7 @@
 #include "commands/portalcmds.h"
 #include "miscadmin.h"
 #include "storage/ipc.h"
+#include "storage/proc.h"
 #include "utils/builtins.h"
 #include "utils/memutils.h"
 #include "utils/snapmgr.h"
@@ -53,11 +54,14 @@ typedef struct portalhashent
 
 static HTAB *PortalHashTable = NULL;
 
+#define CurrentPortalHashTable() \
+	(ActiveSession ? ActiveSession->portals : PortalHashTable)
+
 #define PortalHashTableLookup(NAME, PORTAL) \
 do { \
 	PortalHashEnt *hentry; \
 	\
-	hentry = (PortalHashEnt *) hash_search(PortalHashTable, \
+	hentry = (PortalHashEnt *) hash_search(CurrentPortalHashTable(), \
 										   (NAME), HASH_FIND, NULL); \
 	if (hentry) \
 		PORTAL = hentry->portal; \
@@ -69,7 +73,7 @@ do { \
 do { \
 	PortalHashEnt *hentry; bool found; \
 	\
-	hentry = (PortalHashEnt *) hash_search(PortalHashTable, \
+	hentry = (PortalHashEnt *) hash_search(CurrentPortalHashTable(), \
 										   (NAME), HASH_ENTER, &found); \
 	if (found) \
 		elog(ERROR, "duplicate portal name"); \
@@ -82,7 +86,7 @@ do { \
 do { \
 	PortalHashEnt *hentry; \
 	\
-	hentry = (PortalHashEnt *) hash_search(PortalHashTable, \
+	hentry = (PortalHashEnt *) hash_search(CurrentPortalHashTable(), \
 										   PORTAL->name, HASH_REMOVE, NULL); \
 	if (hentry == NULL) \
 		elog(WARNING, "trying to delete portal name that does not exist"); \
@@ -90,12 +94,33 @@ do { \
 
 static MemoryContext TopPortalContext = NULL;
 
-
 /* ----------------------------------------------------------------
  *				   public portal interface functions
  * ----------------------------------------------------------------
  */
 
+HTAB *
+CreatePortalsHashTable(MemoryContext mcxt)
+{
+	HASHCTL		ctl;
+	int			flags = HASH_ELEM;
+
+	ctl.keysize = MAX_PORTALNAME_LEN;
+	ctl.entrysize = sizeof(PortalHashEnt);
+
+	if (mcxt)
+	{
+		ctl.hcxt = mcxt;
+		flags |= HASH_CONTEXT;
+	}
+
+	/*
+	 * use PORTALS_PER_USER as a guess of how many hash table entries to
+	 * create, initially
+	 */
+	return hash_create("Portal hash", PORTALS_PER_USER, &ctl, flags);
+}
+
 /*
  * EnablePortalManager
  *		Enables the portal management module at backend startup.
@@ -103,23 +128,13 @@ static MemoryContext TopPortalContext = NULL;
 void
 EnablePortalManager(void)
 {
-	HASHCTL		ctl;
-
 	Assert(TopPortalContext == NULL);
 
 	TopPortalContext = AllocSetContextCreate(TopMemoryContext,
-											 "TopPortalContext",
-											 ALLOCSET_DEFAULT_SIZES);
-
-	ctl.keysize = MAX_PORTALNAME_LEN;
-	ctl.entrysize = sizeof(PortalHashEnt);
+											 "TopPortalContext",
+											 ALLOCSET_DEFAULT_SIZES);
 
-	/*
-	 * use PORTALS_PER_USER as a guess of how many hash table entries to
-	 * create, initially
-	 */
-	PortalHashTable = hash_create("Portal hash", PORTALS_PER_USER,
-								  &ctl, HASH_ELEM);
+	PortalHashTable = CreatePortalsHashTable(NULL);
 }
 
 /*
@@ -602,11 +617,14 @@ PortalHashTableDeleteAll(void)
 {
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab;
 
-	if (PortalHashTable == NULL)
+	htab = CurrentPortalHashTable();
+
+	if (htab == NULL)
 		return;
 
-	hash_seq_init(&status, PortalHashTable);
+	hash_seq_init(&status, htab);
 	while ((hentry = hash_seq_search(&status)) != NULL)
 	{
 		Portal		portal = hentry->portal;
@@ -619,7 +637,7 @@ PortalHashTableDeleteAll(void)
 
 		/* Restart the iteration in case that led to other drops */
 		hash_seq_term(&status);
-		hash_seq_init(&status, PortalHashTable);
+		hash_seq_init(&status, htab);
 	}
 }
 
@@ -672,8 +690,10 @@ PreCommit_Portals(bool isPrepare)
 	bool		result = false;
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab;
 
-	hash_seq_init(&status, PortalHashTable);
+	htab = CurrentPortalHashTable();
+	hash_seq_init(&status, htab);
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
@@ -746,7 +766,7 @@ PreCommit_Portals(bool isPrepare)
 		 * caused a drop of the next portal in the hash chain.
 		 */
 		hash_seq_term(&status);
-		hash_seq_init(&status, PortalHashTable);
+		hash_seq_init(&status, htab);
 	}
 
 	return result;
@@ -763,8 +783,11 @@ AtAbort_Portals(void)
 {
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab;
+
+	htab = CurrentPortalHashTable();
 
-	hash_seq_init(&status, PortalHashTable);
+	hash_seq_init(&status, htab);
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
@@ -840,8 +863,11 @@ AtCleanup_Portals(void)
 {
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab;
 
-	hash_seq_init(&status, PortalHashTable);
+	htab = CurrentPortalHashTable();
+
+	hash_seq_init(&status, htab);
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
@@ -899,8 +925,10 @@ PortalErrorCleanup(void)
 {
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab;
 
-	hash_seq_init(&status, PortalHashTable);
+	htab = CurrentPortalHashTable();
+	hash_seq_init(&status, htab);
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
@@ -927,8 +955,9 @@ AtSubCommit_Portals(SubTransactionId mySubid,
 {
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab = CurrentPortalHashTable();
 
-	hash_seq_init(&status, PortalHashTable);
+	hash_seq_init(&status, htab);
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
@@ -962,8 +991,11 @@ AtSubAbort_Portals(SubTransactionId mySubid,
 {
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab;
+
+	htab = CurrentPortalHashTable();
 
-	hash_seq_init(&status, PortalHashTable);
+	hash_seq_init(&status, htab);
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
@@ -1072,8 +1104,9 @@ AtSubCleanup_Portals(SubTransactionId mySubid)
 {
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab = CurrentPortalHashTable();
 
-	hash_seq_init(&status, PortalHashTable);
+	hash_seq_init(&status, htab);
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
@@ -1161,7 +1194,7 @@ pg_cursor(PG_FUNCTION_ARGS)
 	/* generate junk in short-term context */
 	MemoryContextSwitchTo(oldcontext);
 
-	hash_seq_init(&hash_seq, PortalHashTable);
+	hash_seq_init(&hash_seq, CurrentPortalHashTable());
 	while ((hentry = hash_seq_search(&hash_seq)) != NULL)
 	{
 		Portal		portal = hentry->portal;
@@ -1200,7 +1233,7 @@ ThereAreNoReadyPortals(void)
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
 
-	hash_seq_init(&status, PortalHashTable);
+	hash_seq_init(&status, CurrentPortalHashTable());
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
@@ -1229,8 +1262,11 @@ HoldPinnedPortals(void)
 {
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab;
+
+	htab = CurrentPortalHashTable();
 
-	hash_seq_init(&status, PortalHashTable);
+	hash_seq_init(&status, htab);
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
diff --git a/src/include/catalog/namespace.h b/src/include/catalog/namespace.h
index 0e20237..ddcc3c8 100644
--- a/src/include/catalog/namespace.h
+++ b/src/include/catalog/namespace.h
@@ -144,7 +144,9 @@ extern void GetTempNamespaceState(Oid *tempNamespaceId,
 					  Oid *tempToastNamespaceId);
 extern void SetTempNamespaceState(Oid tempNamespaceId,
 					  Oid tempToastNamespaceId);
-extern void ResetTempTableNamespace(void);
+
+struct SessionContext;
+extern void ResetTempTableNamespace(Oid npc);
 
 extern OverrideSearchPath *GetOverrideSearchPath(MemoryContext context);
 extern OverrideSearchPath *CopyOverrideSearchPath(OverrideSearchPath *path);
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index a146510..0ad559c 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5202,6 +5202,9 @@
 { oid => '2026', descr => 'statistics: current backend PID',
   proname => 'pg_backend_pid', provolatile => 's', proparallel => 'r',
   prorettype => 'int4', proargtypes => '', prosrc => 'pg_backend_pid' },
+{ oid => '3436', descr => 'statistics: current session ID',
+  proname => 'pg_session_id', provolatile => 's', proparallel => 'r',
+  prorettype => 'text', proargtypes => '', prosrc => 'pg_session_id' },
 { oid => '1937', descr => 'statistics: PID of backend',
   proname => 'pg_stat_get_backend_pid', provolatile => 's', proparallel => 'r',
   prorettype => 'int4', proargtypes => 'int4',
@@ -10206,4 +10209,11 @@
   proisstrict => 'f', prorettype => 'bool', proargtypes => 'oid int4 int4 any',
   proargmodes => '{i,i,i,v}', prosrc => 'satisfies_hash_partition' },
 
+
+# Builtin connection pool functions
+{ oid => '6107', descr => 'Session pool backend load average',
+  proname => 'pg_backend_load_average',
+  provolatile => 'v', prorettype => 'float8', proargtypes => 'int4',
+  prosrc => 'pg_backend_load_average' },
+
 ]
diff --git a/src/include/commands/prepare.h b/src/include/commands/prepare.h
index ffec029..fdf1854 100644
--- a/src/include/commands/prepare.h
+++ b/src/include/commands/prepare.h
@@ -56,5 +56,6 @@ extern TupleDesc FetchPreparedStatementResultDesc(PreparedStatement *stmt);
 extern List *FetchPreparedStatementTargetList(PreparedStatement *stmt);
 
 extern void DropAllPreparedStatements(void);
+extern void DropSessionPreparedStatements(uint32 sessionId);
 
 #endif							/* PREPARE_H */
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index ef5528c..bb6d359 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -66,6 +66,7 @@ typedef struct
 #include "datatype/timestamp.h"
 #include "libpq/hba.h"
 #include "libpq/pqcomm.h"
+#include "storage/latch.h"
 
 
 typedef enum CAC_state
@@ -139,6 +140,12 @@ typedef struct Port
 	List	   *guc_options;
 
 	/*
+	 * libpq communication state
+	 */
+	void			*pqcomm_state;
+	WaitEventSet	*pqcomm_waitset;
+
+	/*
 	 * Information that needs to be held during the authentication cycle.
 	 */
 	HbaLine    *hba;
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 36baf6b..10ba28b 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -60,7 +60,12 @@ extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
 extern void RemoveSocketFiles(void);
-extern void pq_init(void);
+extern void *pq_init(MemoryContext mcxt);
+extern void pq_reset(void);
+extern void pq_set_current_state(void *state, Port *port, WaitEventSet *set);
+extern WaitEventSet *pq_get_current_waitset(void);
+extern WaitEventSet *pq_create_backend_event_set(MemoryContext mcxt,
+												 Port *port, bool onlySock);
 extern int	pq_getbytes(char *s, size_t len);
 extern int	pq_getstring(StringInfo s);
 extern void pq_startmsgread(void);
@@ -71,6 +76,7 @@ extern int	pq_getbyte(void);
 extern int	pq_peekbyte(void);
 extern int	pq_getbyte_if_available(unsigned char *c);
 extern int	pq_putbytes(const char *s, size_t len);
+extern int  pq_available_bytes(void);
 
 /*
  * prototypes for functions in be-secure.c
@@ -96,8 +102,6 @@ extern ssize_t secure_raw_write(Port *port, const void *ptr, size_t len);
 
 extern bool ssl_loaded_verify_locations;
 
-extern WaitEventSet *FeBeWaitSet;
-
 /* GUCs */
 extern char *SSLCipherSuites;
 extern char *SSLECDHCurve;
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index e167ee8..5582542 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -26,6 +26,7 @@
 #include <signal.h>
 
 #include "pgtime.h"				/* for pg_time_t */
+#include "utils/palloc.h"
 
 
 #define InvalidPid				(-1)
@@ -150,6 +151,9 @@ extern PGDLLIMPORT bool IsUnderPostmaster;
 extern PGDLLIMPORT bool IsBackgroundWorker;
 extern PGDLLIMPORT bool IsBinaryUpgrade;
 
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+extern PGDLLIMPORT char* DedicatedDatabases;
+
 extern PGDLLIMPORT bool ExitOnAnyError;
 
 extern PGDLLIMPORT char *DataDir;
@@ -161,7 +165,19 @@ extern PGDLLIMPORT int MaxConnections;
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int SessionSchedule;
+
 extern PGDLLIMPORT int MyProcPid;
+extern PGDLLIMPORT uint32 MySessionId;
 extern PGDLLIMPORT pg_time_t MyStartTime;
 extern PGDLLIMPORT struct Port *MyProcPort;
 extern PGDLLIMPORT struct Latch *MyLatch;
@@ -335,6 +351,9 @@ extern void SwitchBackToLocalLatch(void);
 extern bool superuser(void);	/* current user is superuser */
 extern bool superuser_arg(Oid roleid);	/* given user is superuser */
 
+/* in utils/init/postinit.c */
+extern void process_settings(Oid databaseid, Oid roleid);
+
 
 /*****************************************************************************
  *	  pmod.h --																 *
@@ -425,6 +444,7 @@ extern void InitializeMaxBackends(void);
 extern void InitPostgres(const char *in_dbname, Oid dboid, const char *username,
 			 Oid useroid, char *out_dbname, bool override_allow_connections);
 extern void BaseInit(void);
+extern void PerformAuthentication(struct Port *port);
 
 /* in utils/init/miscinit.c */
 extern bool IgnoreSystemIndexes;
@@ -445,6 +465,9 @@ extern void process_session_preload_libraries(void);
 extern void pg_bindtextdomain(const char *domain);
 extern bool has_rolreplication(Oid roleid);
 
+extern void *GetLocalUserIdStateCopy(MemoryContext mcxt);
+extern void SetCurrentUserIdState(void *userId);
+
 /* in access/transam/xlog.c */
 extern bool BackupInProgress(void);
 extern void CancelBackup(void);
diff --git a/src/include/port.h b/src/include/port.h
index 74a9dc4..ac53f3c 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index b398cd3..01971bc 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -447,6 +447,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -456,6 +457,7 @@ int			pgwin32_connect(SOCKET s, const struct sockaddr *name, int namelen);
 int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptfds, const struct timeval *timeout);
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 const char *pgwin32_socket_strerror(int err);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
diff --git a/src/include/postmaster/connpool.h b/src/include/postmaster/connpool.h
new file mode 100644
index 0000000..45aa37c
--- /dev/null
+++ b/src/include/postmaster/connpool.h
@@ -0,0 +1,54 @@
+#ifndef CONN_POOL_H
+#define CONN_POOL_H
+
+#include "port.h"
+#include "libpq/libpq-be.h"
+
+#define MAX_CONNPOOL_WORKERS	100
+
+typedef enum
+{
+	CPW_FREE,
+	CPW_NEW_SOCKET,
+	CPW_PROCESSED
+} ConnPoolWorkerState;
+
+typedef struct ConnPoolWorker
+{
+	Port	   *port;		/* port in the pool */
+	int			pipes[2];	/* 0 for sending, 1 for receiving */
+
+	/* The communication procedure (postmaster side):
+	 * ) find a worker with state == CPW_FREE
+	 * ) assign the client socket
+	 * ) add the pipe to the wait set (if it's not there yet)
+	 * ) wake up the worker
+	 * ) process data from the worker until state != CPW_PROCESSED
+	 * ) set state to CPW_FREE
+	 * ) fork, or send the socket and the data to a backend.
+	 *
+	 * bgworker side:
+	 * ) wakes up
+	 * ) checks the state
+	 * ) if state is CPW_NEW_SOCKET, reads the startup packet from the client
+	 *   socket and sends the data through the pipe to the postmaster
+	 * ) sets state to CPW_PROCESSED.
+	 */
+	volatile ConnPoolWorkerState	state;
+	volatile CAC_state				cac_state;
+	pid_t							pid;
+	Latch						   *latch;
+} ConnPoolWorker;
+
+extern Size ConnPoolShmemSize(void);
+extern void ConnectionPoolWorkersInit(void);
+extern void RegisterConnPoolWorkers(void);
+extern void StartupPacketReaderMain(Datum arg);
+
+/* global variables */
+extern int NumConnPoolWorkers;
+extern ConnPoolWorker *ConnPoolWorkers;
+
+#endif
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index 1877eef..1f16836 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -62,6 +62,10 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+struct Port;
+extern int	ProcessStartupPacket(struct Port *port, bool SSLdone,
+						MemoryContext memctx, int errlevel);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/storage/ipc.h b/src/include/storage/ipc.h
index 6a05a89..9cddaf9 100644
--- a/src/include/storage/ipc.h
+++ b/src/include/storage/ipc.h
@@ -72,6 +72,7 @@ extern void on_shmem_exit(pg_on_exit_callback function, Datum arg);
 extern void before_shmem_exit(pg_on_exit_callback function, Datum arg);
 extern void cancel_before_shmem_exit(pg_on_exit_callback function, Datum arg);
 extern void on_exit_reset(void);
+extern void on_shmem_exit_reset(void);
 
 /* ipci.c */
 extern PGDLLIMPORT shmem_startup_hook_type shmem_startup_hook;
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index fd8735b..b7902ea 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -176,6 +176,8 @@ extern int WaitLatch(volatile Latch *latch, int wakeEvents, long timeout,
 extern int WaitLatchOrSocket(volatile Latch *latch, int wakeEvents,
 				  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index cb613c8..29a4de2 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -21,6 +21,7 @@
 #include "storage/lock.h"
 #include "storage/pg_sema.h"
 #include "storage/proclist_types.h"
+#include "utils/guc_tables.h"
 
 /*
  * Each backend advertises up to PGPROC_MAX_CACHED_SUBXIDS TransactionIds
@@ -203,6 +204,10 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	int         nFinishedSessions;  /* number of sessions finished by this backend when connection pooling is on */
+	uint64      nSessionSchedules;  /* number of session reschedules performed by this backend (calls of WaitEventSetWait(SessionPool->waitEvents)) */
+	uint64      nReadySessions;     /* total number of ready sessions returned by all WaitEventSetWait(SessionPool->waitEvents) calls */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
@@ -276,6 +281,58 @@ extern PGDLLIMPORT PROC_HDR *ProcGlobal;
 
 extern PGPROC *PreparedXactProcs;
 
+typedef struct SessionGUC
+{
+	struct SessionGUC	   *next;
+	config_var_value		saved;
+	struct config_generic  *var;
+} SessionGUC;
+
+/*
+ * Information associated with client session.
+ */
+typedef struct SessionContext
+{
+	uint32          magic;              /* Magic to validate content of session object */
+	uint32			id;					/* session identifier, unique across many backends */
+	/* Memory context used for global session data (instead of TopMemoryContext) */
+	MemoryContext	memory;
+	struct Port*	port;				/* connection port */
+	Oid				tempNamespace;		/* temporary namespace */
+	Oid				tempToastNamespace;	/* temporary toast namespace */
+	SessionGUC	   *gucs;				/* session local GUCs */
+	WaitEventSet   *eventSet;			/* Wait set for the session */
+	int             eventPos;           /* Position of wait socket event for this session */
+	HTAB		   *prepared_queries;	/* Session prepared queries */
+	HTAB		   *portals;			/* Session portals */
+	void		   *userId;				/* Current role state */
+	#define SessionVariable(type,name,init)  type name;
+	#include "storage/sessionvars.h"
+} SessionContext;
+
+#define SessionVariable(type,name,init)  extern type name;
+#include "storage/sessionvars.h"
+
+typedef struct Port Port;
+typedef struct BackendSessionPool
+{
+	MemoryContext	mcxt;
+
+	WaitEventSet   *waitEvents;		/* Set of all sessions sockets */
+	uint32			sessionCount;   /* Number of sessions */
+
+	/*
+	 * Reference to the original port created when this backend was
+	 * launched. The session using this port may already be terminated,
+	 * but since the port is allocated in TopMemoryContext, its content is
+	 * still valid and is used as a template for the ports of new sessions.
+	 */
+	Port		   *backendPort;
+} BackendSessionPool;
+
+extern PGDLLIMPORT SessionContext		*ActiveSession;
+extern PGDLLIMPORT BackendSessionPool	*SessionPool;
+
 /* Accessor for PGPROC given a pgprocno. */
 #define GetPGProcByNumber(n) (&ProcGlobal->allProcs[(n)])
 
@@ -295,7 +352,7 @@ extern int	StatementTimeout;
 extern int	LockTimeout;
 extern int	IdleInTransactionSessionTimeout;
 extern bool log_lock_waits;
-
+extern bool IsDedicatedBackend;
 
 /*
  * Function Prototypes
@@ -321,6 +378,7 @@ extern void ProcLockWakeup(LockMethod lockMethodTable, LOCK *lock);
 extern void CheckDeadLockAlert(void);
 extern bool IsWaitingForLock(void);
 extern void LockErrorCleanup(void);
+extern uint32 CreateSessionId(void);
 
 extern void ProcWaitForSignal(uint32 wait_event_info);
 extern void ProcSendSignal(int pid);
diff --git a/src/include/storage/sessionvars.h b/src/include/storage/sessionvars.h
new file mode 100644
index 0000000..690c56f
--- /dev/null
+++ b/src/include/storage/sessionvars.h
@@ -0,0 +1,13 @@
+/* SessionVariable(type,name,init) */
+SessionVariable(Oid, AuthenticatedUserId, InvalidOid)
+SessionVariable(Oid, SessionUserId, InvalidOid)
+SessionVariable(Oid, OuterUserId, InvalidOid)
+SessionVariable(Oid, CurrentUserId, InvalidOid)
+SessionVariable(bool, AuthenticatedUserIsSuperuser, false)
+SessionVariable(bool, SessionUserIsSuperuser, false)
+SessionVariable(int, SecurityRestrictionContext, 0)
+SessionVariable(bool, SetRoleIsActive, false)
+SessionVariable(Oid, last_roleid, InvalidOid)
+SessionVariable(bool, last_roleid_is_super, false)
+SessionVariable(struct SeqTableData*, last_used_seq, NULL)
+#undef SessionVariable
diff --git a/src/include/tcop/tcopprot.h b/src/include/tcop/tcopprot.h
index 63b4e48..51d130c 100644
--- a/src/include/tcop/tcopprot.h
+++ b/src/include/tcop/tcopprot.h
@@ -31,9 +31,11 @@
 #define STACK_DEPTH_SLOP (512 * 1024L)
 
 extern CommandDest whereToSendOutput;
+
 extern PGDLLIMPORT const char *debug_query_string;
 extern int	max_stack_depth;
 extern int	PostAuthDelay;
+extern pgsocket SessionPoolSock;
 
 /* GUC-configurable parameters */
 
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index f462eab..338f0ec 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -395,6 +395,12 @@ extern Size EstimateGUCStateSpace(void);
 extern void SerializeGUCState(Size maxsize, char *start_address);
 extern void RestoreGUCState(void *gucstate);
 
+/* Session pooling support functions */
+struct SessionContext;
+extern void RestoreSessionGUCs(struct SessionContext* session);
+extern void ReleaseSessionGUCs(struct SessionContext* session);
+
+
 /* Support for messages reported from GUC check hooks */
 
 extern PGDLLIMPORT char *GUC_check_errmsg_string;
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index 668d9ef..e3f2e5a 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/include/utils/portal.h b/src/include/utils/portal.h
index e4929b9..69ac10d 100644
--- a/src/include/utils/portal.h
+++ b/src/include/utils/portal.h
@@ -202,6 +202,7 @@ typedef struct PortalData
 
 
 /* Prototypes for functions in utils/mmgr/portalmem.c */
+HTAB *CreatePortalsHashTable(MemoryContext mcxt);
 extern void EnablePortalManager(void);
 extern bool PreCommit_Portals(bool isPrepare);
 extern void AtAbort_Portals(void);
#131Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Konstantin Knizhnik (#130)
1 attachment(s)
Re: Built-in connection pooling

On 25.10.2018 11:53, Konstantin Knizhnik wrote:

I continue to work on the built-in connection pooler.
I have implemented three strategies for distributing sessions between
session pool workers (a sketch of the selection logic is given below):
- random
- round-robin
- load balancing (choose the backend with the smallest wait queue)

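For readers who do not want to dig through the patch, the selection
logic amounts to roughly the following (a minimal sketch, not the code
from the patch; SessionSchedule, BackendInfo and queue_len are
illustrative names):

#include <stdlib.h>

typedef enum
{
	SCHEDULE_ROUND_ROBIN,
	SCHEDULE_RANDOM,
	SCHEDULE_LOAD_BALANCING
} SessionSchedule;

typedef struct
{
	int			queue_len;		/* sessions currently waiting on this backend */
} BackendInfo;

/* Pick a backend from the pool according to the configured policy. */
static int
choose_backend(BackendInfo *pool, int n_backends, SessionSchedule policy)
{
	static int	rr = 0;
	int			i,
				best = 0;

	switch (policy)
	{
		case SCHEDULE_ROUND_ROBIN:
			return rr++ % n_backends;
		case SCHEDULE_RANDOM:
			return random() % n_backends;
		case SCHEDULE_LOAD_BALANCING:
			for (i = 1; i < n_backends; i++)
				if (pool[i].queue_len < pool[best].queue_len)
					best = i;
			return best;
	}
	return 0;					/* not reached */
}
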
This still does not fix the main drawback of the current implementation
of the built-in pooler: a long transaction or query can block all other
sessions scheduled to the same backend. To prevent such situations we
would have to migrate sessions to some other (idle) backends.
Unfortunately a session has to take a lot of "luggage" with it:
serialized GUCs, prepared statements and, worst of all, temporary
tables. The first two can in principle be handled, but what to do with
temporary tables is unclear.

Frankly speaking, I think the implementation of temporary tables in
Postgres has to be rewritten in any case. They cause catalog bloat,
cannot be used in parallel queries, and so on.
Perhaps, if the temporary table implementation is rewritten, it can be
married better with the built-in connection pooler.
But right now sessions cannot be migrated.

Attached is an updated version of the built-in connection pooler that
fixes an issue with exhaustion of the open file descriptor limit.
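
For reference, the descriptor hand-off itself is the usual SCM_RIGHTS
exchange (see send_sock.c in the patch). The point relevant to
descriptor exhaustion is that sendmsg() with SCM_RIGHTS installs a
duplicate descriptor in the receiving process, so the sender must close
its own copy afterwards, or it accumulates one open descriptor per
pooled session. Roughly (a sketch, not code from the patch; Unix only,
error handling trimmed, assumes the usual PostgreSQL headers plus
<unistd.h>; chan is one end of a socketpair() shared with the backend):

/* Postmaster side: hand the accepted client socket to a pooled backend. */
static void
hand_off_client(pgsocket chan, pgsocket client_sock, pid_t backend_pid)
{
	if (pg_send_sock(chan, client_sock, backend_pid) != 0)
		elog(LOG, "could not pass client socket to backend");

	/* The receiver now owns a duplicate; drop our copy. */
	close(client_sock);
}

/* Backend side: adopt the socket for the new session. */
static pgsocket
adopt_client(pgsocket chan)
{
	pgsocket	sock = pg_recv_sock(chan);

	if (sock == PGINVALID_SOCKET)
		elog(LOG, "could not receive client socket from postmaster");
	return sock;
}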

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

session_pool-12.patch (text/x-patch)
diff --git a/contrib/test_decoding/sql/messages.sql b/contrib/test_decoding/sql/messages.sql
index cf3f773..14c4163 100644
--- a/contrib/test_decoding/sql/messages.sql
+++ b/contrib/test_decoding/sql/messages.sql
@@ -23,6 +23,8 @@ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'for
 
 -- test db filtering
 \set prevdb :DBNAME
+show session_pool_size;
+show session_pool_ports;
 \c template1
 
 SELECT 'otherdb1' FROM pg_logical_emit_message(false, 'test', 'otherdb1');
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index bee4afb..061b67a 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -703,6 +703,125 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-max-sessions" xreflabel="max_sessions">
+      <term><varname>max_sessions</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>max_sessions</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          The maximum number of client sessions that can be handled by
+          one backend when session pooling is switched on.
+          This parameter does not add any memory or CPU overhead, so
+          specifying a large <varname>max_sessions</varname> value
+          does not affect performance.
+          If the <varname>max_sessions</varname> limit is reached,
+          the backend stops accepting connections. Until one of the
+          connections is terminated, attempts to connect to this
+          backend result in an error.
+        </para>
+        <para>
+          The default value is 1000. This parameter can only be set at server start.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-pool-size" xreflabel="session_pool_size">
+      <term><varname>session_pool_size</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>session_pool_size</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Enables session pooling and defines the maximum number of
+          backends that can be used by client sessions for each database/user combination.
+          Launched backends are never terminated even if there are no active sessions.
+        </para>
+        <para>
+          The default value is zero, so session pooling is disabled.
+        </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-connection-pool-workers" xreflabel="connection_pool_workers">
+      <term><varname>connection_pool_workers</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>connection_pool_workers</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Number of connection listeners used to read client startup packets.
+          If session pooling is enabled, the <productname>PostgreSQL</productname>
+          server redirects all client startup packets to a connection listener.
+          The listener determines the database and user that the client needs
+          to access and redirects the connection to an appropriate backend,
+          which is selected from the pool using the round-robin algorithm.
+          This approach helps avoid server slowdown when a client tries
+          to connect via a slow or unreliable network.
+        </para>
+        <para>
+          The default value is 2.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-dedicated-databases" xreflabel="dedicated_databases">
+      <term><varname>dedicated_databases</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>dedicated_databases</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the list of databases for which session pooling is disabled.
+          For such databases, a separate backend is forked for each connection.
+          By default, session pooling is disabled for <literal>template0</literal>,
+          <literal>template1</literal>, and <literal>postgres</literal> databases.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-restart-pooler-on-reload" xreflabel="restart_pooler_on_reload">
+      <term><varname>restart_pooler_on_reload</varname> (<type>boolean</type>)
+      <indexterm>
+       <primary><varname>restart_pooler_on_reload</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Restarts session pool workers when <function>pg_reload_conf()</function> is called.
+          The default value is <literal>false</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-session-schedule" xreflabel="session_schedule">
+      <term><varname>session_schedule</varname> (<type>enum</type>)
+      <indexterm>
+       <primary><varname>session_schedule</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+        <para>
+          Specifies the scheduling policy for assigning sessions to backends
+          when connection pooling is enabled. The default policy is <literal>round-robin</literal>.
+        </para>
+        <para>
+          With the <literal>round-robin</literal> policy, the postmaster cyclically distributes sessions between the session pool backends.
+        </para>
+        <para>
+          With the <literal>random</literal> policy, the postmaster chooses a random backend from the session pool.
+        </para>
+        <para>
+          With the <literal>load-balancing</literal> policy, the postmaster chooses the backend with the lowest load average.
+          The load average of a backend is estimated by the number of ready events at each rescheduling iteration.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories">
       <term><varname>unix_socket_directories</varname> (<type>string</type>)
       <indexterm>
diff --git a/src/backend/catalog/namespace.c b/src/backend/catalog/namespace.c
index 5d13e6a..5a93c7e 100644
--- a/src/backend/catalog/namespace.c
+++ b/src/backend/catalog/namespace.c
@@ -178,7 +178,6 @@ static List *overrideStack = NIL;
  * committed its creation, depending on whether myTempNamespace is valid.
  */
 static Oid	myTempNamespace = InvalidOid;
-
 static Oid	myTempToastNamespace = InvalidOid;
 
 static SubTransactionId myTempNamespaceSubID = InvalidSubTransactionId;
@@ -193,6 +192,7 @@ char	   *namespace_search_path = NULL;
 /* Local functions */
 static void recomputeNamespacePath(void);
 static void InitTempTableNamespace(void);
+static Oid  GetTempTableNamespace(void);
 static void RemoveTempRelations(Oid tempNamespaceId);
 static void RemoveTempRelationsCallback(int code, Datum arg);
 static void NamespaceCallback(Datum arg, int cacheid, uint32 hashvalue);
@@ -460,9 +460,7 @@ RangeVarGetCreationNamespace(const RangeVar *newRelation)
 		if (strcmp(newRelation->schemaname, "pg_temp") == 0)
 		{
 			/* Initialize temp namespace if first time through */
-			if (!OidIsValid(myTempNamespace))
-				InitTempTableNamespace();
-			return myTempNamespace;
+			return GetTempTableNamespace();
 		}
 		/* use exact schema given */
 		namespaceId = get_namespace_oid(newRelation->schemaname, false);
@@ -471,9 +469,7 @@ RangeVarGetCreationNamespace(const RangeVar *newRelation)
 	else if (newRelation->relpersistence == RELPERSISTENCE_TEMP)
 	{
 		/* Initialize temp namespace if first time through */
-		if (!OidIsValid(myTempNamespace))
-			InitTempTableNamespace();
-		return myTempNamespace;
+		return GetTempTableNamespace();
 	}
 	else
 	{
@@ -482,8 +478,7 @@ RangeVarGetCreationNamespace(const RangeVar *newRelation)
 		if (activeTempCreationPending)
 		{
 			/* Need to initialize temp namespace */
-			InitTempTableNamespace();
-			return myTempNamespace;
+			return GetTempTableNamespace();
 		}
 		namespaceId = activeCreationNamespace;
 		if (!OidIsValid(namespaceId))
@@ -2921,9 +2916,7 @@ LookupCreationNamespace(const char *nspname)
 	if (strcmp(nspname, "pg_temp") == 0)
 	{
 		/* Initialize temp namespace if first time through */
-		if (!OidIsValid(myTempNamespace))
-			InitTempTableNamespace();
-		return myTempNamespace;
+		return GetTempTableNamespace();
 	}
 
 	namespaceId = get_namespace_oid(nspname, false);
@@ -2986,9 +2979,7 @@ QualifiedNameGetCreationNamespace(List *names, char **objname_p)
 		if (strcmp(schemaname, "pg_temp") == 0)
 		{
 			/* Initialize temp namespace if first time through */
-			if (!OidIsValid(myTempNamespace))
-				InitTempTableNamespace();
-			return myTempNamespace;
+			return GetTempTableNamespace();
 		}
 		/* use exact schema given */
 		namespaceId = get_namespace_oid(schemaname, false);
@@ -3001,8 +2992,7 @@ QualifiedNameGetCreationNamespace(List *names, char **objname_p)
 		if (activeTempCreationPending)
 		{
 			/* Need to initialize temp namespace */
-			InitTempTableNamespace();
-			return myTempNamespace;
+			return GetTempTableNamespace();
 		}
 		namespaceId = activeCreationNamespace;
 		if (!OidIsValid(namespaceId))
@@ -3254,16 +3244,28 @@ int
 GetTempNamespaceBackendId(Oid namespaceId)
 {
 	int			result;
-	char	   *nspname;
+	char	   *nspname,
+			   *addlevel;
 
 	/* See if the namespace name starts with "pg_temp_" or "pg_toast_temp_" */
 	nspname = get_namespace_name(namespaceId);
 	if (!nspname)
 		return InvalidBackendId;	/* no such namespace? */
 	if (strncmp(nspname, "pg_temp_", 8) == 0)
-		result = atoi(nspname + 8);
+	{
+		/* check for session id */
+		if ((addlevel = strstr(nspname + 8, "_")) != NULL)
+			result = atoi(addlevel + 1);
+		else
+			result = atoi(nspname + 8);
+	}
 	else if (strncmp(nspname, "pg_toast_temp_", 14) == 0)
-		result = atoi(nspname + 14);
+	{
+		if ((addlevel = strstr(nspname + 14, "_")) != NULL)
+			result = atoi(addlevel + 1);
+		else
+			result = atoi(nspname + 14);
+	}
 	else
 		result = InvalidBackendId;
 	pfree(nspname);
@@ -3309,8 +3311,11 @@ void
 SetTempNamespaceState(Oid tempNamespaceId, Oid tempToastNamespaceId)
 {
 	/* Worker should not have created its own namespaces ... */
-	Assert(myTempNamespace == InvalidOid);
-	Assert(myTempToastNamespace == InvalidOid);
+	if (!ActiveSession)
+	{
+		Assert(myTempNamespace == InvalidOid);
+		Assert(myTempToastNamespace == InvalidOid);
+	}
 	Assert(myTempNamespaceSubID == InvalidSubTransactionId);
 
 	/* Assign same namespace OIDs that leader has */
@@ -3830,6 +3835,24 @@ recomputeNamespacePath(void)
 	list_free(oidlist);
 }
 
+static Oid
+GetTempTableNamespace(void)
+{
+	if (ActiveSession)
+	{
+		if (!OidIsValid(ActiveSession->tempNamespace))
+			InitTempTableNamespace();
+		else
+			myTempNamespace = ActiveSession->tempNamespace;
+	}
+	else
+	{
+		if (!OidIsValid(myTempNamespace))
+			InitTempTableNamespace();
+	}
+	return myTempNamespace;
+}
+
 /*
  * InitTempTableNamespace
  *		Initialize temp table namespace on first use in a particular backend
@@ -3841,8 +3864,6 @@ InitTempTableNamespace(void)
 	Oid			namespaceId;
 	Oid			toastspaceId;
 
-	Assert(!OidIsValid(myTempNamespace));
-
 	/*
 	 * First, do permission check to see if we are authorized to make temp
 	 * tables.  We use a nonstandard error message here since "databasename:
@@ -3881,7 +3902,12 @@ InitTempTableNamespace(void)
 				(errcode(ERRCODE_READ_ONLY_SQL_TRANSACTION),
 				 errmsg("cannot create temporary tables during a parallel operation")));
 
-	snprintf(namespaceName, sizeof(namespaceName), "pg_temp_%d", MyBackendId);
+	if (ActiveSession)
+		snprintf(namespaceName, sizeof(namespaceName), "pg_temp_%d_%u",
+					ActiveSession->id, MyBackendId);
+	else
+		snprintf(namespaceName, sizeof(namespaceName), "pg_temp_%d",
+					MyBackendId);
 
 	namespaceId = get_namespace_oid(namespaceName, true);
 	if (!OidIsValid(namespaceId))
@@ -3913,8 +3939,12 @@ InitTempTableNamespace(void)
 	 * it. (We assume there is no need to clean it out if it does exist, since
 	 * dropping a parent table should make its toast table go away.)
 	 */
-	snprintf(namespaceName, sizeof(namespaceName), "pg_toast_temp_%d",
-			 MyBackendId);
+	if (ActiveSession)
+		snprintf(namespaceName, sizeof(namespaceName), "pg_toast_temp_%d_%u",
+					ActiveSession->id, MyBackendId);
+	else
+		snprintf(namespaceName, sizeof(namespaceName), "pg_toast_temp_%u",
+					MyBackendId);
 
 	toastspaceId = get_namespace_oid(namespaceName, true);
 	if (!OidIsValid(toastspaceId))
@@ -3945,6 +3975,11 @@ InitTempTableNamespace(void)
 	 */
 	MyProc->tempNamespaceId = namespaceId;
 
+	if (ActiveSession)
+	{
+		ActiveSession->tempNamespace = namespaceId;
+		ActiveSession->tempToastNamespace = toastspaceId;
+	}
 	/* It should not be done already. */
 	AssertState(myTempNamespaceSubID == InvalidSubTransactionId);
 	myTempNamespaceSubID = GetCurrentSubTransactionId();
@@ -3974,6 +4009,11 @@ AtEOXact_Namespace(bool isCommit, bool parallel)
 		{
 			myTempNamespace = InvalidOid;
 			myTempToastNamespace = InvalidOid;
+			if (ActiveSession)
+			{
+				ActiveSession->tempNamespace = InvalidOid;
+				ActiveSession->tempToastNamespace = InvalidOid;
+			}
 			baseSearchPathValid = false;	/* need to rebuild list */
 
 			/*
@@ -4121,13 +4161,16 @@ RemoveTempRelations(Oid tempNamespaceId)
 static void
 RemoveTempRelationsCallback(int code, Datum arg)
 {
-	if (OidIsValid(myTempNamespace))	/* should always be true */
+	Oid		tempNamespace = ActiveSession ?
+		ActiveSession->tempNamespace : myTempNamespace;
+
+	if (OidIsValid(tempNamespace))	/* should always be true */
 	{
 		/* Need to ensure we have a usable transaction. */
 		AbortOutOfAnyTransaction();
 		StartTransactionCommand();
 
-		RemoveTempRelations(myTempNamespace);
+		RemoveTempRelations(tempNamespace);
 
 		CommitTransactionCommand();
 	}
@@ -4137,10 +4180,19 @@ RemoveTempRelationsCallback(int code, Datum arg)
  * Remove all temp tables from the temporary namespace.
  */
 void
-ResetTempTableNamespace(void)
+ResetTempTableNamespace(Oid tempNamespaceId)
 {
-	if (OidIsValid(myTempNamespace))
-		RemoveTempRelations(myTempNamespace);
+	if (OidIsValid(tempNamespaceId))
+	{
+		AbortOutOfAnyTransaction();
+		StartTransactionCommand();
+		RemoveTempRelations(tempNamespaceId);
+		CommitTransactionCommand();
+	}
+	else
+		/* global */
+		if (OidIsValid(myTempNamespace))
+			RemoveTempRelations(myTempNamespace);
 }
 
 
diff --git a/src/backend/catalog/pg_db_role_setting.c b/src/backend/catalog/pg_db_role_setting.c
index e123691..23ff527 100644
--- a/src/backend/catalog/pg_db_role_setting.c
+++ b/src/backend/catalog/pg_db_role_setting.c
@@ -16,6 +16,7 @@
 #include "catalog/indexing.h"
 #include "catalog/objectaccess.h"
 #include "catalog/pg_db_role_setting.h"
+#include "storage/proc.h"
 #include "utils/fmgroids.h"
 #include "utils/rel.h"
 #include "utils/tqual.h"
diff --git a/src/backend/catalog/storage.c b/src/backend/catalog/storage.c
index 5df4382..f57a950 100644
--- a/src/backend/catalog/storage.c
+++ b/src/backend/catalog/storage.c
@@ -24,6 +24,7 @@
 #include "access/xlog.h"
 #include "access/xloginsert.h"
 #include "access/xlogutils.h"
+#include "catalog/namespace.h"
 #include "catalog/storage.h"
 #include "catalog/storage_xlog.h"
 #include "storage/freespace.h"
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index 9bc67ce..3c90f8d 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -2447,7 +2447,7 @@ CopyFrom(CopyState cstate)
 		 * registers the snapshot it uses.
 		 */
 		InvalidateCatalogSnapshot();
-		if (!ThereAreNoPriorRegisteredSnapshots() || !ThereAreNoReadyPortals())
+		if (!ThereAreNoPriorRegisteredSnapshots() || (SessionPoolSize == 0 && !ThereAreNoReadyPortals()))
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 					 errmsg("cannot perform FREEZE because of prior transaction activity")));
diff --git a/src/backend/commands/discard.c b/src/backend/commands/discard.c
index 01a999c..363a52a 100644
--- a/src/backend/commands/discard.c
+++ b/src/backend/commands/discard.c
@@ -45,7 +45,7 @@ DiscardCommand(DiscardStmt *stmt, bool isTopLevel)
 			break;
 
 		case DISCARD_TEMP:
-			ResetTempTableNamespace();
+			ResetTempTableNamespace(InvalidOid);
 			break;
 
 		default:
@@ -73,6 +73,6 @@ DiscardAll(bool isTopLevel)
 	Async_UnlistenAll();
 	LockReleaseAll(USER_LOCKMETHOD, true);
 	ResetPlanCache();
-	ResetTempTableNamespace();
+	ResetTempTableNamespace(InvalidOid);
 	ResetSequenceCaches();
 }
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index b945b15..1696500 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -30,9 +30,11 @@
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "rewrite/rewriteHandler.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
+#include "utils/memutils.h"
 #include "utils/snapmgr.h"
 #include "utils/timestamp.h"
 
@@ -43,9 +45,7 @@
  * The keys for this hash table are the arguments to PREPARE and EXECUTE
  * (statement names); the entries are PreparedStatement structs.
  */
-static HTAB *prepared_queries = NULL;
-
-static void InitQueryHashTable(void);
+static HTAB *InitQueryHashTable(MemoryContext mcxt);
 static ParamListInfo EvaluateParams(PreparedStatement *pstmt, List *params,
 			   const char *queryString, EState *estate);
 static Datum build_regtype_array(Oid *param_types, int num_params);
@@ -427,20 +427,43 @@ EvaluateParams(PreparedStatement *pstmt, List *params,
 /*
  * Initialize query hash table upon first use.
  */
-static void
-InitQueryHashTable(void)
+static HTAB *
+InitQueryHashTable(MemoryContext mcxt)
 {
-	HASHCTL		hash_ctl;
+	HTAB		   *res;
+	MemoryContext	old_mcxt;
+	HASHCTL			hash_ctl;
 
 	MemSet(&hash_ctl, 0, sizeof(hash_ctl));
 
 	hash_ctl.keysize = NAMEDATALEN;
 	hash_ctl.entrysize = sizeof(PreparedStatement);
+	hash_ctl.hcxt = mcxt;
+
+	old_mcxt = MemoryContextSwitchTo(mcxt);
+	res = hash_create("Prepared Queries", 32, &hash_ctl, HASH_ELEM | HASH_CONTEXT);
+	MemoryContextSwitchTo(old_mcxt);
 
-	prepared_queries = hash_create("Prepared Queries",
-								   32,
-								   &hash_ctl,
-								   HASH_ELEM);
+	return res;
+}
+
+static HTAB *
+get_prepared_queries_htab(bool init)
+{
+	static HTAB *prepared_queries = NULL;
+
+	if (ActiveSession)
+	{
+		if (init && !ActiveSession->prepared_queries)
+			ActiveSession->prepared_queries = InitQueryHashTable(ActiveSession->memory);
+		return ActiveSession->prepared_queries;
+	}
+
+	/* Initialize the global hash table, if necessary */
+	if (init && !prepared_queries)
+		prepared_queries = InitQueryHashTable(TopMemoryContext);
+
+	return prepared_queries;
 }
 
 /*
@@ -458,12 +481,9 @@ StorePreparedStatement(const char *stmt_name,
 	TimestampTz cur_ts = GetCurrentStatementStartTimestamp();
 	bool		found;
 
-	/* Initialize the hash table, if necessary */
-	if (!prepared_queries)
-		InitQueryHashTable();
 
 	/* Add entry to hash table */
-	entry = (PreparedStatement *) hash_search(prepared_queries,
+	entry = (PreparedStatement *) hash_search(get_prepared_queries_htab(true),
 											  stmt_name,
 											  HASH_ENTER,
 											  &found);
@@ -495,13 +515,14 @@ PreparedStatement *
 FetchPreparedStatement(const char *stmt_name, bool throwError)
 {
 	PreparedStatement *entry;
+	HTAB			  *queries = get_prepared_queries_htab(false);
 
 	/*
 	 * If the hash table hasn't been initialized, it can't be storing
 	 * anything, therefore it couldn't possibly store our plan.
 	 */
-	if (prepared_queries)
-		entry = (PreparedStatement *) hash_search(prepared_queries,
+	if (queries)
+		entry = (PreparedStatement *) hash_search(queries,
 												  stmt_name,
 												  HASH_FIND,
 												  NULL);
@@ -579,7 +600,11 @@ DeallocateQuery(DeallocateStmt *stmt)
 void
 DropPreparedStatement(const char *stmt_name, bool showError)
 {
-	PreparedStatement *entry;
+	PreparedStatement	*entry;
+	HTAB				*queries = get_prepared_queries_htab(false);
+
+	if (!queries)
+		return;
 
 	/* Find the query's hash table entry; raise error if wanted */
 	entry = FetchPreparedStatement(stmt_name, showError);
@@ -590,7 +615,7 @@ DropPreparedStatement(const char *stmt_name, bool showError)
 		DropCachedPlan(entry->plansource);
 
 		/* Now we can remove the hash table entry */
-		hash_search(prepared_queries, entry->stmt_name, HASH_REMOVE, NULL);
+		hash_search(queries, entry->stmt_name, HASH_REMOVE, NULL);
 	}
 }
 
@@ -602,20 +627,21 @@ DropAllPreparedStatements(void)
 {
 	HASH_SEQ_STATUS seq;
 	PreparedStatement *entry;
+	HTAB			  *queries = get_prepared_queries_htab(false);
 
 	/* nothing cached */
-	if (!prepared_queries)
+	if (!queries)
 		return;
 
 	/* walk over cache */
-	hash_seq_init(&seq, prepared_queries);
+	hash_seq_init(&seq, queries);
 	while ((entry = hash_seq_search(&seq)) != NULL)
 	{
 		/* Release the plancache entry */
 		DropCachedPlan(entry->plansource);
 
 		/* Now we can remove the hash table entry */
-		hash_search(prepared_queries, entry->stmt_name, HASH_REMOVE, NULL);
+		hash_search(queries, entry->stmt_name, HASH_REMOVE, NULL);
 	}
 }
 
@@ -710,10 +736,11 @@ Datum
 pg_prepared_statement(PG_FUNCTION_ARGS)
 {
 	ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
-	TupleDesc	tupdesc;
+	TupleDesc		tupdesc;
 	Tuplestorestate *tupstore;
-	MemoryContext per_query_ctx;
-	MemoryContext oldcontext;
+	MemoryContext	per_query_ctx;
+	MemoryContext	oldcontext;
+	HTAB		   *queries;
 
 	/* check to see if caller supports us returning a tuplestore */
 	if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
@@ -757,13 +784,13 @@ pg_prepared_statement(PG_FUNCTION_ARGS)
 	/* generate junk in short-term context */
 	MemoryContextSwitchTo(oldcontext);
 
-	/* hash table might be uninitialized */
-	if (prepared_queries)
+	queries = get_prepared_queries_htab(false);
+	if (queries)
 	{
 		HASH_SEQ_STATUS hash_seq;
 		PreparedStatement *prep_stmt;
 
-		hash_seq_init(&hash_seq, prepared_queries);
+		hash_seq_init(&hash_seq, queries);
 		while ((prep_stmt = hash_seq_search(&hash_seq)) != NULL)
 		{
 			Datum		values[5];
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 89122d4..7843d9d 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -90,8 +90,6 @@ static HTAB *seqhashtab = NULL; /* hash table for SeqTable items */
  * last_used_seq is updated by nextval() to point to the last used
  * sequence.
  */
-static SeqTableData *last_used_seq = NULL;
-
 static void fill_seq_with_data(Relation rel, HeapTuple tuple);
 static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
diff --git a/src/backend/libpq/be-secure.c b/src/backend/libpq/be-secure.c
index d349d7c..3afacee 100644
--- a/src/backend/libpq/be-secure.c
+++ b/src/backend/libpq/be-secure.c
@@ -144,6 +144,7 @@ secure_read(Port *port, void *ptr, size_t len)
 {
 	ssize_t		n;
 	int			waitfor;
+	WaitEventSet	*waitset = pq_get_current_waitset();
 
 retry:
 #ifdef USE_SSL
@@ -166,9 +167,9 @@ retry:
 
 		Assert(waitfor);
 
-		ModifyWaitEvent(FeBeWaitSet, 0, waitfor, NULL);
+		ModifyWaitEvent(waitset, 0, waitfor, NULL);
 
-		WaitEventSetWait(FeBeWaitSet, -1 /* no timeout */ , &event, 1,
+		WaitEventSetWait(waitset, -1 /* no timeout */ , &event, 1,
 						 WAIT_EVENT_CLIENT_READ);
 
 		/*
@@ -247,6 +248,7 @@ secure_write(Port *port, void *ptr, size_t len)
 {
 	ssize_t		n;
 	int			waitfor;
+	WaitEventSet	*waitset = pq_get_current_waitset();
 
 retry:
 	waitfor = 0;
@@ -268,9 +270,9 @@ retry:
 
 		Assert(waitfor);
 
-		ModifyWaitEvent(FeBeWaitSet, 0, waitfor, NULL);
+		ModifyWaitEvent(waitset, 0, waitfor, NULL);
 
-		WaitEventSetWait(FeBeWaitSet, -1 /* no timeout */ , &event, 1,
+		WaitEventSetWait(waitset, -1 /* no timeout */ , &event, 1,
 						 WAIT_EVENT_CLIENT_WRITE);
 
 		/* See comments in secure_read. */
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index a4f6d4d..5e33c32 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -13,7 +13,7 @@
  * copy is aborted by an ereport(ERROR), we need to close out the copy so that
  * the frontend gets back into sync.  Therefore, these routines have to be
  * aware of COPY OUT state.  (New COPY-OUT is message-based and does *not*
- * set the DoingCopyOut flag.)
+ * set the is_doing_copyout flag.)
  *
  * NOTE: generally, it's a bad idea to emit outgoing messages directly with
  * pq_putbytes(), especially if the message would require multiple calls
@@ -87,12 +87,14 @@
 #ifdef _MSC_VER					/* mstcpip.h is missing on mingw */
 #include <mstcpip.h>
 #endif
+#include <execinfo.h>
 
 #include "common/ip.h"
 #include "libpq/libpq.h"
 #include "miscadmin.h"
 #include "port/pg_bswap.h"
 #include "storage/ipc.h"
+#include "storage/proc.h"
 #include "utils/guc.h"
 #include "utils/memutils.h"
 
@@ -134,23 +136,6 @@ static List *sock_paths = NIL;
 #define PQ_SEND_BUFFER_SIZE 8192
 #define PQ_RECV_BUFFER_SIZE 8192
 
-static char *PqSendBuffer;
-static int	PqSendBufferSize;	/* Size send buffer */
-static int	PqSendPointer;		/* Next index to store a byte in PqSendBuffer */
-static int	PqSendStart;		/* Next index to send a byte in PqSendBuffer */
-
-static char PqRecvBuffer[PQ_RECV_BUFFER_SIZE];
-static int	PqRecvPointer;		/* Next index to read a byte from PqRecvBuffer */
-static int	PqRecvLength;		/* End of data available in PqRecvBuffer */
-
-/*
- * Message status
- */
-static bool PqCommBusy;			/* busy sending data to the client */
-static bool PqCommReadingMsg;	/* in the middle of reading a message */
-static bool DoingCopyOut;		/* in old-protocol COPY OUT processing */
-
-
 /* Internal functions */
 static void socket_comm_reset(void);
 static void socket_close(int code, Datum arg);
@@ -181,28 +166,55 @@ static PQcommMethods PqCommSocketMethods = {
 	socket_endcopyout
 };
 
-PQcommMethods *PqCommMethods = &PqCommSocketMethods;
+/* These variables used to be global */
+struct PQcommState {
+	Port		   *port;
+	MemoryContext	mcxt;
 
-WaitEventSet *FeBeWaitSet;
+	/* Message status */
+	bool	is_busy;			/* busy sending data to the client */
+	bool	is_reading;			/* in the middle of reading a message */
+	bool	is_doing_copyout;	/* in old-protocol COPY OUT processing */
+	char   *send_buf;
 
+	int		send_bufsize;	/* Size of the send buffer */
+	int		send_offset;	/* Next index to store a byte in send_buf */
+	int		send_start;		/* Next index to send a byte in send_buf */
 
-/* --------------------------------
- *		pq_init - initialize libpq at backend startup
- * --------------------------------
+	char	recv_buf[PQ_RECV_BUFFER_SIZE];
+	int		recv_offset;	/* Next index to read a byte from recv_buf */
+	int		recv_len;		/* End of data available in recv_buf */
+
+	/* Wait events set */
+	WaitEventSet *wait_events;
+};
+
+static struct PQcommState *pqstate = NULL;
+PQcommMethods *PqCommMethods = &PqCommSocketMethods;
+
+/*
+ * Create the common wait event set for a backend
  */
-void
-pq_init(void)
+WaitEventSet *
+pq_create_backend_event_set(MemoryContext mcxt, Port *port,
+							bool onlySock)
 {
-	/* initialize state variables */
-	PqSendBufferSize = PQ_SEND_BUFFER_SIZE;
-	PqSendBuffer = MemoryContextAlloc(TopMemoryContext, PqSendBufferSize);
-	PqSendPointer = PqSendStart = PqRecvPointer = PqRecvLength = 0;
-	PqCommBusy = false;
-	PqCommReadingMsg = false;
-	DoingCopyOut = false;
+	WaitEventSet *result;
+	int				nevents = onlySock ? 1 : 3;
+
+	result = CreateWaitEventSet(mcxt, nevents);
+
+	AddWaitEventToSet(result, WL_SOCKET_WRITEABLE, port->sock,
+					  NULL, NULL);
+
+	if (!onlySock)
+	{
+		AddWaitEventToSet(result, WL_LATCH_SET, -1, MyLatch, NULL);
+		AddWaitEventToSet(result, WL_POSTMASTER_DEATH, -1, NULL, NULL);
 
-	/* set up process-exit hook to close the socket */
-	on_proc_exit(socket_close, 0);
+		/* set up process-exit hook to close the socket */
+		on_proc_exit(socket_close, 0);
+	}
 
 	/*
 	 * In backends (as soon as forked) we operate the underlying socket in
@@ -215,16 +227,65 @@ pq_init(void)
 	 * infinite recursion.
 	 */
 #ifndef WIN32
-	if (!pg_set_noblock(MyProcPort->sock))
+	if (!pg_set_noblock(port->sock))
 		ereport(COMMERROR,
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
-	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
-	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
-					  NULL, NULL);
-	AddWaitEventToSet(FeBeWaitSet, WL_LATCH_SET, -1, MyLatch, NULL);
-	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
+	return result;
+}
+
+/* --------------------------------
+ *		pq_init - initialize libpq at backend startup
+ * --------------------------------
+ */
+void *
+pq_init(MemoryContext mcxt)
+{
+	struct PQcommState *state =
+		MemoryContextAllocZero(mcxt, sizeof(struct PQcommState));
+
+	/* initialize state variables */
+	state->mcxt = mcxt;
+
+	state->send_bufsize = PQ_SEND_BUFFER_SIZE;
+	state->send_buf = MemoryContextAlloc(mcxt, state->send_bufsize);
+	state->send_offset = state->send_start = state->recv_offset = state->recv_len = 0;
+	state->is_busy = false;
+	state->is_reading = false;
+	state->is_doing_copyout = false;
+
+	state->wait_events = NULL;
+	return (void *) state;
+}
+
+void
+pq_set_current_state(void *state, Port *port, WaitEventSet *set)
+{
+	pqstate = (struct PQcommState *) state;
+
+	if (pqstate)
+	{
+		pq_reset();
+		pqstate->port = port;
+		pqstate->wait_events = set;
+	}
+}
+
+WaitEventSet *
+pq_get_current_waitset(void)
+{
+	return pqstate ? pqstate->wait_events : NULL;
+}
+
+void
+pq_reset(void)
+{
+	pqstate->send_offset = pqstate->send_start = 0;
+	pqstate->recv_offset = pqstate->recv_len = 0;
+	pqstate->is_busy = false;
+	pqstate->is_reading = false;
+	pqstate->is_doing_copyout = false;
 }
 
 /* --------------------------------
@@ -239,7 +300,7 @@ static void
 socket_comm_reset(void)
 {
 	/* Do not throw away pending data, but do reset the busy flag */
-	PqCommBusy = false;
+	pqstate->is_busy = false;
 	/* We can abort any old-style COPY OUT, too */
 	pq_endcopyout(true);
 }
@@ -255,8 +316,8 @@ socket_comm_reset(void)
 static void
 socket_close(int code, Datum arg)
 {
-	/* Nothing to do in a standalone backend, where MyProcPort is NULL. */
-	if (MyProcPort != NULL)
+	/* Nothing to do in a standalone backend, where pqstate->port is NULL. */
+	if (pqstate->port != NULL)
 	{
 #if defined(ENABLE_GSS) || defined(ENABLE_SSPI)
 #ifdef ENABLE_GSS
@@ -267,11 +328,11 @@ socket_close(int code, Datum arg)
 		 * BackendInitialize(), because pg_GSS_recvauth() makes first use of
 		 * "ctx" and "cred".
 		 */
-		if (MyProcPort->gss->ctx != GSS_C_NO_CONTEXT)
-			gss_delete_sec_context(&min_s, &MyProcPort->gss->ctx, NULL);
+		if (pqstate->port->gss->ctx != GSS_C_NO_CONTEXT)
+			gss_delete_sec_context(&min_s, &pqstate->port->gss->ctx, NULL);
 
-		if (MyProcPort->gss->cred != GSS_C_NO_CREDENTIAL)
-			gss_release_cred(&min_s, &MyProcPort->gss->cred);
+		if (pqstate->port->gss->cred != GSS_C_NO_CREDENTIAL)
+			gss_release_cred(&min_s, &pqstate->port->gss->cred);
 #endif							/* ENABLE_GSS */
 
 		/*
@@ -279,14 +340,14 @@ socket_close(int code, Datum arg)
 		 * postmaster child free this, doing so is safe when interrupting
 		 * BackendInitialize().
 		 */
-		free(MyProcPort->gss);
+		free(pqstate->port->gss);
 #endif							/* ENABLE_GSS || ENABLE_SSPI */
 
 		/*
 		 * Cleanly shut down SSL layer.  Nowhere else does a postmaster child
 		 * call this, so this is safe when interrupting BackendInitialize().
 		 */
-		secure_close(MyProcPort);
+		secure_close(pqstate->port);
 
 		/*
 		 * Formerly we did an explicit close() here, but it seems better to
@@ -298,7 +359,7 @@ socket_close(int code, Datum arg)
 		 * We do set sock to PGINVALID_SOCKET to prevent any further I/O,
 		 * though.
 		 */
-		MyProcPort->sock = PGINVALID_SOCKET;
+		pqstate->port->sock = PGINVALID_SOCKET;
 	}
 }
 
@@ -921,12 +982,12 @@ RemoveSocketFiles(void)
 static void
 socket_set_nonblocking(bool nonblocking)
 {
-	if (MyProcPort == NULL)
+	if (pqstate->port == NULL)
 		ereport(ERROR,
 				(errcode(ERRCODE_CONNECTION_DOES_NOT_EXIST),
 				 errmsg("there is no client connection")));
 
-	MyProcPort->noblock = nonblocking;
+	pqstate->port->noblock = nonblocking;
 }
 
 /* --------------------------------
@@ -938,30 +999,30 @@ socket_set_nonblocking(bool nonblocking)
 static int
 pq_recvbuf(void)
 {
-	if (PqRecvPointer > 0)
+	if (pqstate->recv_offset > 0)
 	{
-		if (PqRecvLength > PqRecvPointer)
+		if (pqstate->recv_len > pqstate->recv_offset)
 		{
 			/* still some unread data, left-justify it in the buffer */
-			memmove(PqRecvBuffer, PqRecvBuffer + PqRecvPointer,
-					PqRecvLength - PqRecvPointer);
-			PqRecvLength -= PqRecvPointer;
-			PqRecvPointer = 0;
+			memmove(pqstate->recv_buf, pqstate->recv_buf + pqstate->recv_offset,
+					pqstate->recv_len - pqstate->recv_offset);
+			pqstate->recv_len -= pqstate->recv_offset;
+			pqstate->recv_offset = 0;
 		}
 		else
-			PqRecvLength = PqRecvPointer = 0;
+			pqstate->recv_len = pqstate->recv_offset = 0;
 	}
 
 	/* Ensure that we're in blocking mode */
 	socket_set_nonblocking(false);
 
-	/* Can fill buffer from PqRecvLength and upwards */
+	/* Can fill buffer from pqstate->recv_len and upwards */
 	for (;;)
 	{
 		int			r;
 
-		r = secure_read(MyProcPort, PqRecvBuffer + PqRecvLength,
-						PQ_RECV_BUFFER_SIZE - PqRecvLength);
+		r = secure_read(pqstate->port, pqstate->recv_buf + pqstate->recv_len,
+						PQ_RECV_BUFFER_SIZE - pqstate->recv_len);
 
 		if (r < 0)
 		{
@@ -987,7 +1048,7 @@ pq_recvbuf(void)
 			return EOF;
 		}
 		/* r contains number of bytes read, so just incr length */
-		PqRecvLength += r;
+		pqstate->recv_len += r;
 		return 0;
 	}
 }
@@ -999,14 +1060,14 @@ pq_recvbuf(void)
 int
 pq_getbyte(void)
 {
-	Assert(PqCommReadingMsg);
+	Assert(pqstate->is_reading);
 
-	while (PqRecvPointer >= PqRecvLength)
+	while (pqstate->recv_offset >= pqstate->recv_len)
 	{
 		if (pq_recvbuf())		/* If nothing in buffer, then recv some */
 			return EOF;			/* Failed to recv data */
 	}
-	return (unsigned char) PqRecvBuffer[PqRecvPointer++];
+	return (unsigned char) pqstate->recv_buf[pqstate->recv_offset++];
 }
 
 /* --------------------------------
@@ -1018,14 +1079,25 @@ pq_getbyte(void)
 int
 pq_peekbyte(void)
 {
-	Assert(PqCommReadingMsg);
+	Assert(pqstate->is_reading);
 
-	while (PqRecvPointer >= PqRecvLength)
+	while (pqstate->recv_offset >= pqstate->recv_len)
 	{
 		if (pq_recvbuf())		/* If nothing in buffer, then recv some */
 			return EOF;			/* Failed to recv data */
 	}
-	return (unsigned char) PqRecvBuffer[PqRecvPointer];
+	return (unsigned char) pqstate->recv_buf[pqstate->recv_offset];
+}
+
+/* --------------------------------
+ *		pq_available_bytes	- get number of buffered bytes available for reading.
+ *
+ * --------------------------------
+ */
+int
+pq_available_bytes(void)
+{
+	return pqstate->recv_len - pqstate->recv_offset;
 }
 
 /* --------------------------------
@@ -1041,18 +1113,18 @@ pq_getbyte_if_available(unsigned char *c)
 {
 	int			r;
 
-	Assert(PqCommReadingMsg);
+	Assert(pqstate->is_reading);
 
-	if (PqRecvPointer < PqRecvLength)
+	if (pqstate->recv_offset < pqstate->recv_len)
 	{
-		*c = PqRecvBuffer[PqRecvPointer++];
+		*c = pqstate->recv_buf[pqstate->recv_offset++];
 		return 1;
 	}
 
 	/* Put the socket into non-blocking mode */
 	socket_set_nonblocking(true);
 
-	r = secure_read(MyProcPort, c, 1);
+	r = secure_read(pqstate->port, c, 1);
 	if (r < 0)
 	{
 		/*
@@ -1095,20 +1167,20 @@ pq_getbytes(char *s, size_t len)
 {
 	size_t		amount;
 
-	Assert(PqCommReadingMsg);
+	Assert(pqstate->is_reading);
 
 	while (len > 0)
 	{
-		while (PqRecvPointer >= PqRecvLength)
+		while (pqstate->recv_offset >= pqstate->recv_len)
 		{
 			if (pq_recvbuf())	/* If nothing in buffer, then recv some */
 				return EOF;		/* Failed to recv data */
 		}
-		amount = PqRecvLength - PqRecvPointer;
+		amount = pqstate->recv_len - pqstate->recv_offset;
 		if (amount > len)
 			amount = len;
-		memcpy(s, PqRecvBuffer + PqRecvPointer, amount);
-		PqRecvPointer += amount;
+		memcpy(s, pqstate->recv_buf + pqstate->recv_offset, amount);
+		pqstate->recv_offset += amount;
 		s += amount;
 		len -= amount;
 	}
@@ -1129,19 +1201,19 @@ pq_discardbytes(size_t len)
 {
 	size_t		amount;
 
-	Assert(PqCommReadingMsg);
+	Assert(pqstate->is_reading);
 
 	while (len > 0)
 	{
-		while (PqRecvPointer >= PqRecvLength)
+		while (pqstate->recv_offset >= pqstate->recv_len)
 		{
 			if (pq_recvbuf())	/* If nothing in buffer, then recv some */
 				return EOF;		/* Failed to recv data */
 		}
-		amount = PqRecvLength - PqRecvPointer;
+		amount = pqstate->recv_len - pqstate->recv_offset;
 		if (amount > len)
 			amount = len;
-		PqRecvPointer += amount;
+		pqstate->recv_offset += amount;
 		len -= amount;
 	}
 	return 0;
@@ -1167,35 +1239,35 @@ pq_getstring(StringInfo s)
 {
 	int			i;
 
-	Assert(PqCommReadingMsg);
+	Assert(pqstate->is_reading);
 
 	resetStringInfo(s);
 
 	/* Read until we get the terminating '\0' */
 	for (;;)
 	{
-		while (PqRecvPointer >= PqRecvLength)
+		while (pqstate->recv_offset >= pqstate->recv_len)
 		{
 			if (pq_recvbuf())	/* If nothing in buffer, then recv some */
 				return EOF;		/* Failed to recv data */
 		}
 
-		for (i = PqRecvPointer; i < PqRecvLength; i++)
+		for (i = pqstate->recv_offset; i < pqstate->recv_len; i++)
 		{
-			if (PqRecvBuffer[i] == '\0')
+			if (pqstate->recv_buf[i] == '\0')
 			{
 				/* include the '\0' in the copy */
-				appendBinaryStringInfo(s, PqRecvBuffer + PqRecvPointer,
-									   i - PqRecvPointer + 1);
-				PqRecvPointer = i + 1;	/* advance past \0 */
+				appendBinaryStringInfo(s, pqstate->recv_buf + pqstate->recv_offset,
+									   i - pqstate->recv_offset + 1);
+				pqstate->recv_offset = i + 1;	/* advance past \0 */
 				return 0;
 			}
 		}
 
 		/* If we're here we haven't got the \0 in the buffer yet. */
-		appendBinaryStringInfo(s, PqRecvBuffer + PqRecvPointer,
-							   PqRecvLength - PqRecvPointer);
-		PqRecvPointer = PqRecvLength;
+		appendBinaryStringInfo(s, pqstate->recv_buf + pqstate->recv_offset,
+							   pqstate->recv_len - pqstate->recv_offset);
+		pqstate->recv_offset = pqstate->recv_len;
 	}
 }
 
@@ -1213,12 +1285,12 @@ pq_startmsgread(void)
 	 * There shouldn't be a read active already, but let's check just to be
 	 * sure.
 	 */
-	if (PqCommReadingMsg)
+	if (pqstate->is_reading)
 		ereport(FATAL,
 				(errcode(ERRCODE_PROTOCOL_VIOLATION),
 				 errmsg("terminating connection because protocol synchronization was lost")));
 
-	PqCommReadingMsg = true;
+	pqstate->is_reading = true;
 }
 
 
@@ -1233,9 +1305,9 @@ pq_startmsgread(void)
 void
 pq_endmsgread(void)
 {
-	Assert(PqCommReadingMsg);
+	Assert(pqstate->is_reading);
 
-	PqCommReadingMsg = false;
+	pqstate->is_reading = false;
 }
 
 /* --------------------------------
@@ -1249,7 +1321,7 @@ pq_endmsgread(void)
 bool
 pq_is_reading_msg(void)
 {
-	return PqCommReadingMsg;
+	return pqstate && pqstate->is_reading;
 }
 
 /* --------------------------------
@@ -1273,7 +1345,7 @@ pq_getmessage(StringInfo s, int maxlen)
 {
 	int32		len;
 
-	Assert(PqCommReadingMsg);
+	Assert(pqstate->is_reading);
 
 	resetStringInfo(s);
 
@@ -1318,7 +1390,7 @@ pq_getmessage(StringInfo s, int maxlen)
 						 errmsg("incomplete message from client")));
 
 			/* we discarded the rest of the message so we're back in sync. */
-			PqCommReadingMsg = false;
+			pqstate->is_reading = false;
 			PG_RE_THROW();
 		}
 		PG_END_TRY();
@@ -1337,7 +1409,7 @@ pq_getmessage(StringInfo s, int maxlen)
 	}
 
 	/* finished reading the message. */
-	PqCommReadingMsg = false;
+	pqstate->is_reading = false;
 
 	return 0;
 }
@@ -1355,13 +1427,13 @@ pq_putbytes(const char *s, size_t len)
 	int			res;
 
 	/* Should only be called by old-style COPY OUT */
-	Assert(DoingCopyOut);
+	Assert(pqstate->is_doing_copyout);
 	/* No-op if reentrant call */
-	if (PqCommBusy)
+	if (pqstate->is_busy)
 		return 0;
-	PqCommBusy = true;
+	pqstate->is_busy = true;
 	res = internal_putbytes(s, len);
-	PqCommBusy = false;
+	pqstate->is_busy = false;
 	return res;
 }
 
@@ -1373,23 +1445,24 @@ internal_putbytes(const char *s, size_t len)
 	while (len > 0)
 	{
 		/* If buffer is full, then flush it out */
-		if (PqSendPointer >= PqSendBufferSize)
+		if (pqstate->send_offset >= pqstate->send_bufsize)
 		{
 			socket_set_nonblocking(false);
 			if (internal_flush())
 				return EOF;
 		}
-		amount = PqSendBufferSize - PqSendPointer;
+		amount = pqstate->send_bufsize - pqstate->send_offset;
 		if (amount > len)
 			amount = len;
-		memcpy(PqSendBuffer + PqSendPointer, s, amount);
-		PqSendPointer += amount;
+		memcpy(pqstate->send_buf + pqstate->send_offset, s, amount);
+		pqstate->send_offset += amount;
 		s += amount;
 		len -= amount;
 	}
 	return 0;
 }
 
+
 /* --------------------------------
  *		socket_flush		- flush pending output
  *
@@ -1401,13 +1474,17 @@ socket_flush(void)
 {
 	int			res;
 
+	if (pqstate->port->sock == PGINVALID_SOCKET)
+		return 0;
+
 	/* No-op if reentrant call */
-	if (PqCommBusy)
+	if (pqstate->is_busy)
 		return 0;
-	PqCommBusy = true;
+
+	pqstate->is_busy = true;
 	socket_set_nonblocking(false);
 	res = internal_flush();
-	PqCommBusy = false;
+	pqstate->is_busy = false;
 	return res;
 }
 
@@ -1423,14 +1500,14 @@ internal_flush(void)
 {
 	static int	last_reported_send_errno = 0;
 
-	char	   *bufptr = PqSendBuffer + PqSendStart;
-	char	   *bufend = PqSendBuffer + PqSendPointer;
+	char	   *bufptr = pqstate->send_buf + pqstate->send_start;
+	char	   *bufend = pqstate->send_buf + pqstate->send_offset;
 
 	while (bufptr < bufend)
 	{
 		int			r;
 
-		r = secure_write(MyProcPort, bufptr, bufend - bufptr);
+		r = secure_write(pqstate->port, bufptr, bufend - bufptr);
 
 		if (r <= 0)
 		{
@@ -1470,7 +1547,7 @@ internal_flush(void)
 			 * flag that'll cause the next CHECK_FOR_INTERRUPTS to terminate
 			 * the connection.
 			 */
-			PqSendStart = PqSendPointer = 0;
+			pqstate->send_start = pqstate->send_offset = 0;
 			ClientConnectionLost = 1;
 			InterruptPending = 1;
 			return EOF;
@@ -1478,10 +1555,10 @@ internal_flush(void)
 
 		last_reported_send_errno = 0;	/* reset after any successful send */
 		bufptr += r;
-		PqSendStart += r;
+		pqstate->send_start += r;
 	}
 
-	PqSendStart = PqSendPointer = 0;
+	pqstate->send_start = pqstate->send_offset = 0;
 	return 0;
 }
 
@@ -1496,20 +1573,23 @@ socket_flush_if_writable(void)
 {
 	int			res;
 
+	if (pqstate->port->sock == PGINVALID_SOCKET)
+		return 0;
+
 	/* Quick exit if nothing to do */
-	if (PqSendPointer == PqSendStart)
+	if (pqstate->send_offset == pqstate->send_start)
 		return 0;
 
 	/* No-op if reentrant call */
-	if (PqCommBusy)
+	if (pqstate->is_busy)
 		return 0;
 
 	/* Temporarily put the socket into non-blocking mode */
 	socket_set_nonblocking(true);
 
-	PqCommBusy = true;
+	pqstate->is_busy = true;
 	res = internal_flush();
-	PqCommBusy = false;
+	pqstate->is_busy = false;
 	return res;
 }
 
@@ -1520,7 +1600,7 @@ socket_flush_if_writable(void)
 static bool
 socket_is_send_pending(void)
 {
-	return (PqSendStart < PqSendPointer);
+	return (pqstate->send_start < pqstate->send_offset);
 }
 
 /* --------------------------------
@@ -1559,9 +1639,9 @@ socket_is_send_pending(void)
 static int
 socket_putmessage(char msgtype, const char *s, size_t len)
 {
-	if (DoingCopyOut || PqCommBusy)
+	if (pqstate->is_doing_copyout || pqstate->is_busy)
 		return 0;
-	PqCommBusy = true;
+	pqstate->is_busy = true;
 	if (msgtype)
 		if (internal_putbytes(&msgtype, 1))
 			goto fail;
@@ -1575,11 +1655,11 @@ socket_putmessage(char msgtype, const char *s, size_t len)
 	}
 	if (internal_putbytes(s, len))
 		goto fail;
-	PqCommBusy = false;
+	pqstate->is_busy = false;
 	return 0;
 
 fail:
-	PqCommBusy = false;
+	pqstate->is_busy = false;
 	return EOF;
 }
 
@@ -1599,11 +1679,11 @@ socket_putmessage_noblock(char msgtype, const char *s, size_t len)
 	 * Ensure we have enough space in the output buffer for the message header
 	 * as well as the message itself.
 	 */
-	required = PqSendPointer + 1 + 4 + len;
-	if (required > PqSendBufferSize)
+	required = pqstate->send_offset + 1 + 4 + len;
+	if (required > pqstate->send_bufsize)
 	{
-		PqSendBuffer = repalloc(PqSendBuffer, required);
-		PqSendBufferSize = required;
+		pqstate->send_buf = repalloc(pqstate->send_buf, required);
+		pqstate->send_bufsize = required;
 	}
 	res = pq_putmessage(msgtype, s, len);
 	Assert(res == 0);			/* should not fail when the message fits in
@@ -1619,7 +1699,7 @@ socket_putmessage_noblock(char msgtype, const char *s, size_t len)
 static void
 socket_startcopyout(void)
 {
-	DoingCopyOut = true;
+	pqstate->is_doing_copyout = true;
 }
 
 /* --------------------------------
@@ -1635,12 +1715,12 @@ socket_startcopyout(void)
 static void
 socket_endcopyout(bool errorAbort)
 {
-	if (!DoingCopyOut)
+	if (!pqstate->is_doing_copyout)
 		return;
 	if (errorAbort)
 		pq_putbytes("\n\n\\.\n", 5);
 	/* in non-error case, copy.c will have emitted the terminator line */
-	DoingCopyOut = false;
+	pqstate->is_doing_copyout = false;
 }
 
 /*
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index aba1e92..56ec998 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o dynloader.o pg_sema.o pg_shmem.o $(TAS)
+OBJS = atomics.o dynloader.o pg_sema.o pg_shmem.o send_sock.o $(TAS)
 
 ifeq ($(PORTNAME), win32)
 SUBDIRS += win32
diff --git a/src/backend/port/send_sock.c b/src/backend/port/send_sock.c
new file mode 100644
index 0000000..83c97c5
--- /dev/null
+++ b/src/backend/port/send_sock.c
@@ -0,0 +1,164 @@
+/*-------------------------------------------------------------------------
+ *
+ * send_sock.c
+ *	  Send socket descriptor to another process
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/port/send_sock.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+#ifdef WIN32
+typedef struct
+{
+	SOCKET origsocket;
+	WSAPROTOCOL_INFO wsainfo;
+} InheritableSocket;
+#endif
+
+/*
+ * Send socket descriptor "sock" to backend process through Unix socket "chan"
+ */
+int
+pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid)
+{
+#ifdef WIN32
+	InheritableSocket dst;
+	size_t rc;
+	dst.origsocket = sock;
+	if (WSADuplicateSocket(sock, pid, &dst.wsainfo) != 0)
+	{
+		ereport(FATAL,
+				(errmsg("could not duplicate socket %d for use in backend: error code %d",
+						(int)sock, WSAGetLastError())));
+		return -1;
+	}
+	rc = send(chan, &dst, sizeof(dst), 0);
+	if (rc != sizeof(dst))
+	{
+		ereport(FATAL,
+				(errmsg("could not send inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+		return -1;
+	}
+	return 0;
+#else
+	struct msghdr msg = { 0 };
+	struct iovec io;
+	struct cmsghdr *cmsg;
+	char buf[CMSG_SPACE(sizeof(sock))];
+
+	memset(buf, '\0', sizeof(buf));
+
+	/* On Mac OS X, the struct iovec is needed, even if it points to minimal data */
+	io.iov_base = "";
+	io.iov_len = 1;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = buf;
+	msg.msg_controllen = sizeof(buf);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(sock));
+
+	memcpy(CMSG_DATA(cmsg), &sock, sizeof(sock));
+	msg.msg_controllen = cmsg->cmsg_len;
+
+	while (sendmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+	return 0;
+#endif
+}
+
+
+/*
+ * Receive socket descriptor from postmaster process through Unix socket "chan"
+ */
+pgsocket
+pg_recv_sock(pgsocket chan)
+{
+#ifdef WIN32
+	InheritableSocket src;
+	SOCKET s;
+	size_t rc = recv(chan, &src, sizeof(src), 0);
+	if (rc != sizeof(src))
+	{
+		ereport(FATAL,
+				(errmsg("could not receive inheritable socket: rc=%d, error code %d",
+						(int)rc, WSAGetLastError())));
+	}
+	s = WSASocket(FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  FROM_PROTOCOL_INFO,
+				  &src.wsainfo,
+				  0,
+				  0);
+	if (s == INVALID_SOCKET)
+	{
+		ereport(FATAL,
+				(errmsg("could not create inherited socket: error code %d",
+						WSAGetLastError())));
+	}
+
+	/*
+	 * To make sure we don't get two references to the same socket, close
+	 * the original one. (This would happen when inheritance actually
+	 * works.)
+	 */
+	closesocket(src.origsocket);
+	return s;
+#else
+	struct msghdr msg = {0};
+	char c_buffer[256];
+	char m_buffer[256];
+	struct iovec io;
+	struct cmsghdr *cmsg;
+	pgsocket sock;
+
+	io.iov_base = m_buffer;
+	io.iov_len = sizeof(m_buffer);
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = c_buffer;
+	msg.msg_controllen = sizeof(c_buffer);
+
+	while (recvmsg(chan, &msg, 0) < 0)
+	{
+		if (errno != EINTR)
+			return PGINVALID_SOCKET;
+	}
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (!cmsg)
+		return PGINVALID_SOCKET;
+
+	memcpy(&sock, CMSG_DATA(cmsg), sizeof(sock));
+
+	pg_set_noblock(sock);
+
+	return sock;
+#endif
+}
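
For readers who have not used SCM_RIGHTS before: the kernel duplicates the descriptor into the receiving process, so after recvmsg() the pooled backend owns an independent, fully functional socket. A minimal standalone sketch of the mechanism used above (illustrative only, not part of the patch; compiles on Linux/macOS):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int
    main(void)
    {
        int sv[2];
        char data[1] = {'x'};
        char cbuf[CMSG_SPACE(sizeof(int))];
        struct iovec io = { .iov_base = data, .iov_len = sizeof(data) };
        struct msghdr msg = {0};
        struct cmsghdr *cmsg;

        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
            return 1;

        memset(cbuf, 0, sizeof(cbuf));
        msg.msg_iov = &io;
        msg.msg_iovlen = 1;
        msg.msg_control = cbuf;
        msg.msg_controllen = sizeof(cbuf);

        if (fork() == 0)
        {
            /* child: receive a descriptor and use it */
            int fd;

            if (recvmsg(sv[1], &msg, 0) < 0)
                return 1;
            cmsg = CMSG_FIRSTHDR(&msg);
            memcpy(&fd, CMSG_DATA(cmsg), sizeof(fd));
            write(fd, "hello from the child\n", 21);
            return 0;
        }

        /* parent: pass our stdout to the child */
        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        {
            int fd = STDOUT_FILENO;
            memcpy(CMSG_DATA(cmsg), &fd, sizeof(fd));
        }
        if (sendmsg(sv[0], &msg, 0) < 0)
            return 1;
        wait(NULL);
        return 0;
    }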
diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c
index f4356fe..7fd901f 100644
--- a/src/backend/port/win32/socket.c
+++ b/src/backend/port/win32/socket.c
@@ -726,3 +726,65 @@ pgwin32_socket_strerror(int err)
 	}
 	return wserrbuf;
 }
+
+int pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2])
+{
+    union {
+       struct sockaddr_in inaddr;
+       struct sockaddr addr;
+    } a;
+    SOCKET listener;
+    int e;
+    socklen_t addrlen = sizeof(a.inaddr);
+    DWORD flags = 0;
+    int reuse = 1;
+
+    socks[0] = socks[1] = -1;
+
+    listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+    if (listener == -1)
+        return SOCKET_ERROR;
+
+    memset(&a, 0, sizeof(a));
+    a.inaddr.sin_family = AF_INET;
+    a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+    a.inaddr.sin_port = 0;
+
+    for (;;) {
+        if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+               (char*) &reuse, (socklen_t) sizeof(reuse)) == -1)
+            break;
+        if  (bind(listener, &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+            break;
+
+        memset(&a, 0, sizeof(a));
+        if  (getsockname(listener, &a.addr, &addrlen) == SOCKET_ERROR)
+            break;
+        a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+        a.inaddr.sin_family = AF_INET;
+
+        if (listen(listener, 1) == SOCKET_ERROR)
+            break;
+
+        socks[0] = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, flags);
+        if (socks[0] == -1)
+            break;
+        if (connect(socks[0], &a.addr, sizeof(a.inaddr)) == SOCKET_ERROR)
+            break;
+
+        socks[1] = accept(listener, NULL, NULL);
+        if (socks[1] == -1)
+            break;
+
+        closesocket(listener);
+        return 0;
+    }
+
+    e = WSAGetLastError();
+    closesocket(listener);
+    closesocket(socks[0]);
+    closesocket(socks[1]);
+    WSASetLastError(e);
+    socks[0] = socks[1] = -1;
+    return SOCKET_ERROR;
+}
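
Two notes on the emulation above: the domain/type/protocol arguments are accepted only for signature compatibility with POSIX socketpair() and are effectively ignored (the pair is always a connected loopback TCP stream, the classic Windows substitute for AF_UNIX socket pairs), and WSAStartup() is assumed to have already run. A hypothetical call site, mirroring the POSIX usage elsewhere in the patch:

    SOCKET pair[2];

    if (pgwin32_socketpair(AF_INET, SOCK_STREAM, IPPROTO_TCP, pair) == SOCKET_ERROR)
        elog(FATAL, "could not create socket pair: error code %d",
             WSAGetLastError());
    /* pair[0] and pair[1] are now the two ends of a connected stream */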
diff --git a/src/backend/postmaster/Makefile b/src/backend/postmaster/Makefile
index 71c2321..b0bd173 100644
--- a/src/backend/postmaster/Makefile
+++ b/src/backend/postmaster/Makefile
@@ -13,6 +13,7 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 OBJS = autovacuum.o bgworker.o bgwriter.o checkpointer.o fork_process.o \
-	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o
+	pgarch.o pgstat.o postmaster.o startup.o syslogger.o walwriter.o \
+	connpool.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index d2b695e..15b9eb5 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -21,6 +21,7 @@
 #include "port/atomics.h"
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/connpool.h"
 #include "replication/logicallauncher.h"
 #include "replication/logicalworker.h"
 #include "storage/dsm.h"
@@ -129,7 +130,10 @@ static const struct
 	},
 	{
 		"ApplyWorkerMain", ApplyWorkerMain
-	}
+	},
+	{
+		"StartupPacketReaderMain", StartupPacketReaderMain
+	}
 };
 
 /* Private functions. */
diff --git a/src/backend/postmaster/connpool.c b/src/backend/postmaster/connpool.c
new file mode 100644
index 0000000..1a25055
--- /dev/null
+++ b/src/backend/postmaster/connpool.c
@@ -0,0 +1,276 @@
+/*-------------------------------------------------------------------------
+ * connpool.c
+ *	   PostgreSQL connection pool workers.
+ *
+ * Copyright (c) 2018, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *    src/backend/postmaster/connpool.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <signal.h>
+#include <unistd.h>
+
+#include "lib/stringinfo.h"
+#include "libpq/libpq.h"
+#include "libpq/pqformat.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "postmaster/bgworker.h"
+#include "postmaster/connpool.h"
+#include "postmaster/postmaster.h"
+#include "storage/proc.h"
+#include "utils/memutils.h"
+#include "utils/resowner.h"
+#include "tcop/tcopprot.h"
+
+/*
+ * GUC parameters
+ */
+int			NumConnPoolWorkers = 2;
+
+/*
+ * Global variables
+ */
+ConnPoolWorker	*ConnPoolWorkers;
+
+/*
+ * Signals management
+ */
+static volatile sig_atomic_t shutdown_requested = false;
+static void handle_sigterm(SIGNAL_ARGS);
+
+static void *pqstate;
+
+static void
+handle_sigterm(SIGNAL_ARGS)
+{
+	int save_errno = errno;
+	shutdown_requested = true;
+	SetLatch(&MyProc->procLatch);
+	errno = save_errno;
+}
+
+Size
+ConnPoolShmemSize(void)
+{
+	return MAXALIGN(sizeof(ConnPoolWorker) * NumConnPoolWorkers);
+}
+
+void
+ConnectionPoolWorkersInit(void)
+{
+	int		i;
+	bool	found;
+	Size	size = ConnPoolShmemSize();
+
+	ConnPoolWorkers = ShmemInitStruct("connection pool workers",
+			size, &found);
+
+	if (!found)
+	{
+		MemSet(ConnPoolWorkers, 0, size);
+		for (i = 0; i < NumConnPoolWorkers; i++)
+		{
+			ConnPoolWorker	*worker = &ConnPoolWorkers[i];
+			if (socketpair(AF_UNIX, SOCK_STREAM, 0, worker->pipes) < 0)
+				elog(FATAL, "could not create socket pair for connection pool");
+		}
+	}
+}
+
+/*
+ * Register background workers for startup packet reading.
+ */
+void
+RegisterConnPoolWorkers(void)
+{
+	int					i;
+	BackgroundWorker	bgw;
+
+	if (SessionPoolSize == 0)
+		/* no need to start workers */
+		return;
+
+	for (i = 0; i < NumConnPoolWorkers; i++)
+	{
+		memset(&bgw, 0, sizeof(bgw));
+		bgw.bgw_flags = BGWORKER_SHMEM_ACCESS;
+		bgw.bgw_start_time = BgWorkerStart_PostmasterStart;
+		snprintf(bgw.bgw_library_name, BGW_MAXLEN, "postgres");
+		snprintf(bgw.bgw_function_name, BGW_MAXLEN, "StartupPacketReaderMain");
+		snprintf(bgw.bgw_name, BGW_MAXLEN,
+				 "connection pool worker %d", i + 1);
+		bgw.bgw_restart_time = 3;
+		bgw.bgw_notify_pid = 0;
+		bgw.bgw_main_arg = (Datum) i;
+
+		RegisterBackgroundWorker(&bgw);
+	}
+
+	elog(LOG, "connection pool workers have been registered");
+}
+
+static void
+resetWorkerState(ConnPoolWorker *worker, Port *port)
+{
+	/* Cleanup */
+	whereToSendOutput = DestNone;
+	if (port != NULL)
+	{
+		if (port->sock != PGINVALID_SOCKET)
+			closesocket(port->sock);
+		if (port->pqcomm_waitset != NULL)
+			FreeWaitEventSet(port->pqcomm_waitset);
+		port = NULL;
+	}
+	pq_set_current_state(pqstate, NULL, NULL);
+}
+
+void
+StartupPacketReaderMain(Datum arg)
+{
+	sigjmp_buf	local_sigjmp_buf;
+	ConnPoolWorker *worker = &ConnPoolWorkers[(int) arg];
+	MemoryContext	mcxt;
+	int				status;
+	Port		   *port = NULL;
+
+	pqsignal(SIGTERM, handle_sigterm);
+	BackgroundWorkerUnblockSignals();
+
+	mcxt = AllocSetContextCreate(TopMemoryContext,
+								 "temporary context",
+							     ALLOCSET_DEFAULT_SIZES);
+	pqstate = pq_init(TopMemoryContext);
+	worker->pid = MyProcPid;
+	worker->latch = MyLatch;
+	Assert(MyLatch == &MyProc->procLatch);
+
+	MemoryContextSwitchTo(mcxt);
+
+	/* If an exception is encountered, processing resumes here */
+	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
+	{
+		/* Since not using PG_TRY, must reset error stack by hand */
+		error_context_stack = NULL;
+
+		/* Prevent interrupts while cleaning up */
+		HOLD_INTERRUPTS();
+
+		/* Report the error to the server log and to the client */
+		EmitErrorReport();
+
+		/*
+		 * Now return to normal top-level context and clear ErrorContext for
+		 * next time.
+		 */
+		MemoryContextSwitchTo(mcxt);
+		FlushErrorState();
+
+		/*
+		 * We only reset the worker state here; memory will be cleaned up
+		 * on the next cycle. That's enough for now.
+		 */
+		resetWorkerState(worker, port);
+
+		/* Ready for new sockets */
+		worker->state = CPW_FREE;
+
+		/* Now we can allow interrupts again */
+		RESUME_INTERRUPTS();
+	}
+
+	/* We can now handle ereport(ERROR) */
+	PG_exception_stack = &local_sigjmp_buf;
+
+	while (!shutdown_requested)
+	{
+		ListCell	   *lc;
+		int				rc;
+		StringInfoData	buf;
+
+		rc = WaitLatch(&MyProc->procLatch,
+				WL_LATCH_SET | WL_POSTMASTER_DEATH,
+				0, PG_WAIT_EXTENSION);
+
+		if (rc & WL_POSTMASTER_DEATH)
+			break;
+
+		ResetLatch(&MyProc->procLatch);
+
+		if (shutdown_requested)
+			break;
+
+		if (worker->state != CPW_NEW_SOCKET)
+			/* we woke up for some other reason */
+			continue;
+
+		/* Set up temporary pq state for startup packet */
+		port = palloc0(sizeof(Port));
+		port->sock = PGINVALID_SOCKET;
+
+		while (port->sock == PGINVALID_SOCKET)
+			port->sock = pg_recv_sock(worker->pipes[1]);
+
+		/* init pqcomm */
+		port->pqcomm_waitset = pq_create_backend_event_set(mcxt, port, true);
+		port->canAcceptConnections = worker->cac_state;
+		pq_set_current_state(pqstate, port, port->pqcomm_waitset);
+		whereToSendOutput = DestRemote;
+
+		/* TODO: deal with timeouts */
+		status = ProcessStartupPacket(port, false, mcxt, ERROR);
+		if (status != STATUS_OK)
+		{
+			worker->state = CPW_FREE;
+			goto cleanup;
+		}
+
+		/* Serialize a port into stringinfo */
+		pq_beginmessage(&buf, 'P');
+		pq_sendint(&buf, port->proto, 4);
+		pq_sendstring(&buf, port->database_name);
+		pq_sendstring(&buf, port->user_name);
+		pq_sendint(&buf, list_length(port->guc_options), 4);
+
+		foreach(lc, port->guc_options)
+		{
+			char *str = (char *) lfirst(lc);
+			pq_sendstring(&buf, str);
+		}
+
+		if (port->cmdline_options)
+		{
+			pq_sendint(&buf, 1, 4);
+			pq_sendstring(&buf, port->cmdline_options);
+		}
+		else
+			pq_sendint(&buf, 0, 4);
+
+		worker->state = CPW_PROCESSED;
+
+		/* send size of data */
+		while ((rc = send(worker->pipes[1], &buf.len, sizeof(buf.len), 0)) < 0 && errno == EINTR);
+
+		if (rc != (int) sizeof(buf.len))
+			elog(ERROR, "could not send data to postmaster");
+
+		/* send the data */
+		while ((rc = send(worker->pipes[1], buf.data, buf.len, 0)) < 0 && errno == EINTR);
+
+		if (rc != buf.len)
+			elog(ERROR, "could not send data to postmaster");
+
+		pfree(buf.data);
+		buf.data = NULL;
+
+cleanup:
+		resetWorkerState(worker, port);
+		MemoryContextReset(mcxt);
+	}
+
+	resetWorkerState(worker, NULL);
+}
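
For reference, the framing used on the worker's pipe when handing the parsed startup packet back to the postmaster, reconstructed from the pq_send* calls above (note that the 'P' passed to pq_beginmessage ends up in the StringInfo's cursor field and is never transmitted, and that the leading length travels as a raw send() in host byte order while everything after it is in network byte order):

    int32    len                 -- raw, host byte order
    int32    proto
    cstring  database_name
    cstring  user_name
    int32    n_guc_options
    cstring  guc_option[i]       -- repeated n_guc_options times
    int32    has_cmdline_options -- 0 or 1
    cstring  cmdline_options     -- present only when the flag is 1

The receiving side of this little protocol is PoolConnCreate() in postmaster.c below.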
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index 8a5b2b3..8bdc988 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -868,7 +868,8 @@ pgstat_report_stat(bool force)
 			PgStat_TableEntry *this_ent;
 
 			/* Shouldn't have any pending transaction-dependent counts */
-			Assert(entry->trans == NULL);
+			if (entry->trans != NULL)
+				continue;
 
 			/*
 			 * Ignore entries that didn't accumulate any actual counts, such
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index a4b53b3..85d6a18 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -76,6 +76,7 @@
 #include <sys/param.h>
 #include <netdb.h>
 #include <limits.h>
+#include <pthread.h>
 
 #ifdef HAVE_SYS_SELECT_H
 #include <sys/select.h>
@@ -114,6 +115,7 @@
 #include "postmaster/pgarch.h"
 #include "postmaster/postmaster.h"
 #include "postmaster/syslogger.h"
+#include "postmaster/connpool.h"
 #include "replication/logicallauncher.h"
 #include "replication/walsender.h"
 #include "storage/fd.h"
@@ -121,6 +123,7 @@
 #include "storage/pg_shmem.h"
 #include "storage/pmsignal.h"
 #include "storage/proc.h"
+#include "storage/procarray.h"
 #include "tcop/tcopprot.h"
 #include "utils/builtins.h"
 #include "utils/datetime.h"
@@ -150,6 +153,11 @@
 #define BACKEND_TYPE_WORKER		(BACKEND_TYPE_AUTOVAC | BACKEND_TYPE_BGWORKER)
 
 /*
+ * Load average assigned to a backend that has not yet been started
+ * (used in session scheduling for connection pooling)
+ */
+#define INIT_BACKEND_LOAD_AVERAGE 10
+
+/*
  * List of active backends (or child processes anyway; we don't actually
  * know whether a given child has become a backend or is still in the
  * authorization phase).  This is used mainly to keep track of how many
@@ -170,6 +178,7 @@ typedef struct bkend
 	pid_t		pid;			/* process id of backend */
 	int32		cancel_key;		/* cancel key for cancels for this backend */
 	int			child_slot;		/* PMChildSlot for this backend, if any */
+	pgsocket    session_send_sock;  /* write end of the socket pair used to pass session socket descriptors to this backend */
 
 	/*
 	 * Flavor of backend or auxiliary process.  Note that BACKEND_TYPE_WALSND
@@ -178,8 +187,13 @@ typedef struct bkend
 	 */
 	int			bkend_type;
 	bool		dead_end;		/* is it going to send an error and quit? */
-	bool		bgworker_notify;	/* gets bgworker start/stop notifications */
+	bool		bgworker_notify;/* gets bgworker start/stop notifications */
 	dlist_node	elem;			/* list link in BackendList */
+	int         session_pool_id;/* identifier of the backend's session pool */
+	int         worker_id;      /* identifier of the worker within the session pool */
+	void	   *pool;			/* pool of backends */
+	PGPROC     *proc;           /* PGPROC entry for this backend */
+	uint64      n_sessions;     /* number of sessions scheduled to this backend */
 } Backend;
 
 static dlist_head BackendList = DLIST_STATIC_INIT(BackendList);
@@ -190,7 +204,27 @@ static Backend *ShmemBackendArray;
 
 BackgroundWorker *MyBgworkerEntry = NULL;
 
+struct DatabasePoolKey {
+	char database[NAMEDATALEN];
+	char username[NAMEDATALEN];
+};
 
+typedef struct DatabasePool
+{
+	struct DatabasePoolKey key;
+
+	Backend	  **workers;	/* pool backends */
+	int			n_workers;	/* number of launched worker backends
+							   in this pool so far */
+	int			rr_index;	/* index of the current backend, used to implement
+							 * round-robin distribution of sessions across
+							 * backends */
+} DatabasePool;
+
+static struct
+{
+	HTAB			   *pools;
+} PostmasterSessionPool;
 
 /* The socket number we are listening for connections on */
 int			PostPortNumber;
@@ -214,7 +248,7 @@ int			ReservedBackends;
 
 /* The socket(s) we're listening to. */
 #define MAXLISTEN	64
-static pgsocket ListenSocket[MAXLISTEN];
+static pgsocket ListenSocket[MAXLISTEN + MAX_CONNPOOL_WORKERS];
 
 /*
  * Set by the -o option
@@ -393,15 +427,19 @@ static void unlink_external_pid_file(int status, Datum arg);
 static void getInstallationPaths(const char *argv0);
 static void checkControlFile(void);
 static Port *ConnCreate(int serverFd);
+static Port *PoolConnCreate(pgsocket poolFd, int workerId);
 static void ConnFree(Port *port);
+static void ConnDispatch(Port *port);
 static void reset_shared(int port);
 static void SIGHUP_handler(SIGNAL_ARGS);
+static CAC_state canAcceptConnections(void);
 static void pmdie(SIGNAL_ARGS);
 static void reaper(SIGNAL_ARGS);
 static void sigusr1_handler(SIGNAL_ARGS);
 static void startup_die(SIGNAL_ARGS);
 static void dummy_handler(SIGNAL_ARGS);
 static void StartupPacketTimeoutHandler(void);
+static int BackendStartup(DatabasePool *pool, Port *port);
 static void CleanupBackend(int pid, int exitstatus);
 static bool CleanupBackgroundWorker(int pid, int exitstatus);
 static void HandleChildCrash(int pid, int exitstatus, const char *procname);
@@ -412,13 +450,11 @@ static void BackendInitialize(Port *port);
 static void BackendRun(Port *port) pg_attribute_noreturn();
 static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
-static int	BackendStartup(Port *port);
-static int	ProcessStartupPacket(Port *port, bool SSLdone);
 static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
 static int	initMasks(fd_set *rmask);
+static void report_postmaster_failure_to_client(Port *port, char const* errmsg);
 static void report_fork_failure_to_client(Port *port, int errnum);
-static CAC_state canAcceptConnections(void);
 static bool RandomCancelKey(int32 *cancel_key);
 static void signal_child(pid_t pid, int signal);
 static bool SignalSomeChildren(int signal, int targets);
@@ -486,6 +522,7 @@ typedef struct
 {
 	Port		port;
 	InheritableSocket portsocket;
+	InheritableSocket sessionsocket;
 	char		DataDir[MAXPGPATH];
-	pgsocket	ListenSocket[MAXLISTEN];
+	pgsocket	ListenSocket[MAXLISTEN + MAX_CONNPOOL_WORKERS];
 	int32		MyCancelKey;
@@ -988,6 +1025,11 @@ PostmasterMain(int argc, char *argv[])
 	ApplyLauncherRegister();
 
 	/*
+	 * Register connection pool workers
+	 */
+	RegisterConnPoolWorkers();
+
+	/*
 	 * process any libraries that should be preloaded at postmaster start
 	 */
 	process_shared_preload_libraries();
@@ -1613,6 +1655,177 @@ DetermineSleepTime(struct timeval *timeout)
 	}
 }
 
+static bool
+IsDedicatedDatabase(char const* dbname)
+{
+	List       *namelist;
+	ListCell   *l;
+	char       *databases;
+	bool       found = false;
+
+	/* Need a modifiable copy of the DedicatedDatabases string */
+	databases = pstrdup(DedicatedDatabases);
+
+	if (!SplitIdentifierString(databases, ',', &namelist))
+		elog(ERROR, "invalid list syntax");
+	foreach(l, namelist)
+	{
+		char *curname = (char *) lfirst(l);
+		if (strcmp(curname, dbname) == 0)
+		{
+			found = true;
+			break;
+		}
+	}
+	list_free(namelist);
+	pfree(databases);
+
+	return found;
+}
+
+/*
+ * Find free worker and send socket
+ */
+static void
+SendPortToConnectionPool(Port *port)
+{
+	int		i;
+	bool	sent;
+
+	/* By default the backend is not dedicated */
+	IsDedicatedBackend = false;
+
+	sent = false;
+
+again:
+	for (i = 0; i < NumConnPoolWorkers; i++)
+	{
+		ConnPoolWorker	*worker = &ConnPoolWorkers[i];
+		if (worker->pid == 0)
+			continue;
+
+		if (worker->state == CPW_PROCESSED)
+		{
+			Port *conn = PoolConnCreate(worker->pipes[0], i);
+			if (conn)
+				ConnDispatch(conn);
+		}
+		if (worker->state == CPW_FREE)
+		{
+			worker->port = port;
+			worker->state = CPW_NEW_SOCKET;
+			worker->cac_state = canAcceptConnections();
+
+			if (pg_send_sock(worker->pipes[0], port->sock, worker->pid) < 0)
+			{
+				elog(LOG, "could not send socket to connection pool: %m");
+				ExitPostmaster(1);
+			}
+			SetLatch(worker->latch);
+			sent = true;
+			break;
+		}
+	}
+
+	if (!sent)
+	{
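+		/* All workers are busy: sleep for a millisecond and retry */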
+		pg_usleep(1000L);
+		goto again;
+	}
+}
+
+static void
+ConnDispatch(Port *port)
+{
+	bool			found;
+	DatabasePool   *pool;
+	struct DatabasePoolKey	key;
+
+	Assert(port->sock != PGINVALID_SOCKET);
+	if (IsDedicatedDatabase(port->database_name))
+	{
+		IsDedicatedBackend = true;
+		BackendStartup(NULL, port);
+		goto cleanup;
+	}
+
+#ifdef USE_SSL
+	if (port->ssl_in_use)
+	{
+		/*
+		 * We don't (yet) support SSL connections with connection pool,
+		 * since we need to move whole SSL context to already working
+		 * backend. This task needs more investigation.
+		 */
+		elog(ERROR, "connection pool does not support SSL connections");
+		goto cleanup;
+	}
+#endif
+	MemSet(key.database, 0, NAMEDATALEN);
+	MemSet(key.username, 0, NAMEDATALEN);
+
+	strlcpy(key.database, port->database_name, NAMEDATALEN);
+	strlcpy(key.username, port->user_name, NAMEDATALEN);
+
+	pool = hash_search(PostmasterSessionPool.pools, &key, HASH_ENTER, &found);
+	if (!found)
+	{
+		pool->key = key;
+		pool->workers = NULL;
+		pool->n_workers = 0;
+		pool->rr_index = 0;
+	}
+
+	BackendStartup(pool, port);
+
+cleanup:
+	/*
+	 * We no longer need the open socket or port structure
+	 * in this process
+	 */
+	StreamClose(port->sock);
+	ConnFree(port);
+}
+
+/*
+ * Init wait event set for connection pool workers,
+ * and hash table for backends in pool.
+ */
+static int
+InitConnPoolState(fd_set *rmask, int numSockets)
+{
+	int			i;
+	HASHCTL		ctl;
+
+	/*
+	 * Create hash table of backend pools keyed by database/user pair
+	 */
+	MemSet(&ctl, 0, sizeof(ctl));
+	ctl.keysize = sizeof(struct DatabasePoolKey);
+	ctl.entrysize = sizeof(DatabasePool);
+	ctl.hcxt = PostmasterContext;
+	PostmasterSessionPool.pools = hash_create("Pool by database and user", 100,
+								  &ctl, HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+
+	for (i = 0; i < NumConnPoolWorkers; i++)
+	{
+		ConnPoolWorker	*worker = &ConnPoolWorkers[i];
+		worker->port = NULL;
+
+		/*
+		 * we use same pselect(3) call for connection pool workers and
+		 * clients
+		 */
+		ListenSocket[MAXLISTEN + i] = worker->pipes[0];
+		FD_SET(worker->pipes[0], rmask);
+		if (worker->pipes[0] > numSockets)
+			numSockets = worker->pipes[0];
+	}
+
+	return numSockets + 1;
+}
+
 /*
  * Main idle loop of postmaster
  *
@@ -1630,6 +1843,9 @@ ServerLoop(void)
 
 	nSockets = initMasks(&readmask);
 
+	if (SessionPoolSize > 0)
+		nSockets = InitConnPoolState(&readmask, nSockets);
+
 	for (;;)
 	{
 		fd_set		rmask;
@@ -1690,27 +1906,43 @@ ServerLoop(void)
 		 */
 		if (selres > 0)
 		{
+			Port	   *port;
 			int			i;
 
+			/* Check for client connections */
 			for (i = 0; i < MAXLISTEN; i++)
 			{
 				if (ListenSocket[i] == PGINVALID_SOCKET)
 					break;
 				if (FD_ISSET(ListenSocket[i], &rmask))
 				{
-					Port	   *port;
-
 					port = ConnCreate(ListenSocket[i]);
 					if (port)
 					{
-						BackendStartup(port);
-
-						/*
-						 * We no longer need the open socket or port structure
-						 * in this process
-						 */
-						StreamClose(port->sock);
-						ConnFree(port);
+						if (SessionPoolSize == 0)
+						{
+							IsDedicatedBackend = true;
+							BackendStartup(NULL, port);
+							StreamClose(port->sock);
+							ConnFree(port);
+						}
+						else
+							SendPortToConnectionPool(port);
+					}
+				}
+			}
+
+			/* Check for some data from connections pool */
+			if (SessionPoolSize > 0)
+			{
+				for (i = 0; i < NumConnPoolWorkers; i++)
+				{
+					if (FD_ISSET(ListenSocket[MAXLISTEN + i], &rmask))
+					{
+						port = PoolConnCreate(ListenSocket[MAXLISTEN + i], i);
+						if (port)
+							ConnDispatch(port);
+
 					}
 				}
 			}
@@ -1893,13 +2125,15 @@ initMasks(fd_set *rmask)
  * send anything to the client, which would typically be appropriate
  * if we detect a communications failure.)
  */
-static int
-ProcessStartupPacket(Port *port, bool SSLdone)
+int
+ProcessStartupPacket(Port *port, bool SSLdone, MemoryContext memctx,
+						int errlevel)
 {
 	int32		len;
 	void	   *buf;
 	ProtocolVersion proto;
-	MemoryContext oldcontext;
+	MemoryContext oldcontext = MemoryContextSwitchTo(memctx);
+	int			result;
 
 	pq_startmsgread();
 	if (pq_getbytes((char *) &len, 4) == EOF)
@@ -1992,7 +2226,7 @@ retry1:
 #endif
 		/* regular startup packet, cancel, etc packet should follow... */
 		/* but not another SSL negotiation request */
-		return ProcessStartupPacket(port, true);
+		return ProcessStartupPacket(port, true, memctx, errlevel);
 	}
 
 	/* Could add additional special packet types here */
@@ -2006,13 +2240,16 @@ retry1:
 	/* Check that the major protocol version is in range. */
 	if (PG_PROTOCOL_MAJOR(proto) < PG_PROTOCOL_MAJOR(PG_PROTOCOL_EARLIEST) ||
 		PG_PROTOCOL_MAJOR(proto) > PG_PROTOCOL_MAJOR(PG_PROTOCOL_LATEST))
-		ereport(FATAL,
+	{
+		ereport(errlevel,
 				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 				 errmsg("unsupported frontend protocol %u.%u: server supports %u.0 to %u.%u",
 						PG_PROTOCOL_MAJOR(proto), PG_PROTOCOL_MINOR(proto),
 						PG_PROTOCOL_MAJOR(PG_PROTOCOL_EARLIEST),
 						PG_PROTOCOL_MAJOR(PG_PROTOCOL_LATEST),
 						PG_PROTOCOL_MINOR(PG_PROTOCOL_LATEST))));
+		return STATUS_ERROR;
+	}
 
 	/*
 	 * Now fetch parameters out of startup packet and save them into the Port
@@ -2022,7 +2259,7 @@ retry1:
 	 * not worry about leaking this storage on failure, since we aren't in the
 	 * postmaster process anymore.
 	 */
-	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	oldcontext = MemoryContextSwitchTo(memctx);
 
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
@@ -2070,12 +2307,15 @@ retry1:
 					am_db_walsender = true;
 				}
 				else if (!parse_bool(valptr, &am_walsender))
-					ereport(FATAL,
+				{
+					ereport(errlevel,
 							(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
 							 errmsg("invalid value for parameter \"%s\": \"%s\"",
 									"replication",
 									valptr),
 							 errhint("Valid values are: \"false\", 0, \"true\", 1, \"database\".")));
+					return STATUS_ERROR;
+				}
 			}
 			else if (strncmp(nameptr, "_pq_.", 5) == 0)
 			{
@@ -2103,9 +2343,12 @@ retry1:
 		 * given packet length, complain.
 		 */
 		if (offset != len - 1)
-			ereport(FATAL,
+		{
+			ereport(errlevel,
 					(errcode(ERRCODE_PROTOCOL_VIOLATION),
 					 errmsg("invalid startup packet layout: expected terminator as last byte")));
+			return STATUS_ERROR;
+		}
 
 		/*
 		 * If the client requested a newer protocol version or if the client
@@ -2141,9 +2384,12 @@ retry1:
 
 	/* Check a user name was given. */
 	if (port->user_name == NULL || port->user_name[0] == '\0')
-		ereport(FATAL,
+	{
+		ereport(errlevel,
 				(errcode(ERRCODE_INVALID_AUTHORIZATION_SPECIFICATION),
 				 errmsg("no PostgreSQL user name specified in startup packet")));
+		return STATUS_ERROR;
+	}
 
 	/* The database defaults to the user name. */
 	if (port->database_name == NULL || port->database_name[0] == '\0')
@@ -2197,27 +2443,32 @@ retry1:
 	 * now instead of wasting cycles on an authentication exchange. (This also
 	 * allows a pg_ping utility to be written.)
 	 */
+	result = STATUS_OK;
 	switch (port->canAcceptConnections)
 	{
 		case CAC_STARTUP:
-			ereport(FATAL,
+			ereport(errlevel,
 					(errcode(ERRCODE_CANNOT_CONNECT_NOW),
 					 errmsg("the database system is starting up")));
+			result = STATUS_ERROR;
 			break;
 		case CAC_SHUTDOWN:
-			ereport(FATAL,
+			ereport(errlevel,
 					(errcode(ERRCODE_CANNOT_CONNECT_NOW),
 					 errmsg("the database system is shutting down")));
+			result = STATUS_ERROR;
 			break;
 		case CAC_RECOVERY:
-			ereport(FATAL,
+			ereport(errlevel,
 					(errcode(ERRCODE_CANNOT_CONNECT_NOW),
 					 errmsg("the database system is in recovery mode")));
+			result = STATUS_ERROR;
 			break;
 		case CAC_TOOMANY:
-			ereport(FATAL,
+			ereport(errlevel,
 					(errcode(ERRCODE_TOO_MANY_CONNECTIONS),
 					 errmsg("sorry, too many clients already")));
+			result = STATUS_ERROR;
 			break;
 		case CAC_WAITBACKUP:
 			/* OK for now, will check in InitPostgres */
@@ -2226,7 +2477,7 @@ retry1:
 			break;
 	}
 
-	return STATUS_OK;
+	return result;
 }
 
 /*
@@ -2322,7 +2573,7 @@ processCancelRequest(Port *port, void *pkt)
 /*
  * canAcceptConnections --- check to see if database state allows connections.
  */
-static CAC_state
+CAC_state
 canAcceptConnections(void)
 {
 	CAC_state	result = CAC_OK;
@@ -2398,7 +2649,7 @@ ConnCreate(int serverFd)
 		ConnFree(port);
 		return NULL;
 	}
-
+	SessionPoolSock = PGINVALID_SOCKET;
 	/*
 	 * Allocate GSSAPI specific state struct
 	 */
@@ -2418,6 +2669,69 @@ ConnCreate(int serverFd)
 	return port;
 }
 
+#define CONN_BUF_SIZE 8192
+
+static Port *
+PoolConnCreate(pgsocket poolFd, int workerId)
+{
+	char				recv_buf[CONN_BUF_SIZE];
+	int					recv_len = 0,
+						i,
+						rc,
+						offs,
+						len;
+	StringInfoData		buf;
+	ConnPoolWorker	   *worker = &ConnPoolWorkers[workerId];
+	Port			   *port = worker->port;
+
+	if (worker->state != CPW_PROCESSED)
+		return NULL;
+
+	/* In any case we should free the worker */
+	worker->port = NULL;
+	worker->state = CPW_FREE;
+
+	/* get size of data */
+	while ((rc = read(poolFd, &recv_len, sizeof recv_len)) < 0 && errno == EINTR);
+
+	if (rc != (int) sizeof(recv_len) || recv_len <= 0 || recv_len > CONN_BUF_SIZE)
+		goto io_error;
+
+	/* get the data */
+	for (offs = 0; offs < recv_len; offs += rc)
+	{
+		while ((rc = read(poolFd, recv_buf + offs, recv_len - offs)) < 0 && errno == EINTR);
+		if (rc <= 0)
+			goto io_error;
+	}
+
+	buf.cursor = 0;
+	buf.data = recv_buf;
+	buf.len = recv_len;
+
+	port->proto = pq_getmsgint(&buf, 4);
+	port->database_name = MemoryContextStrdup(TopMemoryContext, pq_getmsgstring(&buf));
+	port->user_name = MemoryContextStrdup(TopMemoryContext, pq_getmsgstring(&buf));
+	port->guc_options = NIL;
+
+	/* GUC */
+	len = pq_getmsgint(&buf, 4);
+	for (i = 0; i < len; i++)
+	{
+		char	*val = MemoryContextStrdup(TopMemoryContext, pq_getmsgstring(&buf));
+		port->guc_options = lappend(port->guc_options, val);
+	}
+
+	if (pq_getmsgint(&buf, 4) > 0)
+		port->cmdline_options = MemoryContextStrdup(TopMemoryContext, pq_getmsgstring(&buf));
+
+	return port;
+
+io_error:
+	StreamClose(port->sock);
+	ConnFree(port);
+	return NULL;
+}
 
 /*
  * ConnFree -- free a local connection data structure
@@ -2430,6 +2744,12 @@ ConnFree(Port *conn)
 #endif
 	if (conn->gss)
 		free(conn->gss);
+	if (conn->database_name)
+		pfree(conn->database_name);
+	if (conn->user_name)
+		pfree(conn->user_name);
+	if (conn->cmdline_options)
+		pfree(conn->cmdline_options);
 	free(conn);
 }
 
@@ -3185,6 +3505,44 @@ CleanupBackgroundWorker(int pid,
 }
 
 /*
+ * Unlink backend from backend's list and free memory.
+ */
+static void
+UnlinkPooledBackend(Backend *bp)
+{
+	DatabasePool	*pool = bp->pool;
+
+	if (!pool ||
+		bp->bkend_type != BACKEND_TYPE_NORMAL ||
+		bp->session_send_sock == PGINVALID_SOCKET)
+		return;
+
+	Assert(pool->n_workers > bp->worker_id &&
+		   pool->workers[bp->worker_id] == bp);
+
+	if (--pool->n_workers != 0)
+	{
+		pool->workers[bp->worker_id] = pool->workers[pool->n_workers];
+		pool->workers[bp->worker_id]->worker_id = bp->worker_id;
+		pool->rr_index %= pool->n_workers;
+	}
+
+	closesocket(bp->session_send_sock);
+	bp->session_send_sock = PGINVALID_SOCKET;
+
+	elog(DEBUG2, "cleaning up pooled backend %d", bp->pid);
+}
+
+static void
+DeleteBackend(Backend *bp)
+{
+	UnlinkPooledBackend(bp);
+
+	dlist_delete(&bp->elem);
+	free(bp);
+}
+
+/*
  * CleanupBackend -- cleanup after terminated backend.
  *
  * Remove all local state associated with backend.
@@ -3261,8 +3619,7 @@ CleanupBackend(int pid,
 				 */
 				BackgroundWorkerStopNotifications(bp->pid);
 			}
-			dlist_delete(iter.cur);
-			free(bp);
+			DeleteBackend(bp);
 			break;
 		}
 	}
@@ -3364,8 +3721,7 @@ HandleChildCrash(int pid, int exitstatus, const char *procname)
 				ShmemBackendArrayRemove(bp);
 #endif
 			}
-			dlist_delete(iter.cur);
-			free(bp);
+			DeleteBackend(bp);
 			/* Keep looping so we can signal remaining backends */
 		}
 		else
@@ -3955,6 +4311,118 @@ TerminateChildren(int signal)
 }
 
 /*
+ * Try to report error to client.
+ * Since we do not care to risk blocking the postmaster on
+ * this connection, we set the connection to non-blocking and try only once.
+ *
+ * This is grungy special-purpose code; we cannot use backend libpq since
+ * it's not up and running.
+ */
+static void
+report_postmaster_failure_to_client(Port *port, char const* errmsg)
+{
+	int rc;
+
+	/* Set port to non-blocking.  Don't do send() if this fails */
+	if (!pg_set_noblock(port->sock))
+		return;
+
+	/* We'll retry after EINTR, but ignore all other failures */
+	do
+	{
+		rc = send(port->sock, errmsg, strlen(errmsg) + 1, 0);
+	} while (rc < 0 && errno == EINTR);
+
+	elog(DEBUG1, "sent postmaster failure report to client: rc=%d", rc);
+}
+
+typedef struct
+{
+	Backend* worker;
+	double   load_average;
+} WorkerState;
+
+static int
+compareWorkerLoadAverage(void const* p, void const* q)
+{
+	WorkerState* ws1 = (WorkerState*)p;
+	WorkerState* ws2 = (WorkerState*)q;
+	return ws1->load_average < ws2->load_average ? -1 : ws1->load_average == ws2->load_average ? 0 : 1;
+}
+
+static int
+ScheduleSession(DatabasePool *pool, Port *port)
+{
+	int i, j;
+	int n_workers = pool->n_workers;
+	WorkerState ws[MAX_CONNPOOL_WORKERS];
+
+	for (i = 0; i < n_workers; i++)
+	{
+		Backend *worker;
+		switch (SessionSchedule)
+		{
+		  case SESSION_SCHED_RANDOM:
+			worker = pool->workers[random() % n_workers];
+			break;
+		  case SESSION_SCHED_ROUND_ROBIN:
+			worker = pool->workers[pool->rr_index];
+			pool->rr_index = (pool->rr_index + 1) % n_workers; /* round-robin */
+			break;
+		  case SESSION_SCHED_LOAD_BALANCING:
+			if (i == 0)
+			{
+				for (j = 0; j < n_workers; j++)
+				{
+					worker = pool->workers[j];
+					if (!worker->proc)
+						worker->proc = BackendPidGetProc(worker->pid);
+					ws[j].worker = worker;
+					ws[j].load_average = (worker->proc && worker->proc->nSessionSchedules > 0)
+						? (double)worker->proc->nReadySessions / worker->proc->nSessionSchedules
+						: INIT_BACKEND_LOAD_AVERAGE;
+				}
+				qsort(ws, n_workers, sizeof(WorkerState), compareWorkerLoadAverage);
+			}
+			worker = ws[i].worker;
+			break;
+		  default:
+			Assert(false);
+		}
+		if (!worker->proc)
+			worker->proc = BackendPidGetProc(worker->pid);
+
+		if (worker->proc && worker->n_sessions - worker->proc->nFinishedSessions >= MaxSessions)
+		{
+			elog(LOG, "worker %d has reached the max session limit %d", worker->pid, MaxSessions);
+			continue;
+		}
+		/* Send connection socket to the worker backend */
+		if (pg_send_sock(worker->session_send_sock, port->sock, worker->pid) < 0)
+		{
+			elog(LOG, "could not send session socket %d: %m",
+				 worker->session_send_sock);
+			UnlinkPooledBackend(worker);
+			n_workers -= 1;
+			i = -1; /* restart the loop from the very beginning */
+			continue;
+		}
+		worker->n_sessions += 1;
+		elog(DEBUG2, "starting new session for socket %d at backend %d",
+			 port->sock, worker->pid);
+
+		/* TODO: serialize the port and send it through socket */
+		return STATUS_OK;
+	}
+	ereport(LOG,
+			(errcode(ERRCODE_TOO_MANY_CONNECTIONS),
+			 errmsg("sorry, too many open sessions for connection pool %s/%s",
+					pool->key.database, pool->key.username)));
+	report_postmaster_failure_to_client(port, "ESorry, too many open sessions\n");
+	return STATUS_ERROR;
+}
+
+/*
  * BackendStartup -- start backend process
  *
  * returns: STATUS_ERROR if the fork failed, STATUS_OK otherwise.
@@ -3962,16 +4430,24 @@ TerminateChildren(int signal)
  * Note: if you change this code, also consider StartAutovacuumWorker.
  */
 static int
-BackendStartup(Port *port)
+BackendStartup(DatabasePool *pool, Port *port)
 {
 	Backend    *bn;				/* for backend cleanup */
 	pid_t		pid;
+	pgsocket    session_pipe[2];
+
+	/*
+	 * In case of session pooling, instead of spawning a new backend, open
+	 * a new session in one of the existing backends.
+	 */
+	if (pool && pool->n_workers >= SessionPoolSize)
+		return ScheduleSession(pool, port);
 
 	/*
 	 * Create backend data structure.  Better before the fork() so we can
 	 * handle failure cleanly.
 	 */
-	bn = (Backend *) malloc(sizeof(Backend));
+	bn = (Backend *) calloc(1, sizeof(Backend));
 	if (!bn)
 	{
 		ereport(LOG,
@@ -3979,6 +4455,7 @@ BackendStartup(Port *port)
 				 errmsg("out of memory")));
 		return STATUS_ERROR;
 	}
+	bn->n_sessions = 1;
 
 	/*
 	 * Compute the cancel key that will be assigned to this backend. The
@@ -4012,12 +4489,30 @@ BackendStartup(Port *port)
 	/* Hasn't asked to be notified about any bgworkers yet */
 	bn->bgworker_notify = false;
 
+	/* Create socket pair for sending session sockets to the backend */
+	if (!IsDedicatedBackend)
+	{
+		if (socketpair(AF_UNIX, SOCK_STREAM, 0, session_pipe) < 0)
+			ereport(FATAL,
+					(errcode_for_file_access(),
+					 errmsg_internal("could not create socket pair for launching sessions: %m")));
+#ifdef WIN32
+		SessionPoolSock = session_pipe[0];
+#endif
+	}
 #ifdef EXEC_BACKEND
 	pid = backend_forkexec(port);
 #else							/* !EXEC_BACKEND */
 	pid = fork_process();
 	if (pid == 0)				/* child */
 	{
+		whereToSendOutput = DestNone;
+
+		if (!IsDedicatedBackend)
+		{
+			SessionPoolSock = session_pipe[0]; /* Use this socket for receiving client session socket descriptor */
+			close(session_pipe[1]); /* Close unused end of the pipe */
+		}
 		free(bn);
 
 		/* Detangle from postmaster */
@@ -4026,11 +4521,14 @@ BackendStartup(Port *port)
 		/* Close the postmaster's sockets */
 		ClosePostmasterPorts(false);
 
-		/* Perform additional initialization and collect startup packet */
+		/* Perform additional initialization */
 		BackendInitialize(port);
 
 		/* And run the backend */
 		BackendRun(port);
+
+		/* Unreachable */
+		Assert(false);
 	}
 #endif							/* EXEC_BACKEND */
 
@@ -4041,6 +4539,7 @@ BackendStartup(Port *port)
 
 		if (!bn->dead_end)
 			(void) ReleasePostmasterChildSlot(bn->child_slot);
+
 		free(bn);
 		errno = save_errno;
 		ereport(LOG,
@@ -4059,9 +4558,27 @@ BackendStartup(Port *port)
 	 * of backends.
 	 */
 	bn->pid = pid;
+	bn->session_send_sock = PGINVALID_SOCKET;
 	bn->bkend_type = BACKEND_TYPE_NORMAL;	/* Can change later to WALSND */
+	bn->pool = pool;
 	dlist_push_head(&BackendList, &bn->elem);
 
+	if (!IsDedicatedBackend)
+	{
+		/* Use this socket for sending client session socket descriptor */
+		bn->session_send_sock = session_pipe[1];
+
+		/* Close unused end of the pipe */
+		closesocket(session_pipe[0]);
+
+		if (pool->workers == NULL)
+			pool->workers = (Backend **) calloc(SessionPoolSize, sizeof(Backend *));
+
+		bn->worker_id = pool->n_workers++;
+		pool->workers[bn->worker_id] = bn;
+
+		elog(DEBUG1, "started pool worker %d with pid %d", pool->n_workers, pid);
+	}
 #ifdef EXEC_BACKEND
 	if (!bn->dead_end)
 		ShmemBackendArrayAdd(bn);
@@ -4082,22 +4599,13 @@ static void
 report_fork_failure_to_client(Port *port, int errnum)
 {
 	char		buffer[1000];
-	int			rc;
 
 	/* Format the error message packet (always V2 protocol) */
 	snprintf(buffer, sizeof(buffer), "E%s%s\n",
 			 _("could not fork new process for connection: "),
 			 strerror(errnum));
 
-	/* Set port to non-blocking.  Don't do send() if this fails */
-	if (!pg_set_noblock(port->sock))
-		return;
-
-	/* We'll retry after EINTR, but ignore all other failures */
-	do
-	{
-		rc = send(port->sock, buffer, strlen(buffer) + 1, 0);
-	} while (rc < 0 && errno == EINTR);
+	report_postmaster_failure_to_client(port, buffer);
 }
 
 
@@ -4122,6 +4630,7 @@ BackendInitialize(Port *port)
 
 	/* Save port etc. for ps status */
 	MyProcPort = port;
+	FrontendProtocol = port->proto;
 
 	/*
 	 * PreAuthDelay is a debugging aid for investigating problems in the
@@ -4148,7 +4657,10 @@ BackendInitialize(Port *port)
 	 * Initialize libpq and enable reporting of ereport errors to the client.
 	 * Must do this now because authentication uses libpq to send messages.
 	 */
-	pq_init();					/* initialize libpq to talk to client */
+	port->pqcomm_state = pq_init(TopMemoryContext);   /* initialize libpq to talk to client */
+	port->pqcomm_waitset = pq_create_backend_event_set(TopMemoryContext, port, false);
+	pq_set_current_state(port->pqcomm_state, port, port->pqcomm_waitset);
+
 	whereToSendOutput = DestRemote; /* now safe to ereport to client */
 
 	/*
@@ -4227,35 +4739,46 @@ BackendInitialize(Port *port)
 		port->remote_hostname = strdup(remote_host);
 
 	/*
-	 * Ready to begin client interaction.  We will give up and exit(1) after a
-	 * time delay, so that a broken client can't hog a connection
-	 * indefinitely.  PreAuthDelay and any DNS interactions above don't count
-	 * against the time limit.
-	 *
-	 * Note: AuthenticationTimeout is applied here while waiting for the
-	 * startup packet, and then again in InitPostgres for the duration of any
-	 * authentication operations.  So a hostile client could tie up the
-	 * process for nearly twice AuthenticationTimeout before we kick him off.
-	 *
-	 * Note: because PostgresMain will call InitializeTimeouts again, the
-	 * registration of STARTUP_PACKET_TIMEOUT will be lost.  This is okay
-	 * since we never use it again after this function.
+	 * Read the startup packet only if we are not using the session pool
 	 */
-	RegisterTimeout(STARTUP_PACKET_TIMEOUT, StartupPacketTimeoutHandler);
-	enable_timeout_after(STARTUP_PACKET_TIMEOUT, AuthenticationTimeout * 1000);
+	if (IsDedicatedBackend && !port->proto)
+	{
+		/*
+		 * Ready to begin client interaction.  We will give up and exit(1) after a
+		 * time delay, so that a broken client can't hog a connection
+		 * indefinitely.  PreAuthDelay and any DNS interactions above don't count
+		 * against the time limit.
+		 *
+		 * Note: AuthenticationTimeout is applied here while waiting for the
+		 * startup packet, and then again in InitPostgres for the duration of any
+		 * authentication operations.  So a hostile client could tie up the
+		 * process for nearly twice AuthenticationTimeout before we kick him off.
+		 *
+		 * Note: because PostgresMain will call InitializeTimeouts again, the
+		 * registration of STARTUP_PACKET_TIMEOUT will be lost.  This is okay
+		 * since we never use it again after this function.
+		 */
+		RegisterTimeout(STARTUP_PACKET_TIMEOUT, StartupPacketTimeoutHandler);
+		enable_timeout_after(STARTUP_PACKET_TIMEOUT, AuthenticationTimeout * 1000);
 
-	/*
-	 * Receive the startup packet (which might turn out to be a cancel request
-	 * packet).
-	 */
-	status = ProcessStartupPacket(port, false);
+		/*
+		 * Receive the startup packet (which might turn out to be a cancel request
+		 * packet).
+		 */
+		status = ProcessStartupPacket(port, false, TopMemoryContext, FATAL);
 
-	/*
-	 * Stop here if it was bad or a cancel packet.  ProcessStartupPacket
-	 * already did any appropriate error reporting.
-	 */
-	if (status != STATUS_OK)
-		proc_exit(0);
+		/*
+		 * Stop here if it was bad or a cancel packet.  ProcessStartupPacket
+		 * already did any appropriate error reporting.
+		 */
+		if (status != STATUS_OK)
+			proc_exit(0);
+
+		/*
+		 * Disable the timeout
+		 */
+		disable_timeout(STARTUP_PACKET_TIMEOUT, false);
+	}
 
 	/*
 	 * Now that we have the user and database name, we can set the process
@@ -4277,9 +4800,8 @@ BackendInitialize(Port *port)
 						update_process_title ? "authentication" : "");
 
 	/*
-	 * Disable the timeout, and prevent SIGTERM/SIGQUIT again.
+	 * Prevent SIGTERM/SIGQUIT again.
 	 */
-	disable_timeout(STARTUP_PACKET_TIMEOUT, false);
 	PG_SETMASK(&BlockSig);
 }
 
@@ -5990,6 +6512,9 @@ save_backend_variables(BackendParameters *param, Port *port,
 	if (!write_inheritable_socket(&param->portsocket, port->sock, childPid))
 		return false;
 
+	if (!write_inheritable_socket(&param->sessionsocket, SessionPoolSock, childPid))
+		return false;
+
 	strlcpy(param->DataDir, DataDir, MAXPGPATH);
 
 	memcpy(&param->ListenSocket, &ListenSocket, sizeof(ListenSocket));
@@ -6222,6 +6747,7 @@ restore_backend_variables(BackendParameters *param, Port *port)
 {
 	memcpy(port, &param->port, sizeof(Port));
 	read_inheritable_socket(&port->sock, &param->portsocket);
+	read_inheritable_socket(&SessionPoolSock, &param->sessionsocket);
 
 	SetDataDir(param->DataDir);
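
As an aside, for anyone who wants to try the postmaster changes: assuming the GUC names map one-to-one onto the C variables used above (the actual GUC definitions are elsewhere in the patch and may differ), a pooled setup could be configured along these lines:

    # hypothetical postgresql.conf excerpt; verify names against the guc.c hunk
    session_pool_size = 10            # SessionPoolSize: worker backends per database/user pair
    connection_pool_workers = 2       # NumConnPoolWorkers: startup packet reader workers
    session_schedule = 'round-robin'  # SessionSchedule: random | round-robin | load-balancing
    dedicated_databases = 'postgres'  # DedicatedDatabases: databases served by dedicated backends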
 
diff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c
index 8dd51f1..98b2722 100644
--- a/src/backend/storage/file/fd.c
+++ b/src/backend/storage/file/fd.c
@@ -116,7 +116,7 @@
  * the number of open files.  (This appears to be true on most if not
  * all platforms as of Feb 2004.)
  */
-#define NUM_RESERVED_FDS		10
+#define NUM_RESERVED_FDS		20
 
 /*
  * If we have fewer than this many usable FDs after allowing for the reserved
@@ -276,7 +276,6 @@ static int	nextTempTableSpace = 0;
  * Insert		   - put a file at the front of the Lru ring
  * LruInsert	   - put a file at the front of the Lru ring and open it
  * ReleaseLruFile  - Release an fd by closing the last entry in the Lru ring
- * ReleaseLruFiles - Release fd(s) until we're under the max_safe_fds limit
  * AllocateVfd	   - grab a free (or new) file record (from VfdArray)
  * FreeVfd		   - free a file record
  *
@@ -304,7 +303,6 @@ static void LruDelete(File file);
 static void Insert(File file);
 static int	LruInsert(File file);
 static bool ReleaseLruFile(void);
-static void ReleaseLruFiles(void);
 static File AllocateVfd(void);
 static void FreeVfd(File file);
 
@@ -1176,7 +1174,7 @@ ReleaseLruFile(void)
  * Release kernel FDs as needed to get under the max_safe_fds limit.
  * After calling this, it's OK to try to open another file.
  */
-static void
+void
 ReleaseLruFiles(void)
 {
 	while (nfile + numAllocatedDescs >= max_safe_fds)
diff --git a/src/backend/storage/ipc/ipc.c b/src/backend/storage/ipc/ipc.c
index a85a1c6..946e56f 100644
--- a/src/backend/storage/ipc/ipc.c
+++ b/src/backend/storage/ipc/ipc.c
@@ -304,6 +304,13 @@ atexit_callback(void)
 void
 on_proc_exit(pg_on_exit_callback function, Datum arg)
 {
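+	/*
+	 * If this callback/argument pair is already registered, ignore the
+	 * request rather than registering a duplicate.
+	 */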
+	int i = on_proc_exit_index;
+
+	while (--i >= 0)
+	{
+		if (on_proc_exit_list[i].function == function && on_proc_exit_list[i].arg == arg)
+			return;
+	}
 	if (on_proc_exit_index >= MAX_ON_EXITS)
 		ereport(FATAL,
 				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
@@ -413,3 +420,12 @@ on_exit_reset(void)
 	on_proc_exit_index = 0;
 	reset_on_dsm_detach();
 }
+
+void
+on_shmem_exit_reset(void)
+{
+	before_shmem_exit_index = 0;
+	on_shmem_exit_index = 0;
+	on_proc_exit_index = 0;
+	reset_on_dsm_detach();
+}
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index 0c86a58..10e4613 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -28,6 +28,7 @@
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
 #include "postmaster/postmaster.h"
+#include "postmaster/connpool.h"
 #include "replication/logicallauncher.h"
 #include "replication/slot.h"
 #include "replication/walreceiver.h"
@@ -150,6 +151,7 @@ CreateSharedMemoryAndSemaphores(bool makePrivate, int port)
 		size = add_size(size, SyncScanShmemSize());
 		size = add_size(size, AsyncShmemSize());
 		size = add_size(size, BackendRandomShmemSize());
+		size = add_size(size, ConnPoolShmemSize());
 #ifdef EXEC_BACKEND
 		size = add_size(size, ShmemBackendArraySize());
 #endif
@@ -271,6 +273,11 @@ CreateSharedMemoryAndSemaphores(bool makePrivate, int port)
 	AsyncShmemInit();
 	BackendRandomShmemInit();
 
+	/*
+	 * Set up connection pool workers
+	 */
+	ConnectionPoolWorkersInit();
+
 #ifdef EXEC_BACKEND
 
 	/*
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index f6dda9c..3c2a126 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -76,6 +76,7 @@ struct WaitEventSet
 {
 	int			nevents;		/* number of registered events */
 	int			nevents_space;	/* maximum number of events in this set */
+	int         free_events;    /* head of a singly linked list of free events, linked through "pos" and terminated by -1 */
 
 	/*
 	 * Array, of nevents_space length, storing the definition of events this
@@ -129,9 +130,9 @@ static void drainSelfPipe(void);
 #if defined(WAIT_USE_EPOLL)
 static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
 #elif defined(WAIT_USE_POLL)
-static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove);
 #elif defined(WAIT_USE_WIN32)
-static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove);
 #endif
 
 static inline int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
@@ -562,6 +563,7 @@ CreateWaitEventSet(MemoryContext context, int nevents)
 
 	set->latch = NULL;
 	set->nevents_space = nevents;
+	set->free_events = -1;
 
 #if defined(WAIT_USE_EPOLL)
 #ifdef EPOLL_CLOEXEC
@@ -667,9 +669,11 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 				  void *user_data)
 {
 	WaitEvent  *event;
+	int free_event;
 
 	/* not enough space */
-	Assert(set->nevents < set->nevents_space);
+	if (set->nevents == set->nevents_space)
+		return -1;
 
 	if (latch)
 	{
@@ -690,8 +694,19 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 	if (fd == PGINVALID_SOCKET && (events & WL_SOCKET_MASK))
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	event = &set->events[set->nevents];
-	event->pos = set->nevents++;
+	free_event = set->free_events;
+	if (free_event >= 0)
+	{
+		event = &set->events[free_event];
+		set->free_events = event->pos;
+		event->pos = free_event;
+	}
+	else
+	{
+		event = &set->events[set->nevents];
+		event->pos = set->nevents;
+	}
+	set->nevents += 1;
 	event->fd = fd;
 	event->events = events;
 	event->user_data = user_data;
@@ -718,15 +733,30 @@ AddWaitEventToSet(WaitEventSet *set, uint32 events, pgsocket fd, Latch *latch,
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 
 	return event->pos;
 }
 
 /*
+ * Remove the event at the given position from the wait event set
+ */
+void
+DeleteWaitEventFromSet(WaitEventSet *set, int event_pos)
+{
+	WaitEvent  *event = &set->events[event_pos];
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_DEL);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event, true);
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event, true);
+#endif
+}
+
+/*
  * Change the event mask and, in the WL_LATCH_SET case, the latch associated
  * with the WaitEvent.
  *
@@ -737,7 +767,7 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 {
 	WaitEvent  *event;
 
-	Assert(pos < set->nevents);
+	Assert(pos < set->nevents_space);
 
 	event = &set->events[pos];
 
@@ -774,9 +804,9 @@ ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
 #if defined(WAIT_USE_EPOLL)
 	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
 #elif defined(WAIT_USE_POLL)
-	WaitEventAdjustPoll(set, event);
+	WaitEventAdjustPoll(set, event, false);
 #elif defined(WAIT_USE_WIN32)
-	WaitEventAdjustWin32(set, event);
+	WaitEventAdjustWin32(set, event, false);
 #endif
 }
 
@@ -822,19 +852,37 @@ WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
 	 * requiring that, and actually it makes the code simpler...
 	 */
 	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
-
 	if (rc < 0)
 		ereport(ERROR,
 				(errcode_for_socket_access(),
 				 errmsg("epoll_ctl() failed: %m")));
+
+	if (action == EPOLL_CTL_DEL)
+	{
+		int pos = event->pos;
+		event->fd = PGINVALID_SOCKET;
+		set->nevents -= 1;
+		event->pos = set->free_events;
+		set->free_events = pos;
+	}
 }
 #endif
 
 #if defined(WAIT_USE_POLL)
 static void
-WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	struct pollfd *pollfd = &set->pollfds[event->pos];
+	int pos = event->pos;
+	struct pollfd *pollfd = &set->pollfds[pos];
+
+	if (remove)
+	{
+		set->nevents -= 1;
+		*pollfd = set->pollfds[set->nevents];
+		set->events[pos] = set->events[set->nevents];
+		event->pos = pos;
+		return;
+	}
 
 	pollfd->revents = 0;
 	pollfd->fd = event->fd;
@@ -865,9 +913,25 @@ WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
 
 #if defined(WAIT_USE_WIN32)
 static void
-WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event, bool remove)
 {
-	HANDLE	   *handle = &set->handles[event->pos + 1];
+	int pos = event->pos;
+	HANDLE	   *handle = &set->handles[pos + 1];
+
+	if (remove)
+	{
+		Assert(event->fd != PGINVALID_SOCKET);
+
+		if (*handle != WSA_INVALID_EVENT)
+			WSACloseEvent(*handle);
+
+		set->nevents -= 1;
+		set->events[pos] = set->events[set->nevents];
+		*handle = set->handles[set->nevents + 1];
+		set->handles[set->nevents + 1] = WSA_INVALID_EVENT;
+		event->pos = pos;
+		return;
+	}
 
 	if (event->events == WL_LATCH_SET)
 	{
@@ -880,7 +944,7 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 	}
 	else
 	{
-		int			flags = FD_CLOSE;	/* always check for errors/EOF */
+		int flags = FD_CLOSE;	/* always check for errors/EOF */
 
 		if (event->events & WL_SOCKET_READABLE)
 			flags |= FD_READ;
@@ -897,8 +961,8 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
 					 WSAGetLastError());
 		}
 		if (WSAEventSelect(event->fd, *handle, flags) != 0)
-			elog(ERROR, "failed to set up event for socket: error code %u",
-				 WSAGetLastError());
+			elog(ERROR, "failed to set up event for socket %p: error code %u",
+				 event->fd, WSAGetLastError());
 
 		Assert(event->fd != PGINVALID_SOCKET);
 	}
@@ -1296,7 +1360,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
 	{
 		if (cur_event->reset)
 		{
-			WaitEventAdjustWin32(set, cur_event);
+			WaitEventAdjustWin32(set, cur_event, false);
 			cur_event->reset = false;
 		}
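
A quick aside on the free_events bookkeeping introduced above: in the epoll path, removed slots are threaded into a singly linked free list through the same "pos" field that live events use to store their own index, while the poll and win32 paths instead compact the arrays by moving the last event into the hole. A generic sketch of the free-list half of the technique, with illustrative names only:

    typedef struct
    {
        int pos;    /* live entry: own index; free slot: next free index or -1 */
        /* ... event payload ... */
    } Slot;

    static int
    slot_alloc(Slot *slots, int *nused, int *free_head)
    {
        int idx;

        if (*free_head >= 0)
        {
            idx = *free_head;
            *free_head = slots[idx].pos;  /* pop the free list */
        }
        else
            idx = *nused;                 /* free list empty: slots 0..nused-1 are dense */

        slots[idx].pos = idx;             /* live entries record their own index */
        (*nused)++;
        return idx;
    }

    static void
    slot_release(Slot *slots, int *nused, int *free_head, int idx)
    {
        slots[idx].pos = *free_head;      /* push onto the free list */
        *free_head = idx;
        (*nused)--;
    }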
 
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 6f9aaa5..f80861a 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -367,6 +367,9 @@ InitProcess(void)
 	MyPgXact->xid = InvalidTransactionId;
 	MyPgXact->xmin = InvalidTransactionId;
 	MyProc->pid = MyProcPid;
+	MyProc->nReadySessions = 0;
+	MyProc->nSessionSchedules = 0;
+	MyProc->nFinishedSessions = 0;
 	/* backendId, databaseId and roleId will be filled in later */
 	MyProc->backendId = InvalidBackendId;
 	MyProc->databaseId = InvalidOid;
@@ -597,6 +600,15 @@ InitAuxiliaryProcess(void)
 }
 
 /*
+ * Generate unique session ID.
+ */
+uint32
+CreateSessionId(void)
+{
+	return ++SessionPool->sessionCount;
+}
+
+/*
  * Record the PID and PGPROC structures for the Startup process, for use in
  * ProcSendSignal().  See comments there for further explanation.
  */
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 7a9ada2..44a4281 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -40,6 +40,7 @@
 #include "access/printtup.h"
 #include "access/xact.h"
 #include "catalog/pg_type.h"
+#include "catalog/namespace.h"
 #include "commands/async.h"
 #include "commands/prepare.h"
 #include "executor/spi.h"
@@ -65,6 +66,7 @@
 #include "storage/bufmgr.h"
 #include "storage/ipc.h"
 #include "storage/proc.h"
+#include "storage/procarray.h"
 #include "storage/procsignal.h"
 #include "storage/sinval.h"
 #include "tcop/fastpath.h"
@@ -77,9 +79,12 @@
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
 #include "utils/timestamp.h"
+#include "utils/builtins.h"
+#include "utils/varlena.h"
+#include "utils/inval.h"
+#include "utils/catcache.h"
 #include "mb/pg_wchar.h"
 
-
 /* ----------------
  *		global variables
  * ----------------
@@ -100,6 +105,41 @@ int			max_stack_depth = 100;
 /* wait N seconds to allow attach from a debugger */
 int			PostAuthDelay = 0;
 
+/* Local socket for redirecting sessions to the backends */
+pgsocket SessionPoolSock = PGINVALID_SOCKET;
+
+/* Pointer to pool of sessions */
+BackendSessionPool	   *SessionPool = NULL;
+
+/* Pointer to the active session */
+SessionContext		   *ActiveSession;
+SessionContext		    DefaultContext;
+bool					IsDedicatedBackend = false;
+
+#define SessionVariable(type,name,init)  type name = init;
+#include "storage/sessionvars.h"
+
+static void SaveSessionVariables(SessionContext* session)
+{
+	if (session != NULL)
+	{
+#define SessionVariable(type,name,init) session->name = name;
+#include "storage/sessionvars.h"
+	}
+}
+
+static void LoadSessionVariables(SessionContext* session)
+{
+#define SessionVariable(type,name,init) name = session->name;
+#include "storage/sessionvars.h"
+}
+
+static void InitializeSessionVariables(SessionContext* session)
+{
+#define SessionVariable(type,name,init) session->name = DefaultContext.name;
+#include "storage/sessionvars.h"
+}
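
The helpers above rely on the X-macro in storage/sessionvars.h: the header
is included several times, each time under a different definition of
SessionVariable, generating the file-scope globals, the save code, the load
code and the struct fields from a single list. A self-contained sketch of
the pattern, using an inline list macro instead of a separate header (all
names here are illustrative only):

    #include <stdio.h>

    /* Stand-in for sessionvars.h: one entry per session-local global. */
    #define SESSION_VARS \
        VAR(int,    counter, 0) \
        VAR(double, ratio,   1.0)

    /* Expand once into file-scope globals (as postgres.c does). */
    #define VAR(type, name, init) type name = init;
    SESSION_VARS
    #undef VAR

    /* Expand again into struct fields (as SessionContext does). */
    typedef struct Context
    {
    #define VAR(type, name, init) type name;
        SESSION_VARS
    #undef VAR
    } Context;

    static void
    save(Context *ctx)
    {
    #define VAR(type, name, init) ctx->name = name;
        SESSION_VARS
    #undef VAR
    }

    static void
    load(Context *ctx)
    {
    #define VAR(type, name, init) name = ctx->name;
        SESSION_VARS
    #undef VAR
    }

    int
    main(void)
    {
        Context a;

        counter = 42;
        save(&a);               /* a.counter == 42 */
        counter = 0;
        load(&a);               /* counter is 42 again */
        printf("%d\n", counter);
        return 0;
    }

Adding a session-local variable then means adding one line to the list;
every save/load/init site picks it up automatically.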
+
 
 
 /* ----------------
@@ -171,6 +211,8 @@ static ProcSignalReason RecoveryConflictReason;
 static MemoryContext row_description_context = NULL;
 static StringInfoData row_description_buf;
 
+static bool IdleInTransactionSessionError;
+
 /* ----------------------------------------------------------------
  *		decls for routines only used in this file
  * ----------------------------------------------------------------
@@ -196,6 +238,8 @@ static void log_disconnections(int code, Datum arg);
 static void enable_statement_timeout(void);
 static void disable_statement_timeout(void);
 
+static void DeleteSession(SessionContext *session);
+static void ResetCurrentSession(void);
 
 /* ----------------------------------------------------------------
  *		routines to obtain user input
@@ -1234,10 +1278,6 @@ exec_parse_message(const char *query_string,	/* string to execute */
 	bool		save_log_statement_stats = log_statement_stats;
 	char		msec_str[32];
 
-	/*
-	 * Report query to various monitoring facilities.
-	 */
-	debug_query_string = query_string;
 
 	pgstat_report_activity(STATE_RUNNING, query_string);
 
@@ -2930,9 +2970,28 @@ ProcessInterrupts(void)
 		LockErrorCleanup();
 		/* don't send to client, we already know the connection to be dead. */
 		whereToSendOutput = DestNone;
-		ereport(FATAL,
-				(errcode(ERRCODE_CONNECTION_FAILURE),
-				 errmsg("connection to client lost")));
+
+		if (ActiveSession)
+		{
+			Port *port = ActiveSession->port;
+			pgsocket sock = port->sock;
+			elog(LOG, "Lost connection on session %d in backend %d", (int) sock, MyProcPid);
+
+			port->sock = PGINVALID_SOCKET;
+
+			MyProcPort = NULL;
+
+			StartTransactionCommand();
+			UserAbortTransactionBlock();
+			CommitTransactionCommand();
+
+			ResetCurrentSession();
+			closesocket(sock);
+		}
+		else
+			ereport(FATAL,
+					(errcode(ERRCODE_CONNECTION_FAILURE),
+					 errmsg("connection to client lost")));
 	}
 
 	/*
@@ -3043,9 +3102,20 @@ ProcessInterrupts(void)
 	{
 		/* Has the timeout setting changed since last we looked? */
 		if (IdleInTransactionSessionTimeout > 0)
-			ereport(FATAL,
-					(errcode(ERRCODE_IDLE_IN_TRANSACTION_SESSION_TIMEOUT),
-					 errmsg("terminating connection due to idle-in-transaction timeout")));
+		{
+			if (ActiveSession)
+			{
+				IdleInTransactionSessionTimeoutPending = false;
+				IdleInTransactionSessionError = true;
+				ereport(ERROR,
+						(errcode(ERRCODE_IDLE_IN_TRANSACTION_SESSION_TIMEOUT),
+						 errmsg("canceling current transaction due to idle-in-transaction timeout")));
+			}
+			else
+				ereport(FATAL,
+						(errcode(ERRCODE_IDLE_IN_TRANSACTION_SESSION_TIMEOUT),
+						 errmsg("terminating connection due to idle-in-transaction timeout")));
+		}
 		else
 			IdleInTransactionSessionTimeoutPending = false;
 
@@ -3605,6 +3675,126 @@ process_postgres_switches(int argc, char *argv[], GucContext ctx,
 #endif
 }
 
+#define ACTIVE_SESSION_MAGIC    0xDEFA1234U
+#define REMOVED_SESSION_MAGIC   0xDEADDEEDU
+#define MIN_FREE_FDS            10
+#define DESCRIPTORS_PER_SESSION 2
+
+static int nActiveSessions = 0;
+static int maxActiveSessions = 0;
+
+static SessionContext *
+CreateSession(void)
+{
+	SessionContext *session = (SessionContext *)
+		MemoryContextAllocZero(SessionPool->mcxt, sizeof(SessionContext));
+
+	session->memory = AllocSetContextCreate(SessionPool->mcxt,
+		"SessionMemoryContext", ALLOCSET_DEFAULT_SIZES);
+	session->prepared_queries = NULL;
+	session->id = CreateSessionId();
+	session->portals = CreatePortalsHashTable(session->memory);
+	session->magic = ACTIVE_SESSION_MAGIC;
+	session->eventPos = -1;
+	nActiveSessions += 1;
+	if (nActiveSessions > maxActiveSessions)
+	{
+		int new_max_safe_fds = max_safe_fds - (nActiveSessions - maxActiveSessions)*DESCRIPTORS_PER_SESSION;
+		if (new_max_safe_fds >= MIN_FREE_FDS)
+		{
+			max_safe_fds = new_max_safe_fds;
+			/* Ensure that we have enough free descriptors to establish a new session.
+			 * Unlike fd.c, which throws away least recently used file descriptors
+			 * only when an open() call fails, we prefer a more conservative
+			 * (pessimistic) approach here.
+			 */
+			ReleaseLruFiles();
+		}
+		else
+			elog(WARNING, "Too few free file descriptors %d for %d sessions", new_max_safe_fds, nActiveSessions);
+		maxActiveSessions = nActiveSessions;
+	}
+	return session;
+}
+
+static void
+SwitchToSession(SessionContext *session)
+{
+	/* epoll may report an event for an already closed session if its socket is still open elsewhere.
+	 * From epoll documentation:
+	 * Q6  Will closing a file descriptor cause it to be removed from all epoll sets automatically?
+	 *
+     * A6  Yes, but be aware of the following point.  A file descriptor is a reference to an open file description (see
+     *     open(2)).  Whenever a descriptor is duplicated via dup(2), dup2(2), fcntl(2) F_DUPFD, or fork(2), a new file
+     *     descriptor referring to the same open file description is created.  An open file  description  continues  to
+     *     exist  until  all  file  descriptors referring to it have been closed.  A file descriptor is removed from an
+     *     epoll set only after all the file descriptors referring to the underlying open file  description  have  been
+     *     closed  (or  before  if  the descriptor is explicitly removed using epoll_ctl(2) EPOLL_CTL_DEL).  This means
+     *     that even after a file descriptor that is part of an epoll set has been closed, events may be  reported  for
+     *     that  file  descriptor  if  other  file descriptors referring to the same underlying file description remain
+     *     open.
+     *
+     *     By checking that the session's magic field is still valid we try to ignore such events.
+	 */
+	if (ActiveSession == session || session->magic != ACTIVE_SESSION_MAGIC)
+		return;
+
+	SaveSessionVariables(ActiveSession);
+	RestoreSessionGUCs(ActiveSession);
+	ActiveSession = session;
+
+	MyProcPort = session->port;
+	SetTempNamespaceState(session->tempNamespace,
+						  session->tempToastNamespace);
+	pq_set_current_state(session->port->pqcomm_state, session->port,
+						 session->eventSet);
+	whereToSendOutput = DestRemote;
+
+	RestoreSessionGUCs(session);
+	LoadSessionVariables(session);
+}
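
The man-page behaviour quoted above is easy to reproduce. Below is a
minimal Linux-only demonstration (illustrative, not part of the patch):
after dup(), closing the registered descriptor does not remove it from the
epoll set, so a stale event can still be reported, which is exactly why
SwitchToSession() double-checks the magic field before trusting an event.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/epoll.h>

    int
    main(void)
    {
        int     pipefd[2];
        int     ep = epoll_create1(0);
        int     dupfd;
        struct epoll_event ev = { .events = EPOLLIN };
        struct epoll_event out;

        (void) pipe(pipefd);
        ev.data.fd = pipefd[0];
        epoll_ctl(ep, EPOLL_CTL_ADD, pipefd[0], &ev);

        dupfd = dup(pipefd[0]);     /* second fd for the same description */
        close(pipefd[0]);           /* registered fd is now closed ...    */

        (void) write(pipefd[1], "x", 1);

        /* ... yet the event is still reported, because dupfd keeps the
         * underlying open file description (and the registration) alive. */
        printf("events: %d\n", epoll_wait(ep, &out, 1, 100)); /* prints 1 */
        return 0;
    }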
+
+static void
+ResetCurrentSession(void)
+{
+	if (!ActiveSession)
+		return;
+
+	whereToSendOutput = DestNone;
+	DeleteSession(ActiveSession);
+	pq_set_current_state(NULL, NULL, NULL);
+	SetTempNamespaceState(InvalidOid, InvalidOid);
+	ActiveSession = NULL;
+}
+
+/*
+ * Free all memory associated with session and delete session object itself.
+ */
+static void
+DeleteSession(SessionContext *session)
+{
+	elog(DEBUG1, "Delete session %p, id=%u,  memory context=%p",
+			session, session->id, session->memory);
+
+	if (OidIsValid(session->tempNamespace))
+		ResetTempTableNamespace(session->tempNamespace);
+
+	MyProc->nFinishedSessions += 1;
+	nActiveSessions -= 1;
+
+	DropAllPreparedStatements();
+	if (session->eventPos >= 0)
+		DeleteWaitEventFromSet(SessionPool->waitEvents, session->eventPos);
+	FreeWaitEventSet(session->eventSet);
+	RestoreSessionGUCs(session);
+	ReleaseSessionGUCs(session);
+	MemoryContextDelete(session->memory);
+	session->magic = REMOVED_SESSION_MAGIC;
+	pfree(session);
+
+	on_shmem_exit_reset();
+	pgstat_report_stat(true);
+}
 
 /* ----------------------------------------------------------------
  * PostgresMain
@@ -3627,6 +3817,10 @@ PostgresMain(int argc, char *argv[],
 	sigjmp_buf	local_sigjmp_buf;
 	volatile bool send_ready_for_query = true;
 	bool		disable_idle_in_transaction_timeout = false;
+	WaitEvent*  ready_clients = NULL;
+	int         n_ready_clients = 0;
+	int         ready_client_index = 0;
+	int         max_events = 0;
 
 	/* Initialize startup process environment if necessary. */
 	if (!IsUnderPostmaster)
@@ -3656,6 +3850,35 @@ PostgresMain(int argc, char *argv[],
 							progname)));
 	}
 
+	/* Serve all connections to dedicated databases (such as "postgres") with dedicated backends */
+	if (IsDedicatedBackend)
+	{
+		SessionPoolSize = 0;
+		closesocket(SessionPoolSock);
+		SessionPoolSock = PGINVALID_SOCKET;
+	}
+
+	if (IsUnderPostmaster && !IsDedicatedBackend)
+	{
+		elog(DEBUG1, "Session pooling is active on %s database", dbname);
+
+		/* Initialize sessions pool for this backend */
+		Assert(SessionPool == NULL);
+		SessionPool = (BackendSessionPool *) MemoryContextAllocZero(
+				TopMemoryContext, sizeof(BackendSessionPool));
+		SessionPool->mcxt = AllocSetContextCreate(TopMemoryContext,
+			"SessionPoolContext", ALLOCSET_DEFAULT_SIZES);
+
+		/* Save the original backend port here */
+		SessionPool->backendPort = MyProcPort;
+
+		ActiveSession = CreateSession();
+		ActiveSession->port = MyProcPort;
+		ActiveSession->eventSet = pq_get_current_waitset();
+		max_events = MaxSessions + 3; /* 3 extra events: the session pool socket, MyLatch and the postmaster death watchdog */
+		ready_clients = (WaitEvent*) MemoryContextAlloc(TopMemoryContext, sizeof(WaitEvent)*max_events);
+	}
+
 	/* Acquire configuration parameters, unless inherited from postmaster */
 	if (!IsUnderPostmaster)
 	{
@@ -3784,7 +4007,7 @@ PostgresMain(int argc, char *argv[],
 	 * ... else we'd need to copy the Port data first.  Also, subsidiary data
 	 * such as the username isn't lost either; see ProcessStartupPacket().
 	 */
-	if (PostmasterContext)
+	if (PostmasterContext && SessionPoolSize == 0)
 	{
 		MemoryContextDelete(PostmasterContext);
 		PostmasterContext = NULL;
@@ -3922,7 +4145,8 @@ PostgresMain(int argc, char *argv[],
 		pq_comm_reset();
 
 		/* Report the error to the client and/or server log */
-		EmitErrorReport();
+		if (MyProcPort)
+			EmitErrorReport();
 
 		/*
 		 * Make sure debug_query_string gets reset before we possibly clobber
@@ -3982,13 +4206,27 @@ PostgresMain(int argc, char *argv[],
 		 * messages from the client, so there isn't much we can do with the
 		 * connection anymore.
 		 */
-		if (pq_is_reading_msg())
+		if (pq_is_reading_msg() && !ActiveSession)
 			ereport(FATAL,
 					(errcode(ERRCODE_PROTOCOL_VIOLATION),
 					 errmsg("terminating connection because protocol synchronization was lost")));
 
 		/* Now we can allow interrupts again */
 		RESUME_INTERRUPTS();
+
+		if (ActiveSession)
+		{
+			whereToSendOutput = DestRemote;
+			if (IdleInTransactionSessionError || (IsAbortedTransactionBlockState() && pq_is_reading_msg()))
+			{
+				StartTransactionCommand();
+				UserAbortTransactionBlock();
+				CommitTransactionCommand();
+				IdleInTransactionSessionError = false;
+			}
+			if (pq_is_reading_msg())
+				goto CloseSession;
+		}
 	}
 
 	/* We can now handle ereport(ERROR) */
@@ -3997,10 +4235,30 @@ PostgresMain(int argc, char *argv[],
 	if (!ignore_till_sync)
 		send_ready_for_query = true;	/* initially, or after error */
 
+
+	/* Initialize wait event set if we're using sessions pool */
+	if (SessionPool && SessionPool->waitEvents == NULL)
+	{
+		/* Construct wait event set if not constructed yet */
+		SessionPool->waitEvents = CreateWaitEventSet(SessionPool->mcxt, max_events);
+		/* Add event to detect postmaster death */
+		AddWaitEventToSet(SessionPool->waitEvents, WL_POSTMASTER_DEATH,
+				PGINVALID_SOCKET, NULL, ActiveSession);
+		/* Add event for backends latch */
+		AddWaitEventToSet(SessionPool->waitEvents, WL_LATCH_SET,
+				PGINVALID_SOCKET, MyLatch, ActiveSession);
+		/* Add event for accepting new sessions */
+		AddWaitEventToSet(SessionPool->waitEvents, WL_SOCKET_READABLE,
+				SessionPoolSock, NULL, ActiveSession);
+		/* Add event for current session */
+		ActiveSession->eventPos = AddWaitEventToSet(SessionPool->waitEvents, WL_SOCKET_READABLE,
+				ActiveSession->port->sock, NULL, ActiveSession);
+		SaveSessionVariables(&DefaultContext);
+	}
+
 	/*
 	 * Non-error queries loop here.
 	 */
-
 	for (;;)
 	{
 		/*
@@ -4076,6 +4334,140 @@ PostgresMain(int argc, char *argv[],
 
 			ReadyForQuery(whereToSendOutput);
 			send_ready_for_query = false;
+
+			/*
+			 * Here we multiplex client sessions if session pooling is enabled.
+			 * Since we perform transaction-level pooling,
+			 * rescheduling is done only when we are not inside a transaction.
+			 */
+			if (SessionPoolSock != PGINVALID_SOCKET
+					&& !IsTransactionState()
+					&& !IsAbortedTransactionBlockState()
+					&& pq_available_bytes() == 0)
+			{
+				WaitEvent*  ready_client;
+
+			  ChooseSession:
+				DoingCommandRead = true;
+				/* Select which client session is ready to send new query */
+				if (ready_client_index == n_ready_clients)
+				{
+					n_ready_clients = WaitEventSetWait(SessionPool->waitEvents, -1,
+													   ready_clients, max_events, PG_WAIT_CLIENT);
+					if (n_ready_clients < 1)
+					{
+						/* TODO: do some error recovery here */
+						elog(FATAL, "Failed to poll client sessions");
+					}
+					ready_client_index = 0;
+					MyProc->nSessionSchedules += 1;
+					MyProc->nReadySessions += n_ready_clients;
+				}
+				ready_client = &ready_clients[ready_client_index++];
+
+				CHECK_FOR_INTERRUPTS();
+				DoingCommandRead = false;
+
+				if (ready_client->events & WL_POSTMASTER_DEATH)
+					ereport(FATAL,
+							(errcode(ERRCODE_ADMIN_SHUTDOWN),
+							 errmsg("terminating connection due to unexpected postmaster exit")));
+
+				if (ready_client->events & WL_LATCH_SET)
+				{
+					ResetLatch(MyLatch);
+					ProcessClientReadInterrupt(true);
+					goto ChooseSession;
+				}
+
+				if (ready_client->fd == SessionPoolSock)
+				{
+					/* Here we handle case of attaching new session */
+					SessionContext* session;
+					StringInfoData buf;
+					Port*    port;
+					pgsocket sock;
+					MemoryContext oldcontext;
+
+					session = CreateSession();
+
+					sock = pg_recv_sock(SessionPoolSock);
+					if (sock == PGINVALID_SOCKET)
+						elog(ERROR, "Failed to receive session socket: %m");
+
+
+					/* Initialize port and wait event set for this session */
+					oldcontext = MemoryContextSwitchTo(session->memory);
+					MyProcPort = port = palloc(sizeof(Port));
+					memcpy(port, SessionPool->backendPort, sizeof(Port));
+
+					/*
+					 * Receive the startup packet (which might turn out to be
+					 * a cancel request packet).
+					 */
+					port->sock = sock;
+					port->pqcomm_state = pq_init(session->memory);
+
+					session->port = port;
+					session->eventSet =
+						pq_create_backend_event_set(session->memory, port, false);
+					pq_set_current_state(session->port->pqcomm_state,
+										 port,
+										 session->eventSet);
+					whereToSendOutput = DestRemote;
+
+					MemoryContextSwitchTo(oldcontext);
+
+					session->eventPos = AddWaitEventToSet(SessionPool->waitEvents, WL_SOCKET_READABLE,
+														  sock, NULL, session);
+					if (session->eventPos < 0)
+					{
+						elog(WARNING, "Too many pooled sessions: %d", MaxSessions);
+						DeleteSession(session);
+						ActiveSession = NULL;
+						closesocket(sock);
+						goto ChooseSession;
+					}
+
+					elog(DEBUG1, "Start new session %d in backend %d "
+						"for database %s user %s", (int)sock, MyProcPid,
+						port->database_name, port->user_name);
+
+					SaveSessionVariables(ActiveSession);
+					RestoreSessionGUCs(ActiveSession);
+					ActiveSession = session;
+					InitializeSessionVariables(session);
+					LoadSessionVariables(session);
+					SetCurrentStatementStartTimestamp();
+					StartTransactionCommand();
+					PerformAuthentication(MyProcPort);
+					process_settings(MyDatabaseId, GetSessionUserId());
+					CommitTransactionCommand();
+					SetTempNamespaceState(InvalidOid, InvalidOid);
+
+					/*
+					 * Send GUC options to the client
+					 */
+					BeginReportingGUCOptions();
+
+					/*
+					 * Send this backend's cancellation info to the frontend.
+					 */
+					pq_beginmessage(&buf, 'K');
+					pq_sendint(&buf, (int32) MyProcPid, 4);
+					pq_sendint(&buf, (int32) MyCancelKey, 4);
+					pq_endmessage(&buf);
+					/* Need not flush since ReadyForQuery will do it. */
+
+					ReadyForQuery(whereToSendOutput);
+					goto ChooseSession;
+				}
+				else
+				{
+					SessionContext* session = (SessionContext *) ready_client->user_data;
+					SwitchToSession(session);
+				}
+			}
 		}
 
 		/*
@@ -4118,6 +4510,8 @@ PostgresMain(int argc, char *argv[],
 		 */
 		if (ConfigReloadPending)
 		{
+			if (ActiveSession && RestartPoolerOnReload)
+				proc_exit(0);
 			ConfigReloadPending = false;
 			ProcessConfigFile(PGC_SIGHUP);
 		}
@@ -4355,6 +4749,50 @@ PostgresMain(int argc, char *argv[],
 				 * it will fail to be called during other backend-shutdown
 				 * scenarios.
 				 */
+
+				if (SessionPool)
+				{
+					pgsocket sock;
+
+				  CloseSession:
+					sock = PGINVALID_SOCKET;
+
+					/* In case of session pooling close the session, but do not terminate the backend
+					 * even if there are no more sessions in this backend.
+					 * The reason for keeping the backend alive is to prevent redundant process launches
+					 * when some client repeatedly opens and closes connections to the database.
+					 * The maximal number of launched backends under connection pooling is intended to be
+					 * optimal for this system and workload, so there is no reason to try to reduce this
+					 * number when there are no active sessions.
+					 */
+					if (MyProcPort)
+					{
+						elog(DEBUG1, "Closing session %d in backend %d", MyProcPort->sock, MyProcPid);
+
+						pq_getmsgend(&input_message);
+						if (pq_is_reading_msg())
+							pq_endmsgread();
+
+						sock = MyProcPort->sock;
+						MyProcPort->sock = PGINVALID_SOCKET;
+						MyProcPort = NULL;
+					}
+					if (ActiveSession)
+					{
+						StartTransactionCommand();
+						UserAbortTransactionBlock();
+						CommitTransactionCommand();
+
+						ResetCurrentSession();
+					}
+					if (sock != PGINVALID_SOCKET)
+						closesocket(sock);
+
+					/* Need to perform rescheduling to some other session or accept new session */
+					send_ready_for_query = true;
+					goto ChooseSession;
+				}
+				elog(DEBUG1, "Terminating backend %d", MyProcPid);
 				proc_exit(0);
 
 			case 'd':			/* copy data */
@@ -4618,3 +5056,13 @@ disable_statement_timeout(void)
 		stmt_timeout_active = false;
 	}
 }
+
+Datum
+pg_backend_load_average(PG_FUNCTION_ARGS)
+{
+	int			pid = PG_GETARG_INT32(0);
+	PGPROC	   *proc = BackendPidGetProc(pid);
+
+	if (proc == NULL)
+		PG_RETURN_NULL();
+
+	PG_RETURN_FLOAT8(proc->nSessionSchedules == 0 ? 0.0
+					 : (double) proc->nReadySessions / proc->nSessionSchedules);
+}
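
The value returned above is the average number of ready sessions per
scheduling cycle: a figure near 1.0 means the backend keeps up with its
sessions, while larger values mean sessions regularly queue behind one
another waiting for the backend. At the SQL level the call would look
something like SELECT pg_backend_load_average(pid), given the catalog
entry added further down in this patch. A worked example of the formula
(the numbers are illustrative only):

    #include <stdint.h>
    #include <stdio.h>

    static double
    load_average(uint64_t ready, uint64_t schedules)
    {
        return schedules == 0 ? 0.0 : (double) ready / schedules;
    }

    int
    main(void)
    {
        /* 100 WaitEventSetWait() cycles returned 250 ready sessions
         * in total: on average 2.5 sessions were waiting per poll. */
        printf("%.1f\n", load_average(250, 100));   /* prints 2.5 */
        return 0;
    }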
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index e95e347..6726195 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -875,6 +875,17 @@ pg_backend_pid(PG_FUNCTION_ARGS)
 	PG_RETURN_INT32(MyProcPid);
 }
 
+Datum
+pg_session_id(PG_FUNCTION_ARGS)
+{
+	char	   *s;
+
+	if (ActiveSession)
+		s = psprintf("%d.%u", MyProcPid, ActiveSession->id);
+	else
+		s = psprintf("%d", MyProcPid);
+
+	PG_RETURN_TEXT_P(CStringGetTextDatum(s));
+}
 
 Datum
 pg_stat_get_backend_pid(PG_FUNCTION_ARGS)
diff --git a/src/backend/utils/cache/plancache.c b/src/backend/utils/cache/plancache.c
index 7271b58..6b0cb54 100644
--- a/src/backend/utils/cache/plancache.c
+++ b/src/backend/utils/cache/plancache.c
@@ -61,6 +61,7 @@
 #include "parser/analyze.h"
 #include "parser/parsetree.h"
 #include "storage/lmgr.h"
+#include "storage/proc.h"
 #include "tcop/pquery.h"
 #include "tcop/utility.h"
 #include "utils/inval.h"
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 6125421..7ce5671 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -78,6 +78,7 @@
 #include "rewrite/rewriteDefine.h"
 #include "rewrite/rowsecurity.h"
 #include "storage/lmgr.h"
+#include "storage/proc.h"
 #include "storage/smgr.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
@@ -1943,6 +1944,13 @@ RelationIdGetRelation(Oid relationId)
 			Assert(rd->rd_isvalid ||
 				   (rd->rd_isnailed && !criticalRelcachesBuilt));
 		}
+		/*
+		 * In case of session pooling, a relation descriptor can be constructed by some other session,
+		 * so we need to recheck the rd_islocaltemp value
+		 */
+		if (ActiveSession && RELATION_IS_OTHER_TEMP(rd) && isTempOrTempToastNamespace(rd->rd_rel->relnamespace))
+			rd->rd_islocaltemp = true;
+
 		return rd;
 	}
 
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index f7d6617..cf83123 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -128,7 +128,10 @@ int			max_parallel_maintenance_workers = 2;
  * register background workers.
  */
 int			NBuffers = 1000;
+int			SessionPoolSize = 0;
 int			MaxConnections = 90;
+int			MaxSessions = 1000;
+int			SessionSchedule = SESSION_SCHED_ROUND_ROBIN;
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
 int			MaxBackends = 0;
@@ -147,3 +150,6 @@ int			VacuumCostBalance = 0;	/* working state for vacuum */
 bool		VacuumCostActive = false;
 
 double		vacuum_cleanup_index_scale_factor;
+
+bool        RestartPoolerOnReload = false;
+char       *DedicatedDatabases;
diff --git a/src/backend/utils/init/miscinit.c b/src/backend/utils/init/miscinit.c
index 865119d..715429a 100644
--- a/src/backend/utils/init/miscinit.c
+++ b/src/backend/utils/init/miscinit.c
@@ -250,19 +250,6 @@ ChangeToDataDir(void)
  * convenient way to do it.
  * ----------------------------------------------------------------
  */
-static Oid	AuthenticatedUserId = InvalidOid;
-static Oid	SessionUserId = InvalidOid;
-static Oid	OuterUserId = InvalidOid;
-static Oid	CurrentUserId = InvalidOid;
-
-/* We also have to remember the superuser state of some of these levels */
-static bool AuthenticatedUserIsSuperuser = false;
-static bool SessionUserIsSuperuser = false;
-
-static int	SecurityRestrictionContext = 0;
-
-/* We also remember if a SET ROLE is currently active */
-static bool SetRoleIsActive = false;
 
 /*
  * Initialize the basic environment for a postmaster child
@@ -345,13 +332,15 @@ InitStandaloneProcess(const char *argv0)
 void
 SwitchToSharedLatch(void)
 {
+	WaitEventSet *waitset;
 	Assert(MyLatch == &LocalLatchData);
 	Assert(MyProc != NULL);
 
 	MyLatch = &MyProc->procLatch;
 
-	if (FeBeWaitSet)
-		ModifyWaitEvent(FeBeWaitSet, 1, WL_LATCH_SET, MyLatch);
+	waitset = pq_get_current_waitset();
+	if (waitset)
+		ModifyWaitEvent(waitset, 1, WL_LATCH_SET, MyLatch);
 
 	/*
 	 * Set the shared latch as the local one might have been set. This
@@ -364,13 +353,15 @@ SwitchToSharedLatch(void)
 void
 SwitchBackToLocalLatch(void)
 {
+	WaitEventSet *waitset;
 	Assert(MyLatch != &LocalLatchData);
 	Assert(MyProc != NULL && MyLatch == &MyProc->procLatch);
 
 	MyLatch = &LocalLatchData;
 
-	if (FeBeWaitSet)
-		ModifyWaitEvent(FeBeWaitSet, 1, WL_LATCH_SET, MyLatch);
+	waitset = pq_get_current_waitset();
+	if (waitset)
+		ModifyWaitEvent(waitset, 1, WL_LATCH_SET, MyLatch);
 
 	SetLatch(MyLatch);
 }
@@ -434,6 +425,8 @@ SetSessionUserId(Oid userid, bool is_superuser)
 	/* We force the effective user IDs to match, too */
 	OuterUserId = userid;
 	CurrentUserId = userid;
+
+	SysCacheInvalidate(AUTHOID, 0);
 }
 
 /*
diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c
index 5ef6315..f1d6834 100644
--- a/src/backend/utils/init/postinit.c
+++ b/src/backend/utils/init/postinit.c
@@ -62,10 +62,8 @@
 #include "utils/timeout.h"
 #include "utils/tqual.h"
 
-
 static HeapTuple GetDatabaseTuple(const char *dbname);
 static HeapTuple GetDatabaseTupleByOid(Oid dboid);
-static void PerformAuthentication(Port *port);
 static void CheckMyDatabase(const char *name, bool am_superuser, bool override_allow_connections);
 static void InitCommunication(void);
 static void ShutdownPostgres(int code, Datum arg);
@@ -74,7 +72,6 @@ static void LockTimeoutHandler(void);
 static void IdleInTransactionSessionTimeoutHandler(void);
 static bool ThereIsAtLeastOneRole(void);
 static void process_startup_options(Port *port, bool am_superuser);
-static void process_settings(Oid databaseid, Oid roleid);
 
 
 /*** InitPostgres support ***/
@@ -180,7 +177,7 @@ GetDatabaseTupleByOid(Oid dboid)
  *
  * returns: nothing.  Will not return at all if there's any failure.
  */
-static void
+void
 PerformAuthentication(Port *port)
 {
 	/* This should be set already, but let's make sure */
@@ -1126,7 +1123,7 @@ process_startup_options(Port *port, bool am_superuser)
  * We try specific settings for the database/role combination, as well as
  * general for this database and for this user.
  */
-static void
+void
 process_settings(Oid databaseid, Oid roleid)
 {
 	Relation	relsetting;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 0625eff..835dabc 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -59,6 +59,7 @@
 #include "postmaster/autovacuum.h"
 #include "postmaster/bgworker_internals.h"
 #include "postmaster/bgwriter.h"
+#include "postmaster/connpool.h"
 #include "postmaster/postmaster.h"
 #include "postmaster/syslogger.h"
 #include "postmaster/walwriter.h"
@@ -428,6 +429,15 @@ static const struct config_enum_entry password_encryption_options[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry session_schedule_options[] = {
+	{"round-robin", SESSION_SCHED_ROUND_ROBIN, false},
+	{"random", SESSION_SCHED_RANDOM, false},
+	{"load-balancing", SESSION_SCHED_LOAD_BALANCING, false},
+	{NULL, 0, false}
+};
+
 /*
  * Options for enum values stored in other modules
  */
@@ -587,6 +597,8 @@ const char *const config_group_names[] =
 	gettext_noop("Connections and Authentication / Authentication"),
 	/* CONN_AUTH_SSL */
 	gettext_noop("Connections and Authentication / SSL"),
+	/* CONN_POOLING */
+	gettext_noop("Connections and Authentication / Connection Pooling"),
 	/* RESOURCES */
 	gettext_noop("Resource Usage"),
 	/* RESOURCES_MEM */
@@ -1192,6 +1204,16 @@ static struct config_bool ConfigureNamesBool[] =
 	},
 
 	{
+		{"restart_pooler_on_reload", PGC_SIGHUP, CONN_POOLING,
+		 gettext_noop("Restart session pool workers on pg_reload_conf()."),
+		 NULL,
+		},
+		&RestartPoolerOnReload,
+		false,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"log_duration", PGC_SUSET, LOGGING_WHAT,
 			gettext_noop("Logs the duration of each completed SQL statement."),
 			NULL
@@ -1998,8 +2020,41 @@ static struct config_int ConfigureNamesInt[] =
 		check_maxconnections, NULL, NULL
 	},
 
+	{
+		{"max_sessions", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the maximum number of client sessions."),
+			gettext_noop("Maximal number of client sessions which can be handled by one backend if session pooling is switched on. "
+						 "So the maximal number of client connections is session_pool_size * max_sessions.")
+		},
+		&MaxSessions,
+		1000, 1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
-		/* see max_connections and max_wal_senders */
+		{"session_pool_size", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Sets the number of backends serving client sessions."),
+			gettext_noop("If non-zero then session pooling will be used: "
+						 "client connections will be redirected to one of the backends and the maximal number of backends is determined by this parameter. "
+						 "Launched backends are never terminated even if there are no active sessions.")
+		},
+		&SessionPoolSize,
+		10, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"connection_pool_workers", PGC_POSTMASTER, CONN_POOLING,
+		 gettext_noop("Sets the number of connection pool workers."),
+		 NULL,
+		},
+		&NumConnPoolWorkers,
+		2, 0, MAX_CONNPOOL_WORKERS,
+		NULL, NULL, NULL
+	},
+
+	{
+		/* see max_connections and max_wal_senders */
 		{"superuser_reserved_connections", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
 			gettext_noop("Sets the number of connection slots reserved for superusers."),
 			NULL
@@ -3340,9 +3395,9 @@ static struct config_string ConfigureNamesString[] =
 
 	{
 		{"temp_tablespaces", PGC_USERSET, CLIENT_CONN_STATEMENT,
-			gettext_noop("Sets the tablespace(s) to use for temporary tables and sort files."),
-			NULL,
-			GUC_LIST_INPUT | GUC_LIST_QUOTE
+			gettext_noop("Sets the tablespace(s) to use for temporary tables and sort files."),
+			NULL,
+			GUC_LIST_INPUT | GUC_LIST_QUOTE
 		},
 		&temp_tablespaces,
 		"",
@@ -3350,6 +3405,16 @@ static struct config_string ConfigureNamesString[] =
 	},
 
 	{
+		{"dedicated_databases", PGC_USERSET, CONN_POOLING,
+			gettext_noop("Set of databases for which session pooling is disabled."),
+			NULL,
+			GUC_LIST_INPUT | GUC_LIST_QUOTE
+		},
+		&DedicatedDatabases,
+		"template0, template1, postgres",
+		NULL, NULL, NULL
+	},
+
+	{
 		{"dynamic_library_path", PGC_SUSET, CLIENT_CONN_OTHER,
 			gettext_noop("Sets the path for dynamically loadable modules."),
 			gettext_noop("If a dynamically loadable module needs to be opened and "
@@ -4185,6 +4250,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"session_schedule", PGC_POSTMASTER, CONN_POOLING,
+			gettext_noop("Session schedule policy for connection pool."),
+			NULL
+		},
+		&SessionSchedule,
+		SESSION_SCHED_ROUND_ROBIN, session_schedule_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
@@ -5346,6 +5421,164 @@ NewGUCNestLevel(void)
 }
 
 /*
+ * Save changed variables after SET command.
+ * It's important to restore variables in the same order as they were added to the list.
+ */
+static void
+SaveSessionGUCs(SessionContext *session,
+				struct config_generic *gconf,
+				config_var_value *prior_val)
+{
+	SessionGUC	*sg;
+
+	/* Find needed GUC in active session */
+	for (sg = session->gucs;
+			sg != NULL && sg->var != gconf; sg = sg->next);
+
+	if (sg != NULL)
+		/* already there */
+		return;
+
+	sg = MemoryContextAllocZero(session->memory, sizeof(SessionGUC));
+	sg->var = gconf;
+	sg->saved.extra = prior_val->extra;
+
+	switch (gconf->vartype)
+	{
+		case PGC_BOOL:
+			sg->saved.val.boolval = prior_val->val.boolval;
+			break;
+		case PGC_INT:
+			sg->saved.val.intval = prior_val->val.intval;
+			break;
+		case PGC_REAL:
+			sg->saved.val.realval = prior_val->val.realval;
+			break;
+		case PGC_STRING:
+			sg->saved.val.stringval = prior_val->val.stringval;
+			break;
+		case PGC_ENUM:
+			sg->saved.val.enumval = prior_val->val.enumval;
+			break;
+	}
+
+	if (session->gucs)
+	{
+		SessionGUC	*latest;
+
+		/* Move to end of the list */
+		for (latest = session->gucs;
+				latest->next != NULL; latest = latest->next);
+		latest->next = sg;
+	}
+	else
+		session->gucs = sg;
+}
+
+/*
+ * Set GUCs for this session
+ */
+void
+RestoreSessionGUCs(SessionContext* session)
+{
+	SessionGUC	*sg;
+	bool save_reporting_enabled;
+
+	if (session == NULL)
+		return;
+
+	save_reporting_enabled = reporting_enabled;
+	reporting_enabled = false;
+
+	for (sg = session->gucs; sg != NULL; sg = sg->next)
+	{
+		void	*saved_extra = sg->saved.extra;
+		void	*old_extra = sg->var->extra;
+
+		/* restore extra */
+		sg->var->extra = saved_extra;
+		sg->saved.extra = old_extra;
+
+		/* restore actual values */
+		switch (sg->var->vartype)
+		{
+			case PGC_BOOL:
+			{
+				struct config_bool *conf = (struct config_bool *)sg->var;
+				bool oldval = *conf->variable;
+				*conf->variable = sg->saved.val.boolval;
+				if (conf->assign_hook)
+					conf->assign_hook(sg->saved.val.boolval, saved_extra);
+
+				sg->saved.val.boolval = oldval;
+				break;
+			}
+			case PGC_INT:
+			{
+				struct config_int *conf = (struct config_int*) sg->var;
+				int oldval = *conf->variable;
+				*conf->variable = sg->saved.val.intval;
+				if (conf->assign_hook)
+					conf->assign_hook(sg->saved.val.intval, saved_extra);
+				sg->saved.val.intval = oldval;
+				break;
+			}
+			case PGC_REAL:
+			{
+				struct config_real *conf = (struct config_real*) sg->var;
+				double oldval = *conf->variable;
+				*conf->variable = sg->saved.val.realval;
+				if (conf->assign_hook)
+					conf->assign_hook(sg->saved.val.realval, saved_extra);
+				sg->saved.val.realval = oldval;
+				break;
+			}
+			case PGC_STRING:
+			{
+				struct config_string *conf = (struct config_string*) sg->var;
+				char* oldval = *conf->variable;
+				*conf->variable = sg->saved.val.stringval;
+				if (conf->assign_hook)
+					conf->assign_hook(sg->saved.val.stringval, saved_extra);
+				sg->saved.val.stringval = oldval;
+				break;
+			}
+			case PGC_ENUM:
+			{
+				struct config_enum *conf = (struct config_enum*) sg->var;
+				int oldval = *conf->variable;
+				*conf->variable = sg->saved.val.enumval;
+				if (conf->assign_hook)
+					conf->assign_hook(sg->saved.val.enumval, saved_extra);
+				sg->saved.val.enumval = oldval;
+				break;
+			}
+		}
+	}
+	reporting_enabled = save_reporting_enabled;
+}
+
+/*
+ * Deallocate memory for session GUCs
+ */
+void
+ReleaseSessionGUCs(SessionContext* session)
+{
+	SessionGUC* sg;
+	for (sg = session->gucs; sg != NULL; sg = sg->next)
+	{
+		if (sg->saved.extra)
+			set_extra_field(sg->var, &sg->saved.extra, NULL);
+
+		if (sg->var->vartype == PGC_STRING)
+		{
+			struct config_string* conf = (struct config_string*)sg->var;
+			set_string_field(conf, &sg->saved.val.stringval, NULL);
+		}
+	}
+}
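
Note that RestoreSessionGUCs() swaps the saved and live values rather than
copying one way: applying it to the outgoing session stashes that session's
settings, and applying it to the incoming session installs them, so two
consecutive calls on the same session are a no-op. That is why
SwitchToSession() calls it once for the old session and once for the new
one. A minimal sketch of this symmetric-swap idiom (hypothetical names, one
value instead of a GUC list):

    #include <stdio.h>

    typedef struct Ctx
    {
        int     saved;      /* value held while the context is inactive */
    } Ctx;

    static int  live;       /* the currently installed value */

    /* Install ctx's value and stash the previously live one. */
    static void
    swap_in(Ctx *ctx)
    {
        int     old = live;

        live = ctx->saved;
        ctx->saved = old;
    }

    int
    main(void)
    {
        Ctx     a = {1};

        live = 7;
        swap_in(&a);                    /* live == 1, a.saved == 7 */
        swap_in(&a);                    /* back: live == 7, a.saved == 1 */
        printf("%d %d\n", live, a.saved);
        return 0;
    }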
+
+/*
  * Do GUC processing at transaction or subtransaction commit or abort, or
  * when exiting a function that has proconfig settings, or when undoing a
  * transient assignment to some GUC variables.  (The name is thus a bit of
@@ -5413,8 +5646,10 @@ AtEOXact_GUC(bool isCommit, int nestLevel)
 					restoreMasked = true;
 				else if (stack->state == GUC_SET)
 				{
-					/* we keep the current active value */
-					discard_stack_value(gconf, &stack->prior);
+					if (ActiveSession)
+						SaveSessionGUCs(ActiveSession, gconf, &stack->prior);
+					else
+						discard_stack_value(gconf, &stack->prior);
 				}
 				else			/* must be GUC_LOCAL */
 					restorePrior = true;
@@ -5440,8 +5675,8 @@ AtEOXact_GUC(bool isCommit, int nestLevel)
 
 					case GUC_SET:
 						/* next level always becomes SET */
-						discard_stack_value(gconf, &stack->prior);
-						if (prev->state == GUC_SET_LOCAL)
+						discard_stack_value(gconf, &stack->prior);
+						if (prev->state == GUC_SET_LOCAL)
 							discard_stack_value(gconf, &prev->masked);
 						prev->state = GUC_SET;
 						break;
diff --git a/src/backend/utils/misc/superuser.c b/src/backend/utils/misc/superuser.c
index fbe83c9..1ebc379 100644
--- a/src/backend/utils/misc/superuser.c
+++ b/src/backend/utils/misc/superuser.c
@@ -24,6 +24,7 @@
 #include "catalog/pg_authid.h"
 #include "utils/inval.h"
 #include "utils/syscache.h"
+#include "storage/proc.h"
 #include "miscadmin.h"
 
 
@@ -33,8 +34,6 @@
  * the status of the last requested roleid.  The cache can be flushed
  * at need by watching for cache update events on pg_authid.
  */
-static Oid	last_roleid = InvalidOid;	/* InvalidOid == cache not valid */
-static bool last_roleid_is_super = false;
 static bool roleid_callback_registered = false;
 
 static void RoleidCallback(Datum arg, int cacheid, uint32 hashvalue);
diff --git a/src/backend/utils/mmgr/portalmem.c b/src/backend/utils/mmgr/portalmem.c
index 04ea32f..a8c27a3 100644
--- a/src/backend/utils/mmgr/portalmem.c
+++ b/src/backend/utils/mmgr/portalmem.c
@@ -23,6 +23,7 @@
 #include "commands/portalcmds.h"
 #include "miscadmin.h"
 #include "storage/ipc.h"
+#include "storage/proc.h"
 #include "utils/builtins.h"
 #include "utils/memutils.h"
 #include "utils/snapmgr.h"
@@ -53,11 +54,14 @@ typedef struct portalhashent
 
 static HTAB *PortalHashTable = NULL;
 
+#define CurrentPortalHashTable() \
+	(ActiveSession ? ActiveSession->portals : PortalHashTable)
+
 #define PortalHashTableLookup(NAME, PORTAL) \
 do { \
 	PortalHashEnt *hentry; \
 	\
-	hentry = (PortalHashEnt *) hash_search(PortalHashTable, \
+	hentry = (PortalHashEnt *) hash_search(CurrentPortalHashTable(), \
 										   (NAME), HASH_FIND, NULL); \
 	if (hentry) \
 		PORTAL = hentry->portal; \
@@ -69,7 +73,7 @@ do { \
 do { \
 	PortalHashEnt *hentry; bool found; \
 	\
-	hentry = (PortalHashEnt *) hash_search(PortalHashTable, \
+	hentry = (PortalHashEnt *) hash_search(CurrentPortalHashTable(), \
 										   (NAME), HASH_ENTER, &found); \
 	if (found) \
 		elog(ERROR, "duplicate portal name"); \
@@ -82,7 +86,7 @@ do { \
 do { \
 	PortalHashEnt *hentry; \
 	\
-	hentry = (PortalHashEnt *) hash_search(PortalHashTable, \
+	hentry = (PortalHashEnt *) hash_search(CurrentPortalHashTable(), \
 										   PORTAL->name, HASH_REMOVE, NULL); \
 	if (hentry == NULL) \
 		elog(WARNING, "trying to delete portal name that does not exist"); \
@@ -90,12 +94,33 @@ do { \
 
 static MemoryContext TopPortalContext = NULL;
 
-
 /* ----------------------------------------------------------------
  *				   public portal interface functions
  * ----------------------------------------------------------------
  */
 
+HTAB *
+CreatePortalsHashTable(MemoryContext mcxt)
+{
+	HASHCTL		ctl;
+	int			flags = HASH_ELEM;
+
+	ctl.keysize = MAX_PORTALNAME_LEN;
+	ctl.entrysize = sizeof(PortalHashEnt);
+
+	if (mcxt)
+	{
+		ctl.hcxt = mcxt;
+		flags |= HASH_CONTEXT;
+	}
+
+	/*
+	 * use PORTALS_PER_USER as a guess of how many hash table entries to
+	 * create, initially
+	 */
+	return hash_create("Portal hash", PORTALS_PER_USER, &ctl, flags);
+}
+
 /*
  * EnablePortalManager
  *		Enables the portal management module at backend startup.
@@ -103,23 +128,13 @@ static MemoryContext TopPortalContext = NULL;
 void
 EnablePortalManager(void)
 {
-	HASHCTL		ctl;
-
 	Assert(TopPortalContext == NULL);
 
 	TopPortalContext = AllocSetContextCreate(TopMemoryContext,
-											 "TopPortalContext",
-											 ALLOCSET_DEFAULT_SIZES);
-
-	ctl.keysize = MAX_PORTALNAME_LEN;
-	ctl.entrysize = sizeof(PortalHashEnt);
+											 "TopPortalContext",
+											 ALLOCSET_DEFAULT_SIZES);
 
-	/*
-	 * use PORTALS_PER_USER as a guess of how many hash table entries to
-	 * create, initially
-	 */
-	PortalHashTable = hash_create("Portal hash", PORTALS_PER_USER,
-								  &ctl, HASH_ELEM);
+	PortalHashTable = CreatePortalsHashTable(NULL);
 }
 
 /*
@@ -602,11 +617,14 @@ PortalHashTableDeleteAll(void)
 {
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab;
 
-	if (PortalHashTable == NULL)
+	htab = CurrentPortalHashTable();
+
+	if (htab == NULL)
 		return;
 
-	hash_seq_init(&status, PortalHashTable);
+	hash_seq_init(&status, htab);
 	while ((hentry = hash_seq_search(&status)) != NULL)
 	{
 		Portal		portal = hentry->portal;
@@ -619,7 +637,7 @@ PortalHashTableDeleteAll(void)
 
 		/* Restart the iteration in case that led to other drops */
 		hash_seq_term(&status);
-		hash_seq_init(&status, PortalHashTable);
+		hash_seq_init(&status, htab);
 	}
 }
 
@@ -672,8 +690,10 @@ PreCommit_Portals(bool isPrepare)
 	bool		result = false;
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab;
 
-	hash_seq_init(&status, PortalHashTable);
+	htab = CurrentPortalHashTable();
+	hash_seq_init(&status, htab);
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
@@ -746,7 +766,7 @@ PreCommit_Portals(bool isPrepare)
 		 * caused a drop of the next portal in the hash chain.
 		 */
 		hash_seq_term(&status);
-		hash_seq_init(&status, PortalHashTable);
+		hash_seq_init(&status, htab);
 	}
 
 	return result;
@@ -763,8 +783,11 @@ AtAbort_Portals(void)
 {
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab;
+
+	htab = CurrentPortalHashTable();
 
-	hash_seq_init(&status, PortalHashTable);
+	hash_seq_init(&status, htab);
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
@@ -840,8 +863,11 @@ AtCleanup_Portals(void)
 {
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab;
 
-	hash_seq_init(&status, PortalHashTable);
+	htab = CurrentPortalHashTable();
+
+	hash_seq_init(&status, htab);
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
@@ -899,8 +925,10 @@ PortalErrorCleanup(void)
 {
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab;
 
-	hash_seq_init(&status, PortalHashTable);
+	htab = CurrentPortalHashTable();
+	hash_seq_init(&status, htab);
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
@@ -927,8 +955,9 @@ AtSubCommit_Portals(SubTransactionId mySubid,
 {
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab = CurrentPortalHashTable();
 
-	hash_seq_init(&status, PortalHashTable);
+	hash_seq_init(&status, htab);
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
@@ -962,8 +991,11 @@ AtSubAbort_Portals(SubTransactionId mySubid,
 {
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab;
+
+	htab = CurrentPortalHashTable();
 
-	hash_seq_init(&status, PortalHashTable);
+	hash_seq_init(&status, htab);
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
@@ -1072,8 +1104,9 @@ AtSubCleanup_Portals(SubTransactionId mySubid)
 {
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab = CurrentPortalHashTable();
 
-	hash_seq_init(&status, PortalHashTable);
+	hash_seq_init(&status, htab);
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
@@ -1161,7 +1194,7 @@ pg_cursor(PG_FUNCTION_ARGS)
 	/* generate junk in short-term context */
 	MemoryContextSwitchTo(oldcontext);
 
-	hash_seq_init(&hash_seq, PortalHashTable);
+	hash_seq_init(&hash_seq, CurrentPortalHashTable());
 	while ((hentry = hash_seq_search(&hash_seq)) != NULL)
 	{
 		Portal		portal = hentry->portal;
@@ -1200,7 +1233,7 @@ ThereAreNoReadyPortals(void)
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
 
-	hash_seq_init(&status, PortalHashTable);
+	hash_seq_init(&status, CurrentPortalHashTable());
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
@@ -1229,8 +1262,11 @@ HoldPinnedPortals(void)
 {
 	HASH_SEQ_STATUS status;
 	PortalHashEnt *hentry;
+	HTAB		  *htab;
+
+	htab = CurrentPortalHashTable();
 
-	hash_seq_init(&status, PortalHashTable);
+	hash_seq_init(&status, htab);
 
 	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
 	{
diff --git a/src/include/catalog/namespace.h b/src/include/catalog/namespace.h
index 0e20237..ddcc3c8 100644
--- a/src/include/catalog/namespace.h
+++ b/src/include/catalog/namespace.h
@@ -144,7 +144,9 @@ extern void GetTempNamespaceState(Oid *tempNamespaceId,
 					  Oid *tempToastNamespaceId);
 extern void SetTempNamespaceState(Oid tempNamespaceId,
 					  Oid tempToastNamespaceId);
-extern void ResetTempTableNamespace(void);
+
+struct SessionContext;
+extern void ResetTempTableNamespace(Oid npc);
 
 extern OverrideSearchPath *GetOverrideSearchPath(MemoryContext context);
 extern OverrideSearchPath *CopyOverrideSearchPath(OverrideSearchPath *path);
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index a146510..0ad559c 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5202,6 +5202,9 @@
 { oid => '2026', descr => 'statistics: current backend PID',
   proname => 'pg_backend_pid', provolatile => 's', proparallel => 'r',
   prorettype => 'int4', proargtypes => '', prosrc => 'pg_backend_pid' },
+{ oid => '3436', descr => 'statistics: current session ID',
+  proname => 'pg_session_id', provolatile => 's', proparallel => 'r',
+  prorettype => 'text', proargtypes => '', prosrc => 'pg_session_id' },
 { oid => '1937', descr => 'statistics: PID of backend',
   proname => 'pg_stat_get_backend_pid', provolatile => 's', proparallel => 'r',
   prorettype => 'int4', proargtypes => 'int4',
@@ -10206,4 +10209,11 @@
   proisstrict => 'f', prorettype => 'bool', proargtypes => 'oid int4 int4 any',
   proargmodes => '{i,i,i,v}', prosrc => 'satisfies_hash_partition' },
 
+
+# Builtin connection pool functions
+{ oid => '6107', descr => 'Session pool backend load average',
+  proname => 'pg_backend_load_average',
+  provolatile => 'v', prorettype => 'float8', proargtypes => 'int4',
+  prosrc => 'pg_backend_load_average' },
+
 ]
diff --git a/src/include/commands/prepare.h b/src/include/commands/prepare.h
index ffec029..fdf1854 100644
--- a/src/include/commands/prepare.h
+++ b/src/include/commands/prepare.h
@@ -56,5 +56,6 @@ extern TupleDesc FetchPreparedStatementResultDesc(PreparedStatement *stmt);
 extern List *FetchPreparedStatementTargetList(PreparedStatement *stmt);
 
 extern void DropAllPreparedStatements(void);
+extern void DropSessionPreparedStatements(uint32 sessionId);
 
 #endif							/* PREPARE_H */
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index ef5528c..bb6d359 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -66,6 +66,7 @@ typedef struct
 #include "datatype/timestamp.h"
 #include "libpq/hba.h"
 #include "libpq/pqcomm.h"
+#include "storage/latch.h"
 
 
 typedef enum CAC_state
@@ -139,6 +140,12 @@ typedef struct Port
 	List	   *guc_options;
 
 	/*
+	 * libpq communication state
+	 */
+	void			*pqcomm_state;
+	WaitEventSet	*pqcomm_waitset;
+
+	/*
 	 * Information that needs to be held during the authentication cycle.
 	 */
 	HbaLine    *hba;
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 36baf6b..10ba28b 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -60,7 +60,12 @@ extern int	StreamConnection(pgsocket server_fd, Port *port);
 extern void StreamClose(pgsocket sock);
 extern void TouchSocketFiles(void);
 extern void RemoveSocketFiles(void);
-extern void pq_init(void);
+extern void *pq_init(MemoryContext mcxt);
+extern void pq_reset(void);
+extern void pq_set_current_state(void *state, Port *port, WaitEventSet *set);
+extern WaitEventSet *pq_get_current_waitset(void);
+extern WaitEventSet *pq_create_backend_event_set(MemoryContext mcxt,
+												 Port *port, bool onlySock);
 extern int	pq_getbytes(char *s, size_t len);
 extern int	pq_getstring(StringInfo s);
 extern void pq_startmsgread(void);
@@ -71,6 +76,7 @@ extern int	pq_getbyte(void);
 extern int	pq_peekbyte(void);
 extern int	pq_getbyte_if_available(unsigned char *c);
 extern int	pq_putbytes(const char *s, size_t len);
+extern int  pq_available_bytes(void);
 
 /*
  * prototypes for functions in be-secure.c
@@ -96,8 +102,6 @@ extern ssize_t secure_raw_write(Port *port, const void *ptr, size_t len);
 
 extern bool ssl_loaded_verify_locations;
 
-extern WaitEventSet *FeBeWaitSet;
-
 /* GUCs */
 extern char *SSLCipherSuites;
 extern char *SSLECDHCurve;
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index e167ee8..5582542 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -26,6 +26,7 @@
 #include <signal.h>
 
 #include "pgtime.h"				/* for pg_time_t */
+#include "utils/palloc.h"
 
 
 #define InvalidPid				(-1)
@@ -150,6 +151,9 @@ extern PGDLLIMPORT bool IsUnderPostmaster;
 extern PGDLLIMPORT bool IsBackgroundWorker;
 extern PGDLLIMPORT bool IsBinaryUpgrade;
 
+extern PGDLLIMPORT bool RestartPoolerOnReload;
+extern PGDLLIMPORT char* DedicatedDatabases;
+
 extern PGDLLIMPORT bool ExitOnAnyError;
 
 extern PGDLLIMPORT char *DataDir;
@@ -161,7 +165,19 @@ extern PGDLLIMPORT int MaxConnections;
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
 
+enum SessionSchedulePolicy
+{
+	SESSION_SCHED_ROUND_ROBIN,
+	SESSION_SCHED_RANDOM,
+	SESSION_SCHED_LOAD_BALANCING
+};
+
+extern PGDLLIMPORT int MaxSessions;
+extern PGDLLIMPORT int SessionPoolSize;
+extern PGDLLIMPORT int SessionSchedule;
+
 extern PGDLLIMPORT int MyProcPid;
+extern PGDLLIMPORT uint32 MySessionId;
 extern PGDLLIMPORT pg_time_t MyStartTime;
 extern PGDLLIMPORT struct Port *MyProcPort;
 extern PGDLLIMPORT struct Latch *MyLatch;
@@ -335,6 +351,9 @@ extern void SwitchBackToLocalLatch(void);
 extern bool superuser(void);	/* current user is superuser */
 extern bool superuser_arg(Oid roleid);	/* given user is superuser */
 
+/* in utils/init/postinit.c */
+extern void process_settings(Oid databaseid, Oid roleid);
+
 
 /*****************************************************************************
  *	  pmod.h --																 *
@@ -425,6 +444,7 @@ extern void InitializeMaxBackends(void);
 extern void InitPostgres(const char *in_dbname, Oid dboid, const char *username,
 			 Oid useroid, char *out_dbname, bool override_allow_connections);
 extern void BaseInit(void);
+extern void PerformAuthentication(struct Port *port);
 
 /* in utils/init/miscinit.c */
 extern bool IgnoreSystemIndexes;
@@ -445,6 +465,9 @@ extern void process_session_preload_libraries(void);
 extern void pg_bindtextdomain(const char *domain);
 extern bool has_rolreplication(Oid roleid);
 
+extern void *GetLocalUserIdStateCopy(MemoryContext mcxt);
+extern void SetCurrentUserIdState(void *userId);
+
 /* in access/transam/xlog.c */
 extern bool BackupInProgress(void);
 extern void CancelBackup(void);
diff --git a/src/include/port.h b/src/include/port.h
index 74a9dc4..ac53f3c 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -41,6 +41,10 @@ typedef SOCKET pgsocket;
 extern bool pg_set_noblock(pgsocket sock);
 extern bool pg_set_block(pgsocket sock);
 
+/* send/receive socket descriptor */
+extern int pg_send_sock(pgsocket chan, pgsocket sock, pid_t pid);
+extern pgsocket pg_recv_sock(pgsocket chan);
+
 /* Portable path handling for Unix/Win32 (in path.c) */
 
 extern bool has_drive_prefix(const char *filename);
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index b398cd3..01971bc 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -447,6 +447,7 @@ extern int	pgkill(int pid, int sig);
 #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)
 #define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags)
 #define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags)
+#define socketpair(af, type, protocol, socks) pgwin32_socketpair(af, type, protocol, socks)
 
 SOCKET		pgwin32_socket(int af, int type, int protocol);
 int			pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen);
@@ -456,6 +457,7 @@ int			pgwin32_connect(SOCKET s, const struct sockaddr *name, int namelen);
 int			pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptfds, const struct timeval *timeout);
 int			pgwin32_recv(SOCKET s, char *buf, int len, int flags);
 int			pgwin32_send(SOCKET s, const void *buf, int len, int flags);
+int         pgwin32_socketpair(int domain, int type, int protocol, SOCKET socks[2]);
 
 const char *pgwin32_socket_strerror(int err);
 int			pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout);
diff --git a/src/include/postmaster/connpool.h b/src/include/postmaster/connpool.h
new file mode 100644
index 0000000..45aa37c
--- /dev/null
+++ b/src/include/postmaster/connpool.h
@@ -0,0 +1,54 @@
+#ifndef CONN_POOL_H
+#define CONN_POOL_H
+
+#include "port.h"
+#include "libpq/libpq-be.h"
+
+#define MAX_CONNPOOL_WORKERS	100
+
+typedef enum
+{
+	CPW_FREE,
+	CPW_NEW_SOCKET,
+	CPW_PROCESSED
+} ConnPoolWorkerState;
+
+enum CAC_state;
+
+typedef struct ConnPoolWorker
+{
+	Port	   *port;		/* port in the pool */
+	int			pipes[2];	/* 0 for sending, 1 for receiving */
+
+	/* The communication procedure (postmaster side):
+	 * ) find a worker with state == CPW_FREE
+	 * ) assign the client socket to it
+	 * ) add its pipe to the wait set (if it's not there yet)
+	 * ) wake up the worker
+	 * ) process data from the worker until state == CPW_PROCESSED
+	 * ) set state to CPW_FREE
+	 * ) fork, or send the socket and the data to a backend.
+	 *
+	 * bgworker side:
+	 * ) wakes up
+	 * ) checks the state
+	 * ) if state is CPW_NEW_SOCKET, reads data from the client socket and
+	 *   sends the data through the pipe to the postmaster
+	 * ) sets state to CPW_PROCESSED.
+	 */
+	volatile ConnPoolWorkerState	state;
+	volatile CAC_state				cac_state;
+	pid_t							pid;
+	Latch						   *latch;
+} ConnPoolWorker;
+
+extern Size ConnPoolShmemSize(void);
+extern void ConnectionPoolWorkersInit(void);
+extern void RegisterConnPoolWorkers(void);
+extern void StartupPacketReaderMain(Datum arg);
+
+/* global variables */
+extern int NumConnPoolWorkers;
+extern ConnPoolWorker *ConnPoolWorkers;
+
+#endif
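
A sketch of how the bgworker side of the protocol above could look. This is
not the patch's actual connpool.c (which is not shown in this excerpt);
read_startup_packet() and forward_to_postmaster() are hypothetical
placeholders, and shutdown handling is omitted:

    static void
    pool_worker_loop(ConnPoolWorker *worker)
    {
        for (;;)
        {
            /* Sleep until the postmaster assigns us a client socket. */
            WaitLatch(worker->latch, WL_LATCH_SET, -1L, PG_WAIT_EXTENSION);
            ResetLatch(worker->latch);

            if (worker->state == CPW_NEW_SOCKET)
            {
                /* Read the startup packet from the client socket ... */
                read_startup_packet(worker->port);
                /* ... ship the parsed data to the postmaster ... */
                forward_to_postmaster(worker->pipes[0], worker->port);
                /* ... and report completion; the postmaster moves the
                 * worker back to CPW_FREE once it has consumed the data. */
                worker->state = CPW_PROCESSED;
            }
        }
    }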
diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h
index 1877eef..1f16836 100644
--- a/src/include/postmaster/postmaster.h
+++ b/src/include/postmaster/postmaster.h
@@ -62,6 +62,10 @@ extern Size ShmemBackendArraySize(void);
 extern void ShmemBackendArrayAllocation(void);
 #endif
 
+struct Port;
+extern int	ProcessStartupPacket(struct Port *port, bool SSLdone,
+						MemoryContext memctx, int errlevel);
+
 /*
  * Note: MAX_BACKENDS is limited to 2^18-1 because that's the width reserved
  * for buffer references in buf_internals.h.  This limitation could be lifted
diff --git a/src/include/storage/fd.h b/src/include/storage/fd.h
index 8e7c972..e4fef23 100644
--- a/src/include/storage/fd.h
+++ b/src/include/storage/fd.h
@@ -138,6 +138,7 @@ extern int	durable_rename(const char *oldfile, const char *newfile, int loglevel
 extern int	durable_unlink(const char *fname, int loglevel);
 extern int	durable_link_or_rename(const char *oldfile, const char *newfile, int loglevel);
 extern void SyncDataDirectory(void);
+extern void ReleaseLruFiles(void);
 
 /* Filename components */
 #define PG_TEMP_FILES_DIR "pgsql_tmp"
diff --git a/src/include/storage/ipc.h b/src/include/storage/ipc.h
index 6a05a89..9cddaf9 100644
--- a/src/include/storage/ipc.h
+++ b/src/include/storage/ipc.h
@@ -72,6 +72,7 @@ extern void on_shmem_exit(pg_on_exit_callback function, Datum arg);
 extern void before_shmem_exit(pg_on_exit_callback function, Datum arg);
 extern void cancel_before_shmem_exit(pg_on_exit_callback function, Datum arg);
 extern void on_exit_reset(void);
+extern void on_shmem_exit_reset(void);
 
 /* ipci.c */
 extern PGDLLIMPORT shmem_startup_hook_type shmem_startup_hook;
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index fd8735b..b7902ea 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -176,6 +176,8 @@ extern int WaitLatch(volatile Latch *latch, int wakeEvents, long timeout,
 extern int WaitLatchOrSocket(volatile Latch *latch, int wakeEvents,
 				  pgsocket sock, long timeout, uint32 wait_event_info);
 
+extern void DeleteWaitEventFromSet(WaitEventSet *set, int event_pos);
+
 /*
  * Unix implementation uses SIGUSR1 for inter-process signaling.
  * Win32 doesn't need this.
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
index cb613c8..29a4de2 100644
--- a/src/include/storage/proc.h
+++ b/src/include/storage/proc.h
@@ -21,6 +21,7 @@
 #include "storage/lock.h"
 #include "storage/pg_sema.h"
 #include "storage/proclist_types.h"
+#include "utils/guc_tables.h"
 
 /*
  * Each backend advertises up to PGPROC_MAX_CACHED_SUBXIDS TransactionIds
@@ -203,6 +204,10 @@ struct PGPROC
 	PGPROC	   *lockGroupLeader;	/* lock group leader, if I'm a member */
 	dlist_head	lockGroupMembers;	/* list of members, if I'm a leader */
 	dlist_node	lockGroupLink;	/* my member link, if I'm a member */
+
+	int         nFinishedSessions;  /* number of finished sessions in case of connection pooling */
+	uint64      nSessionSchedules;  /* number of session schedules performed by backend (calls of WaitEventSetWait(SessionPool->waitEvents)) */
+	uint64      nReadySessions;     /* total number of ready sessions returned by all WaitEventSetWait(SessionPool->waitEvents) calls */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
@@ -276,6 +281,58 @@ extern PGDLLIMPORT PROC_HDR *ProcGlobal;
 
 extern PGPROC *PreparedXactProcs;
 
+typedef struct SessionGUC
+{
+	struct SessionGUC	   *next;
+	config_var_value		saved;
+	struct config_generic  *var;
+} SessionGUC;
+
+/*
+ * Information associated with client session.
+ */
+typedef struct SessionContext
+{
+	uint32          magic;              /* Magic to validate content of session object */
+	uint32			id;					/* session identifier, unique within this backend */
+	/* Memory context used for global session data (instead of TopMemoryContext) */
+	MemoryContext	memory;
+	struct Port*	port;				/* connection port */
+	Oid				tempNamespace;		/* temporary namespace */
+	Oid				tempToastNamespace;	/* temporary toast namespace */
+	SessionGUC	   *gucs;				/* session local GUCs */
+	WaitEventSet   *eventSet;			/* Wait set for the session */
+	int             eventPos;           /* Position of wait socket event for this session */
+	HTAB		   *prepared_queries;	/* Session prepared queries */
+	HTAB		   *portals;			/* Session portals */
+	void		   *userId;				/* Current role state */
+	#define SessionVariable(type,name,init)  type name;
+	#include "storage/sessionvars.h"
+} SessionContext;
+
+#define SessionVariable(type,name,init)  extern type name;
+#include "storage/sessionvars.h"
+
+typedef struct Port Port;
+typedef struct BackendSessionPool
+{
+	MemoryContext	mcxt;
+
+	WaitEventSet   *waitEvents;		/* Set of all sessions sockets */
+	uint32			sessionCount;   /* Number of sessions */
+
+	/*
+	 * Reference to the original port of this backend, created when the
+	 * backend was launched. The session using this port may already be
+	 * terminated, but since the port is allocated in TopMemoryContext,
+	 * its content is still valid and serves as a template for new sessions' ports.
+	 */
+	Port		   *backendPort;
+} BackendSessionPool;
+
+extern PGDLLIMPORT SessionContext		*ActiveSession;
+extern PGDLLIMPORT BackendSessionPool	*SessionPool;
+
 /* Accessor for PGPROC given a pgprocno. */
 #define GetPGProcByNumber(n) (&ProcGlobal->allProcs[(n)])
 
@@ -295,7 +352,7 @@ extern int	StatementTimeout;
 extern int	LockTimeout;
 extern int	IdleInTransactionSessionTimeout;
 extern bool log_lock_waits;
-
+extern bool IsDedicatedBackend;
 
 /*
  * Function Prototypes
@@ -321,6 +378,7 @@ extern void ProcLockWakeup(LockMethod lockMethodTable, LOCK *lock);
 extern void CheckDeadLockAlert(void);
 extern bool IsWaitingForLock(void);
 extern void LockErrorCleanup(void);
+extern uint32 CreateSessionId(void);
 
 extern void ProcWaitForSignal(uint32 wait_event_info);
 extern void ProcSendSignal(int pid);
diff --git a/src/include/storage/sessionvars.h b/src/include/storage/sessionvars.h
new file mode 100644
index 0000000..690c56f
--- /dev/null
+++ b/src/include/storage/sessionvars.h
@@ -0,0 +1,13 @@
+/* SessionVariable(type,name,init) */
+SessionVariable(Oid, AuthenticatedUserId, InvalidOid)
+SessionVariable(Oid, SessionUserId, InvalidOid)
+SessionVariable(Oid, OuterUserId, InvalidOid)
+SessionVariable(Oid, CurrentUserId, InvalidOid)
+SessionVariable(bool, AuthenticatedUserIsSuperuser, false)
+SessionVariable(bool, SessionUserIsSuperuser, false)
+SessionVariable(int, SecurityRestrictionContext, 0)
+SessionVariable(bool, SetRoleIsActive, false)
+SessionVariable(Oid, last_roleid, InvalidOid)
+SessionVariable(bool, last_roleid_is_super, false)
+SessionVariable(struct SeqTableData*, last_used_seq, NULL)
+#undef SessionVariable
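sessionvars.h is a classic X-macro list: proc.h expands it once inside struct SessionContext, giving every session its own copy of these formerly per-process globals, and once at file scope to keep the usual extern declarations; the trailing #undef makes repeated inclusion safe. A session switch then only has to copy the globals in and out. Hypothetical save/restore helpers (not part of the patch) would look like:

static void
SaveSessionVariables(SessionContext *session)
{
#define SessionVariable(type, name, init)  session->name = name;
#include "storage/sessionvars.h"
}

static void
RestoreSessionVariables(SessionContext *session)
{
#define SessionVariable(type, name, init)  name = session->name;
#include "storage/sessionvars.h"
}

Adding a new per-session variable then means touching only this list.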
diff --git a/src/include/tcop/tcopprot.h b/src/include/tcop/tcopprot.h
index 63b4e48..51d130c 100644
--- a/src/include/tcop/tcopprot.h
+++ b/src/include/tcop/tcopprot.h
@@ -31,9 +31,11 @@
 #define STACK_DEPTH_SLOP (512 * 1024L)
 
 extern CommandDest whereToSendOutput;
+
 extern PGDLLIMPORT const char *debug_query_string;
 extern int	max_stack_depth;
 extern int	PostAuthDelay;
+extern pgsocket SessionPoolSock;
 
 /* GUC-configurable parameters */
 
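SessionPoolSock is presumably the Unix socket on which this backend receives client connections redirected by the postmaster; together with SessionPool->waitEvents it drives the transaction-level rescheduling. A simplified sketch of the scheduling step, assuming user_data of each wait event points at the owning SessionContext (the real loop in PostgresMain must also handle new-connection events and failures):

	WaitEvent	ready;

	MyProc->nSessionSchedules++;
	if (WaitEventSetWait(SessionPool->waitEvents, -1 /* block */,
						 &ready, 1, WAIT_EVENT_CLIENT_READ) == 1)
	{
		MyProc->nReadySessions++;
		ActiveSession = (SessionContext *) ready.user_data;
		RestoreSessionGUCs(ActiveSession);
	}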
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index f462eab..338f0ec 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -395,6 +395,12 @@ extern Size EstimateGUCStateSpace(void);
 extern void SerializeGUCState(Size maxsize, char *start_address);
 extern void RestoreGUCState(void *gucstate);
 
+/* Session pooling support functions */
+struct SessionContext;
+extern void RestoreSessionGUCs(struct SessionContext* session);
+extern void ReleaseSessionGUCs(struct SessionContext* session);
+
+
 /* Support for messages reported from GUC check hooks */
 
 extern PGDLLIMPORT char *GUC_check_errmsg_string;
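RestoreSessionGUCs()/ReleaseSessionGUCs() work on the SessionGUC chain declared in proc.h, which records the values of variables a session has SET. One plausible shape for the restore side is a swap walk over that chain; only the int case is spelled out here, the other vartypes being analogous (strings would additionally need ownership handling):

void
RestoreSessionGUCs(struct SessionContext *session)
{
	SessionGUC *sg;

	for (sg = session->gucs; sg != NULL; sg = sg->next)
	{
		if (sg->var->vartype == PGC_INT)
		{
			struct config_int *conf = (struct config_int *) sg->var;
			int			cur = *conf->variable;

			/* install the session's value, keep the displaced one */
			*conf->variable = sg->saved.val.intval;
			sg->saved.val.intval = cur;
		}
	}
}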
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index 668d9ef..e3f2e5a 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -58,6 +58,7 @@ enum config_group
 	CONN_AUTH_SETTINGS,
 	CONN_AUTH_AUTH,
 	CONN_AUTH_SSL,
+	CONN_POOLING,
 	RESOURCES,
 	RESOURCES_MEM,
 	RESOURCES_DISK,
diff --git a/src/include/utils/portal.h b/src/include/utils/portal.h
index e4929b9..69ac10d 100644
--- a/src/include/utils/portal.h
+++ b/src/include/utils/portal.h
@@ -202,6 +202,7 @@ typedef struct PortalData
 
 
 /* Prototypes for functions in utils/mmgr/portalmem.c */
+extern HTAB *CreatePortalsHashTable(MemoryContext mcxt);
 extern void EnablePortalManager(void);
 extern bool PreCommit_Portals(bool isPrepare);
 extern void AtAbort_Portals(void);
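CreatePortalsHashTable() factors the table setup out of EnablePortalManager() so that each SessionContext can own a private portals table (the portals field above), keeping cursor names of co-scheduled sessions from clashing. Presumably the global table pointer is repointed on every session switch; at session creation the sketch would be:

	/* give the new session its own portal table in its own context */
	session->portals = CreatePortalsHashTable(session->memory);
	session->prepared_queries = NULL;	/* built lazily on first PREPARE */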